Prestigious AI Series Wraps Up with Lecture by the Father of Machine Learning

Vladimir Vapnik, a professor at Columbia University’s Center for Computational Learning Systems, and Professor Anna Choromanska

Vladimir Vapnik, a professor at Columbia University’s Center for Computational Learning Systems and widely regarded as the father of machine learning, explained critical aspects of machine pedagogy and statistical learning theory in a May 4, 2018, presentation at the NYU Tandon School of Engineering.

Vapnik’s presentation was the finale of a successful new seminar series, Modern Artificial Intelligence, organized by Professor Anna Choromanska and hosted by NYU Tandon’s Department of Electrical and Computer Engineering.

Credited with developing the first support vector machine (SVM) algorithm, which is used in machine learning to analyze text, images, and other types of content, Vapnik kicked off his lecture with the assertion that learning machines don’t need so-called “big data” as much as they need smart ways of analyzing data. To that end, he argued, strong pedagogical principles are just as important for teaching machines as they are for teaching people.
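For readers unfamiliar with the technique, an SVM learns a function that separates two classes with maximum margin. The following is a minimal pure-Python sketch of a linear SVM trained by sub-gradient descent on the hinge loss; the toy data, learning rate, and other hyperparameters are illustrative assumptions, and this is not Vapnik’s original quadratic-programming formulation.

```python
# Minimal linear SVM via sub-gradient descent on the regularized hinge loss.
# All data and hyperparameters (lr, lam, epochs) are illustrative assumptions.

def train_svm(points, labels, lr=0.1, lam=0.01, epochs=200):
    """Learn w, b minimizing lam*||w||^2 + mean hinge loss over the data."""
    dim = len(points[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):  # labels y are in {-1, +1}
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point violates the margin: hinge term is active
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only the regularizer contributes to the gradient
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """Classify x by the sign of the learned linear function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data: class +1 upper-right, class -1 lower-left.
X = [(2, 2), (3, 3), (2, 3), (-2, -2), (-3, -1), (-1, -3)]
Y = [1, 1, 1, -1, -1, -1]
w, b = train_svm(X, Y)
```

On separable data like this, the learned hyperplane classifies every training point correctly while the regularizer keeps the margin wide.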

He made the point with a Japanese proverb that predates the Information Age by several centuries: “Better than a thousand days of diligent study is one day with a great teacher.”

In his lecture, Vapnik described new principles of teacher-student interactions that can be used by intelligent machines to speed up the learning process.

“Every problem of natural science contains three elements,” he said. “The setting of the problem in mathematical terms; the resolution of the problem, which suggests a mathematical solution; and proofs showing that the resolution leads to the solution of the mathematical problem. I will try to convince you that the classical approach to the setting of the problem in machine learning was too primitive.”

Vapnik, the author of the books Statistical Learning Theory and The Nature of Statistical Learning Theory, explained the differences between classical approaches to machine learning and a new paradigm, Learning Using Statistical Invariants (LUSI). While the classical approach is purely data-driven and constructs a classification or regression function to minimize expected loss, the LUSI paradigm is both data- and intelligence-driven: it uses the data together with teacher input to build a classification or regression function subject to invariants specific to the problem, then minimizes the expected loss in a way that preserves those invariants.
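To make the invariant idea concrete, the sketch below assumes a teacher-supplied predicate psi(x) and requires a candidate function f to match the predicate-weighted average of the true labels. The function names and the one-step correction are illustrative assumptions for exposition, not Vapnik’s exact LUSI construction.

```python
# A sketch of the statistical-invariant idea: the teacher supplies a
# predicate psi(x), and the learned function f must satisfy
#   (1/n) * sum psi(x_i) * f(x_i)  ==  (1/n) * sum psi(x_i) * y_i.
# Names and the scalar correction below are illustrative assumptions.

def invariant_gap(psi, f_vals, xs, ys):
    """Difference between the predicate-weighted averages of f and y."""
    n = len(xs)
    lhs = sum(psi(x) * fx for x, fx in zip(xs, f_vals)) / n
    rhs = sum(psi(x) * y for x, y in zip(xs, ys)) / n
    return lhs - rhs

def enforce_invariant(psi, f_vals, xs, ys):
    """Shift f along psi so the invariant holds exactly after one step."""
    gap = invariant_gap(psi, f_vals, xs, ys)
    norm = sum(psi(x) ** 2 for x in xs) / len(xs)
    return [fx - gap * psi(x) / norm for x, fx in zip(xs, f_vals)]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]        # teacher-provided labels
f_vals = [0.1, 0.3, 0.6, 0.8]    # some purely data-driven estimate of f
psi = lambda x: x                # teacher's predicate: the first moment

corrected = enforce_invariant(psi, f_vals, xs, ys)
```

After the correction, the predicate-weighted average of the function matches that of the labels exactly, which is the sense in which a teacher’s “smart” predicate constrains learning beyond what raw data provides.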

Besides Vapnik, the AI Seminar Series featured Yann LeCun, a member of the faculty of NYU, director of Facebook AI Research, and the Series’ inaugural speaker; Yoshua Bengio, head of the Montreal Institute for Learning Algorithms (MILA); and Stefano Soatto, founding director of the UCLA Vision Lab.

The series attracted an audience of some 1,000 students, faculty, and researchers, with hundreds attending live in NYU Tandon’s Pfizer Auditorium, and many viewing remotely from NYU Abu Dhabi, NYU Shanghai, the Indian Institute of Technology, and elsewhere. The audience included attendees from Columbia University, Harvard, Yale, Rutgers, Princeton, and companies like Microsoft, Google, and the IBM T.J. Watson Research Center, according to Choromanska, the assistant professor of electrical and computer engineering who spearheaded the events.

“I think that the success of the series is measured in terms of how students and researchers responded to it,” Choromanska said. “I am proud that we also allocated time for students to meet with speakers; this makes tangible for them the figures they know from AI news and media, or whom they know to be extremely accomplished.”