Congratulations to Michael I. Jordan for being awarded the 2021 Ulf Grenander Prize for his foundational contributions to machine learning!
This announcement appeared in the latest (April 2021) issue of the Notices of the American Mathematical Society.
The Ulf Grenander Prize in Stochastic Theory and Modeling is awarded to Michael I. Jordan for foundational contributions to machine learning (ML), especially unsupervised learning, probabilistic computation, and core theory for balancing statistical fidelity with computation. (photo by Justin Bettman)
One of Jordan’s core contributions to ML is the development of the field of unsupervised learning. In his hands it has moved from a collection of unrelated algorithms to an intellectually coherent field—one largely based on probabilistic inference—that can be used to solve real-world problems.
Unsupervised learning dispenses with the labels and reinforcement signals of the other main branches of machine learning, developing algorithms that reason backwards from data to the patterns that underlie its generative mechanisms. Working from the general perspective of stochastic modeling and Bayesian inference, Jordan augmented the classical analytical distributions of Bayesian statistics with computational entities having graphical, combinatorial, temporal, and spectral structure. Furthermore, making use of ideas from convex analysis and statistical physics, he developed new methods for approximate inference that exploited these structures. The resulting algorithms, known collectively as variational inference, are now a major area of ML and the principal engine behind scalable unsupervised learning.
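To make the idea of variational inference concrete, here is a minimal, purely illustrative sketch (not drawn from the article): for a toy conjugate model z ~ N(0,1), x | z ~ N(z,1), the exact posterior after observing x is N(x/2, 1/2). We fit a variational Gaussian q(z) = N(m, v) by gradient ascent on the evidence lower bound (ELBO), which in this model is available in closed form; all function and variable names are assumptions of the sketch.

```python
import math

# Toy model: z ~ N(0,1), x | z ~ N(z,1).  After observing x, the exact
# posterior is N(x/2, 1/2).  We fit a variational Gaussian q(z) = N(m, v)
# by maximizing the ELBO = E_q[log p(x, z)] + H(q).

def elbo(x, m, v):
    # Closed form for this conjugate Gaussian model.
    e_logp = (-math.log(2 * math.pi)
              - 0.5 * (m**2 + v)            # E_q[log N(z; 0, 1)] terms in z
              - 0.5 * ((x - m)**2 + v))     # E_q[log N(x; z, 1)] terms in z
    entropy = 0.5 * math.log(2 * math.pi * math.e * v)
    return e_logp + entropy

def fit(x, steps=2000, lr=0.05):
    m, log_v = 0.0, 0.0                     # optimize log-variance so v > 0
    for _ in range(steps):
        v = math.exp(log_v)
        dm = -m + (x - m)                   # d ELBO / d m
        dv = -1.0 + 1.0 / (2.0 * v)         # d ELBO / d v
        m += lr * dm
        log_v += lr * dv * v                # chain rule: d/d(log v) = v * d/dv
    return m, math.exp(log_v)

m, v = fit(x=2.0)
print(round(m, 3), round(v, 3))             # converges to 1.0 and 0.5,
                                            # i.e. the exact posterior N(1, 1/2)
```

In this toy case the variational optimum recovers the posterior exactly; the power of the framework is that the same ELBO-maximization recipe still applies when the posterior has no closed form.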
Jordan has also made significant contributions to many of the other important methodologies of ML, such as neural networks, reinforcement learning, and dimensionality reduction. He is known for prescient early work on recurrent neural networks, for the first rigorous theory of the convergence of Q-learning (the core dynamic-programming-based framework that underlies reinforcement learning), and for his work on “classification-calibrated loss functions,” which provides a general theory of classification that encompasses boosting and the support vector machine. In recent years, Jordan has turned his attention to optimization theory and Monte Carlo sampling, focusing on nonconvex optimization and sampling in high-dimensional spaces. Overall, his research accomplishments have been broader than any specific technique; rather, they go to the core of what it means for a real-world system to learn, and they herald the emergence of machine learning as a science.
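For readers unfamiliar with Q-learning, the following self-contained sketch shows the basic tabular algorithm on a tiny deterministic chain of states (the environment and all names are inventions of this example, not from the article). The update rule is the dynamic-programming-style step whose convergence the theory mentioned above analyzes.

```python
import random

# Tiny deterministic chain MDP: states 0..3, actions 0 = left, 1 = right.
# Reaching state 3 pays reward +1 and ends the episode.

N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # tabular action-value estimates

for _ in range(500):                        # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: move Q[s][a] toward the bootstrapped target
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(greedy)                               # → [1, 1, 1]: always move right
```

Because the environment is deterministic, the updates converge to the exact optimal values here; the convergence theory referenced above concerns the harder stochastic case.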
Response of Michael I. Jordan
My career had its origins in the fields of cognitive psychology and philosophy, where, inspired by logicians such as Bertrand Russell, I was drawn to the problem of finding mathematical expression for aspects of human intelligence, including reasoning and learning. Eventually my work began to take mathematical shape in the study of relationships between computation and inference, where again I found myself in debt to pioneers of the past century, including von Neumann, Kolmogorov, Neyman, Wald, Turing, Blackwell, and Wiener. The problems that have fascinated me have revolved around how humans and machines can make good decisions based on uncertain data, and do so in a computationally efficient, real-time manner. In studying such problems I’ve made use of a wide range of mathematics, including convex analysis, variational analysis, stochastic differential equations, symplectic integration, partial differential equations, graph theory, and random measures. It’s been exciting to uncover some of the algorithmic consequences of the mathematical structures studied in these fields, while working within the overall framework of inferential statistics.
My first decade as a professor took place at the Massachusetts Institute of Technology, and I was well aware of the nearby presence at Brown University of Ulf Grenander and his “pattern theory” school, including the friendly and stimulating welcome to be found in that school from mathematicians such as Stuart Geman and David Mumford. In accepting this award, I wish to indicate my delight and honor to be associated with such individuals and with the intellectual tradition of Grenander’s pattern theory.
Biographical sketch of Michael I. Jordan
Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, and biological sciences. He is known for his work on variational inference, topic models, Bayesian nonparametrics, reinforcement learning, convex and nonconvex optimization, distributed computing systems, and game-theoretic learning. Jordan is a member of the National Academy of Sciences and a member of the National Academy of Engineering. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics, and he has given a Plenary Lecture at the International Congress of Mathematicians. He received the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM/AAAI Allen Newell Award in 2009.
About the prize
The Ulf Grenander Prize in Stochastic Theory and Modeling, awarded every three years, recognizes exceptional theoretical and applied contributions in stochastic theory and modeling. It is awarded for seminal work, theoretical or applied, in the areas of probabilistic modeling, statistical inference, or related computational algorithms, especially for the analysis of complex or high-dimensional systems. The prize was established in 2016 by colleagues of Grenander (1923–2016), who was an influential scholar in stochastic processes, abstract inference, and pattern theory. The 2021 prize was presented during the Virtual Joint Mathematics Meetings in January 2021.