ACDL 2021 registration fee includes online participation in LOD 2021 and ACAIN 2021

In this edition of ACDL, the registration fee includes online participation in two additional events:

  1. LOD 2021 – The 7th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science – October 5 – 8, 2021, and
  2. ACAIN 2021 – Advanced Course and Symposium on Artificial Intelligence and Neuroscience (co-located with LOD 2021).

One registration for participation in three events: ACDL, LOD & ACAIN!

 

Congratulations to Michael I. Jordan for being awarded the 2021 Ulf Grenander Prize for his foundational contributions to machine learning!

This information appeared in the latest (April 2021) issue of the Notices of the American Mathematical Society:

https://www.ams.org/news?news_id=6500

The Ulf Grenander Prize in Stochastic Theory and Modeling is awarded to Michael I. Jordan for foundational contributions to machine learning (ML), especially unsupervised learning, probabilistic computation, and core theory for balancing statistical fidelity with computation.

One of Jordan’s core contributions to ML is the development of the field of unsupervised learning. In his hands it has moved from a collection of unrelated algorithms to an intellectually coherent field—one largely based on probabilistic inference—that can be used to solve real-world problems.

Unsupervised learning dispenses with the labels and reinforcement signals of the other main branches of machine learning, developing algorithms that reason backwards from data to the patterns that underlie its generative mechanisms. Working from the general perspective of stochastic modeling and Bayesian inference, Jordan augmented the classical analytical distributions of Bayesian statistics with computational entities having graphical, combinatorial, temporal and spectral structure. Furthermore, making use of ideas from convex analysis and statistical physics, he developed new methods for approximate inference that exploited these structures. The resulting algorithms, which are called variational inference, are now a major area of ML and the principal engine behind scalable unsupervised learning.
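To give a flavour of the idea (a standard textbook formulation, not wording from the prize citation): for a latent-variable model p(x, z), variational inference replaces the intractable posterior p(z | x) with the best approximation q_φ(z) from a tractable family by maximizing the evidence lower bound (ELBO),

\log p(x) \;\ge\; \mathbb{E}_{q_\phi(z)}\big[\log p(x, z) - \log q_\phi(z)\big] \;=\; \log p(x) - \mathrm{KL}\big(q_\phi(z)\,\|\,p(z \mid x)\big),

so maximizing the bound over the variational parameters φ is equivalent to minimizing the KL divergence to the true posterior, turning inference into an optimization problem.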

Jordan has also made significant contributions to many of the other important methodologies of ML, such as neural networks, reinforcement learning, and dimensionality reduction. He is known for prescient early work on recurrent neural networks, for the first rigorous theory of convergence of Q-learning (the core dynamic-programming-based framework that underlies reinforcement learning) and for his work on “classification-calibrated loss functions,” which provides a general theory of classification that encompasses boosting and the support vector machine. In recent years, Jordan has turned his attention to optimization theory and Monte Carlo sampling, focusing on nonconvex optimization and sampling in high-dimensional spaces. Overall, his research accomplishments have been broader than any specific technique; rather, they go to the core of what it means for a real-world system to learn, and they herald the emergence of machine learning as a science.
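For readers less familiar with Q-learning, the following is a minimal sketch of the standard tabular update that the convergence theory concerns (an illustrative textbook version, not Jordan's analysis itself; the env object with reset() and step() is a hypothetical gym-like interface assumed only for the example):

    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.99, epsilon=0.1):
        # Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q = np.zeros((n_states, n_actions))
        for _ in range(episodes):
            s = env.reset()                      # assumed gym-like interface
            done = False
            while not done:
                # epsilon-greedy exploration
                if np.random.rand() < epsilon:
                    a = np.random.randint(n_actions)
                else:
                    a = int(np.argmax(Q[s]))
                s_next, r, done = env.step(a)    # assumed to return (next state, reward, done)
                # move Q(s,a) toward the one-step Bellman (dynamic-programming) target
                target = r + (0.0 if done else gamma * np.max(Q[s_next]))
                Q[s, a] += alpha * (target - Q[s, a])
                s = s_next
        return Q

The update nudges Q(s, a) toward the one-step Bellman target r + γ max_a' Q(s', a'), which is exactly the dynamic-programming structure referred to above.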

Response of Michael I. Jordan

My career had its origins in the fields of cognitive psychology and philosophy, where, inspired by logicians such as Bertrand Russell, I was drawn to the problem of finding mathematical expression for aspects of human intelligence, including reasoning and learning. Eventually my work began to take mathematical shape in the study of relationships between computation and inference, where again I found myself in debt to pioneers of the past century, including von Neumann, Kolmogorov, Neyman, Wald, Turing, Blackwell, and Wiener. The problems that have fascinated me have revolved around how humans and machines can make good decisions based on uncertain data, and do so in a computationally-efficient, real-time manner. In studying such problems I’ve made use of a wide range of mathematics, including convex analysis, variational analysis, stochastic differential equations, symplectic integration, partial differential equations, graph theory, and random measures. It’s been exciting to uncover some of the algorithmic consequences of the mathematical structures studied in these fields, while working within the overall framework of inferential statistics.

My first decade as a professor took place at Massachusetts Institute of Technology, and I was well aware of the nearby presence at Brown University of Ulf Grenander and his “pattern theory” school, including the friendly and stimulating welcome to be found in that school from mathematicians such as Stuart Geman and David Mumford. In accepting this award, I wish to indicate my delight and honor to be associated with such individuals and with the intellectual tradition of Grenander’s pattern theory.

Biographical sketch of Michael I. Jordan

Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive and biological sciences. He is known for his work on variational inference, topic models, Bayesian nonparametrics, reinforcement learning, convex and nonconvex optimization, distributed computing systems, and game-theoretic learning. Jordan is a member of the National Academy of Sciences and a member of the National Academy of Engineering. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics, and he has given a Plenary Lecture at the International Congress of Mathematicians. He received the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM/AAAI Allen Newell Award in 2009.

About the prize

The Ulf Grenander Prize in Stochastic Theory and Modeling, awarded every three years, recognizes exceptional theoretical and applied contributions in stochastic theory and modeling. It is awarded for seminal work, theoretical or applied, in the areas of probabilistic modeling, statistical inference, or related computational algorithms, especially for the analysis of complex or high-dimensional systems. The prize was established in 2016 by colleagues of Grenander (1923-2016), who was an influential scholar in stochastic processes, abstract inference, and pattern theory. The 2021 prize was recognized during the 2021 Virtual Joint Mathematics Meetings in January.

It is possible to change the mode of participation (registration): from Onsite → Online, and from Online → Onsite (by 19 June)

We are offering the possibility to change the mode of participation in ACDL 2021: those who register in one mode can change it by 19 June (one month before the course starts).

  • It is possible to register Onsite and then change to Online Registration, receiving a corresponding refund; this decision must be made by 19 June.
  • Similarly, you can register Online and then upgrade to Onsite Registration by paying the difference (via PayPal); this decision must also be made by 19 June.

If you have any questions, please write to the organising committee: acdl@icas.cc

New Keynote Speaker: Prof. Michael I. Jordan, University of California, Berkeley, USA

Prof. Michael I. Jordan, University of California, Berkeley, USA

Pehong Chen Distinguished Professor
Department of EECS
Department of Statistics
AMP Lab
Berkeley AI Research Lab
University of California, Berkeley, USA

Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master’s in Mathematics from Arizona State University, and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive and biological sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering and a member of the American Academy of Arts and Sciences. He is a Fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM. In 2016, Professor Jordan was named the “most influential computer scientist” worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.

https://people.eecs.berkeley.edu/~jordan/

https://en.wikipedia.org/wiki/Michael_I._Jordan

https://scholar.google.com/citations?user=yxUduqMAAAAJ&hl=en

Biographical highlights

  • Professor, University of California, Berkeley, 1998-present
  • Professor, MIT, 1988-1998
  • Honorary Doctorate, Yale University, 2020
  • Honorary Professor, Peking University, 2018-present
  • Distinguished Visiting Professor, Tsinghua University, 2017-2019
  • Chaire d’Excellence, Fondation Sciences Mathématiques de Paris, 2012
  • Member, National Academy of Sciences
  • Member, National Academy of Engineering
  • Member, American Academy of Arts and Sciences
  • Fellow, American Association for the Advancement of Science
  • Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM
  • Elected Member, International Statistical Institute
  • AMS Ulf Grenander Prize in Stochastic Theory and Modeling, 2021
  • IEEE John von Neumann Medal, 2020
  • Plenary Speaker, International Congress of Mathematicians, 2018
  • IJCAI Research Excellence Award, 2016
  • David E. Rumelhart Prize, 2015
  • IMS Neyman Lecture, 2011
  • ACM/AAAI Allen Newell Award, 2009
  • SIAM Activity Group on Optimization Prize, 2008
  • IEEE Neural Networks Pioneer Award, 2006
  • IMS Medallion Lecture, 2004

 

New Keynote Speaker: Naftali Tishby, Hebrew University, Israel

Professor of Computer Science and Computational Neuroscientist at the Hebrew University of Jerusalem.

Ruth & Stan Flinkman Family Endowment Fund Chair in Brain Research.

Awards: Israel Defense Prize, Landau Prize in Computer Science, and the 2019 IBT Award in Mathematical Neuroscience.

He is one of the leaders in machine learning research and computational neuroscience, and his numerous former students serve in key academic and industrial research positions all over the world. Tishby was the founding chair of the new computer-engineering program and a director of the Leibniz Center for Research in Computer Science at the Hebrew University. He received his PhD in theoretical physics from the Hebrew University in 1985 and was a research staff member at MIT and Bell Labs from 1985 to 1991. He has also been a visiting professor at Princeton NECI, the University of Pennsylvania, UCSB, and IBM Research.

He works at the interfaces between computer science, physics, and biology, which provide some of the most challenging problems in today’s science and technology. His group focuses on organizing computational principles that govern information processing in biology, at all levels. To this end, they employ and develop methods that stem from statistical physics, information theory and computational learning theory to analyze biological data and to develop biologically inspired algorithms that can account for the observed performance of biological systems. They aim to find simple yet powerful computational mechanisms that may characterize evolved and adaptive systems, from the molecular level to the whole computational brain and interacting populations.

New Keynote Speaker: Marta Kwiatkowska, Computer Science Dept., University of Oxford, UK

Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford, and Associate Head of the MPLS Division. Prior to this, she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. She holds a BSc/MSc in Computer Science from the Jagiellonian University, an MA from Oxford and a PhD from the University of Leicester. In 2014 she was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm.

Marta Kwiatkowska spearheaded the development of probabilistic and quantitative methods in verification on the international scene and is currently working on safety and robustness for machine learning and AI. She led the development of the PRISM model checker, the leading software tool in the area, widely used for research and teaching, and winner of the HVC 2016 Award. Applications of probabilistic model checking have spanned communication and security protocols, nanotechnology designs, power management, game theory, planning and systems biology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska gave the Milner Lecture in 2012 in recognition of “excellent and original theoretical work which has a perceived significance for practical computing”. In 2018 she became the first female winner of the Royal Society Milner Award and Lecture, and in 2019 she won the BCS Lovelace Medal. She was invited to give keynotes at the LICS 2003, ESEC/FSE 2007 and 2019, ETAPS/FASE 2011, ATVA 2013, ICALP 2016, CAV 2017, CONCUR 2019 and UbiComp 2019 conferences.
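To make the core computation concrete, here is a generic textbook sketch of probabilistic model checking for a reachability property in a discrete-time Markov chain (this is not PRISM's implementation or modelling language; the function name, the tiny example chain and the solver choice are illustrative assumptions):

    import numpy as np

    def reachability_probabilities(P, target, iters=100_000, tol=1e-12):
        # Least fixed point of  x = P x  with x fixed to 1 on target states,
        # computed by value iteration from 0: x[s] converges to the probability
        # of eventually reaching a target state when starting in state s.
        n = P.shape[0]
        x = np.zeros(n)
        x[list(target)] = 1.0
        for _ in range(iters):
            x_new = P @ x
            x_new[list(target)] = 1.0
            if np.max(np.abs(x_new - x)) < tol:
                break
            x = x_new
        return x

    # Toy 3-state chain: from state 0, go to a "fail" state 1 w.p. 0.3,
    # to the target state 2 w.p. 0.5, or stay w.p. 0.2; states 1 and 2 absorb.
    P = np.array([[0.2, 0.3, 0.5],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    print(reachability_probabilities(P, target={2}))   # ~[0.625, 0.0, 1.0]

Tools such as PRISM wrap this kind of numerical solution in a full temporal-logic property language, graph-based precomputation and far more efficient solvers.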

She is a Fellow of the Royal Society, Fellow of ACM, member of Academia Europaea, Fellow of EATCS, Fellow of the BCS and Fellow of the Polish Society of Arts & Sciences Abroad. She serves on the editorial boards of several journals, including Information and Computation, Formal Methods in System Design, Logical Methods in Computer Science, Science of Computer Programming and Royal Society Open Science. Kwiatkowska’s research has been supported by grant funding from EPSRC, ERC, EU, DARPA and Microsoft Research Cambridge, including two prestigious ERC Advanced Grants, VERIWARE (“From software verification to everyware verification”) and FUN2MODEL (“From FUNction-based TO MOdel-based automated probabilistic reasoning for DEep Learning”), and the EPSRC Programme Grant on Mobile Autonomy.