Biography
Davide Bacciu is an Associate Professor at the Computer Science Department, University of Pisa, where he heads the Pervasive Artificial Intelligence laboratory (pai.di.unipi.it). The core of his research is on Machine Learning (ML) and deep learning models for structured data processing, including sequences, trees and graphs. He is the PI of an Italian National project on ML for structured data and the Coordinator of the H2020-RIA project TEACHING (2020-2022). He is an IEEE Senior Member, the founder and chair of the IEEE Task Force on learning for structured data (www.learning4graphs.org), a member of the IEEE NN Technical Committee and of the IEEE CIS Task Force on Deep Learning. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. Since 2017 he has been the Secretary of the Italian Association for Artificial Intelligence (AI*IA). He coordinates the task force on Bioinformatics and Drug Repurposing of the CLAIRE-COVID-19 European initiative (covid19.claire-ai.org).
Lectures
Graphs are an effective representation for complex information, providing a straightforward means to bridge numerical data and symbolic relationships. The lecture will provide an easy-paced introduction to the lively field of deep learning for graphs, covering foundational aspects and consolidated deep learning models for graph-structured data, including spectral and spatial convolutional networks for graphs, contextual graph processing and attention-based methods.
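To make the spatial graph-convolution idea mentioned above concrete, here is a minimal numpy sketch of a single layer in the widely used symmetrically normalised form; the function name, the toy graph and the random weights are purely illustrative and not material from the lecture itself.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One spatial graph-convolution layer: aggregate neighbour features
    through the symmetrically normalised adjacency with self-loops,
    then apply a learned linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^{-1/2} (A+I) D^{-1/2}
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU(A_norm H W)

# Toy graph: a path on 3 nodes, 2 input features, 4 hidden units
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)   # node feature matrix
W = np.random.randn(2, 4)   # learnable weights
print(gcn_layer(A, H, W).shape)  # (3, 4): one embedding per node
```

Stacking several such layers lets information propagate along paths in the graph, which is the common core of the spatial convolutional models the lecture surveys.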
The lecture will build on the content of the first seminar on deep graph network fundamentals, to introduce some recent generative approaches for dealing with graph-structured data. We will also devote some time to single out open research challenges, applications, and interesting directions of future research in the field.
Topics
Artificial Intelligence in the Life Sciences, Machine Learning in Chemistry, Drug Discovery
Biography
Jürgen Bajorath received a diploma and PhD in biochemistry (1988) from the Free University in West Berlin. He was a postdoctoral fellow with Arnie Hagler, Biosym Technol./Agouron Inst., in San Diego, where he began to work on computational methods for bioinformatics and drug design.
From 1990 to 2004, Jürgen held positions at the Bristol-Myers Squibb Pharmaceutical Research Institute, New Chemical Entities, and the University of Washington in Seattle. During this time, his work increasingly focused on bio- and cheminformatics.
In 2004, Jürgen was appointed Professor and Chair of the newly formed Department of Life Science Informatics at the University of Bonn. He also continues to be an Affiliate Professor in the Department of Biological Structure at the University of Washington.
Research of Jürgen’s group currently encompasses the development of computational methods for medicinal chemistry and drug discovery and machine learning in chemistry.
From 2008 to 2020, Jürgen served as the computational editor of the Journal of Medicinal Chemistry. In 2021, he was appointed editor-in-chief of the new journal Artificial Intelligence in the Life Sciences launched by Elsevier.
Recent honors include the 2015 Herman Skolnik Award and the 2018 National Award for Computers in Chemical and Pharmaceutical Research of the American Chemical Society.
In 2021, the German Research Foundation approved a new Core Area Program for ‘Molecular Machine Learning in Chemistry’ that was conceptualized by Frank Glorius (University of Münster), Karsten Reuter (Max-Planck Institute Berlin), and Jürgen.
Lectures
Topics
Artificial Intelligence, Deep Learning
Biography
Pierre Baldi is a Chancellor’s Professor of computer science at the University of California, Irvine, and the director of its Institute for Genomics and Bioinformatics.
Pierre Baldi received his Bachelor of Science and Master of Science degrees at the University of Paris, in France. He then obtained his Ph.D. degree in mathematics at the California Institute of Technology in 1986 supervised by R. M. Wilson.
From 1986 to 1988, he was a postdoctoral fellow at the University of California, San Diego. From 1988 to 1995, he held faculty and member of the technical staff positions at the California Institute of Technology and at the Jet Propulsion Laboratory, where he was given the Lew Allen Award for Research Excellence in 1993. He was CEO of a start-up company called Net-ID from 1995 to 1999 and joined the University of California, Irvine in 1999.
Baldi’s research interests include artificial intelligence, statistical machine learning, and data mining, and their applications to problems in the life sciences in genomics, proteomics, systems biology, computational neuroscience, and, recently, deep learning.
Baldi has over 250 publications in his field of research and five books, including:
- “Bioinformatics: The Machine Learning Approach” (MIT Press, 1998; 2nd edition, 2001, ISBN 978-0262025065), a worldwide best-seller
- “Modeling the Internet and the Web: Probabilistic Methods and Algorithms”, with Paolo Frasconi and Padhraic Smyth (Wiley, 2003)
- “The Shattered Self: The End of Natural Evolution” (MIT Press, 2001)
- “DNA Microarrays and Gene Regulation”, with G. Wesley Hatfield (Cambridge University Press, 2002)
- “Deep Learning in Science” (Cambridge University Press, 2021)
Baldi is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the AAAS, the IEEE, and the Association for Computing Machinery (ACM). He is also the recipient of the 2010 Eduardo R. Caianiello Prize for Scientific Contributions to the field of Neural Networks and a fellow of the International Society for Computational Biology (ISCB).
Deep learning algorithm solves Rubik’s Cube faster than any human.
https://news.uci.edu/2019/07/15/uci-researchers-deep-learning-algorithm-solves-rubiks-cube-faster-than-any-human/
AI solves Rubik’s Cube in one second
https://www.bbc.com/news/technology-49003996
https://scholar.google.com/citations?user=RhFhIIgAAAAJ&hl=it
Lectures
Topics
Information Theory, Mathematics for Machine Learning
Biography
Roman Belavkin is a Reader in Informatics at the Department of Computer Science, Middlesex University, UK. He holds an MSc in Physics from Moscow State University and a PhD in Computer Science from the University of Nottingham, UK. In his PhD thesis, Roman combined cognitive science and information theory to study the role of emotion in decision-making, learning and problem solving. His main research interests are in the mathematical theory of dynamics of information and optimization of learning, adaptive and evolving systems. He used information value theory to give novel explanations of some common decision-making paradoxes. His work on optimal transition kernels showed the non-existence of optimal deterministic strategies in a broad class of problems with information constraints.
Roman’s theoretical work on optimal parameter control in algorithms has found applications in computer science and biology. From 2009, Roman led a collaboration between four UK universities involving mathematics, computer science and experimental biology on optimal mutation rate control, which led to the discovery in 2014 of mutation rate control in bacteria (reported in Nature Communications http://doi.org/skb and PLOS Biology http://doi.org/cb9s). He also contributed to research projects on neural cell-assemblies, independent component analysis and anomaly detection, such as cyber attacks.
Lectures
Abstract: TBA
Abstract: TBA
Abstract: TBA
Topics
Critical Data Science, Data Science, Ethics and AI, Privacy/Data Protection, Discrimination and Fairness
Biography
Bettina Berendt is Professor for Internet and Society at the Faculty of Electrical Engineering and Computer Science at Technische Universität Berlin, Germany, Director of the Weizenbaum Institute for the Networked Society, Germany, and guest professor at KU Leuven, Belgium. She previously held positions as professor in the Artificial Intelligence group (Department of Computer Science at KU Leuven) and in the Information Systems group (School of Business and Economics at Humboldt-Universität zu Berlin). Her research centres on data science and critical data science, including privacy/data protection, discrimination and fairness, and ethics and AI, with a focus on textual and web-related data.
Lectures
Violations of privacy as well as unfairness and discrimination have been highlighted as two of the biggest ethical challenges for AI. At the same time, computer scientists have proposed a large number of methods for enhancing (data) privacy and fairness. Can these strategies support one another, or is there a tradeoff between privacy and fairness? And how can interdisciplinary perspectives inspire, enhance or correct computational ones? In this talk, I will present several answers that have been given to these questions and discuss the assumptions that lead to “support” or “tradeoff” results. Specific attention will be given to the use of obfuscation for enhancing fairness, and the larger question of whether, when or how information hiding is fair (or not).
Topics
Quantum Machine Learning
Biography
Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe & Seth Lloyd, “Quantum machine learning”, Nature, volume 549, pages 195–202 (2017).
After his undergraduate studies (Bachelor of Science from Portland State University), Biamonte was employed as one of the world’s first quantum software programmers at D-Wave Systems Inc. in Vancouver B.C., Canada (2004-2007). His subsequent Doctorate from Oxford (2010) earned a Chancellor’s Award. Biamonte worked as a research fellow at Harvard and as part of a joint Oxford/Singapore postdoctoral program before joining the Institute for Scientific Interchange (ISI Foundation) in Torino, Italy, to direct the institute’s Quantum Science Division (2012-2017). Biamonte joined Skoltech in 2017, while Skoltech’s Laboratory for Quantum Information Processing was officially founded in 2019 with Biamonte appointed Head of Laboratory. Biamonte’s research focuses broadly on the theory and implementation of modern quantum algorithms and employs various mathematical techniques, particularly group-algebraic techniques, tensor networks and the formal theory of computation and information. Biamonte is best known for several results:
- A 2019 proof that variational quantum computation can be used as a computationally universal model of quantum computation [arXiv:1903.04500].
- A definition given in 2016 of a spectral graph function which provably satisfies both (i) the definition of an entropy and (ii) subadditivity [with Domenico in PRX 6, 041062 (2016)].
- A 2015 proof that #P-hard counting problems (and hence 2, 3-SAT decision problems) can be solved efficiently when their tensor network expression has at most O(log c) COPY-tensors and polynomial bounded fan-out [with Turner and Morton in J. Stat. Phys. 160, 1389 (2015)].
- A 2008 proof that the two-body model Hamiltonian with tunable XX, ZZ terms is (i) computationally universal for adiabatic quantum computation and (ii) admits a QMA-complete ground state energy decision problem [with Love in PRA 78, 012352 (2008)]
Biamonte is also credited with pioneering work developing quantum algorithms for electronic structure calculations and, more recently, with work uniting quantum information processing with machine learning. Biamonte has further provided theoretical support to enable milestone quantum information processing experimental demonstrations. The list includes the first quantum algorithmic demonstration of quantum chemistry [Nature Chemistry 2, 106 (2009)] (linear optics), the first experimental implementation of optimal control [Nature Communications 5, 3371 (2014)] (creating a quantum random access memory using NV-centers in diamond), as well as the first demonstration of neural network quantum state tomography on actual experimental data [npj Quantum Information 6:20 (2020)] (linear optics).
International Awards
- Usern Medal Laureate in Formal Sciences (2018)
- Shapiro Lecture in Mathematical Physics, Pennsylvania State University (2014)
- Invited lifelong member (from 2013) of the Foundational Questions Institute (FQXi)
- Longuet-Higgins Paper Prize [jointly with JD Whitfield and AA Guzik for Molecular Physics 109, 735 (2011)]
Lectures
Lecture 1: Introduction to variational quantum algorithms: optimisation, machine learning and universality of the variational model (1/2)
Modern quantum processors enable the execution of short quantum circuits. These quantum circuits can be iteratively tuned — trained — to minimise an objective function and solve problem instances. This is known as variational quantum computation: local measurements are repeated and modified to determine the expected value of an effective Hamiltonian. Whereas solving practical problems still appears out of reach, many questions of theoretical interest surround the variational model. I will provide a tutorial introduction to this model and also some recent limitations found in collaboration, including reachability deficits in QAOA (i.e. increasing problem density — the ratio of constraints to variables — induces under-parameterisation at fixed circuit depth), parameter saturations in QAOA (that layer-wise training plateaus) and the existence of abrupt trainability transitions (that a critical number of layers exists where any fewer layers results in no training for certain objective functions). I will also explain some more forward-looking findings, including the concentration of parameters in QAOA (showing a problem instance independence of optimised circuit parameters) and my proof that the variational model is, in theory, a universal model of quantum computation.
Lesson 1
1.0 Survey of modern results
1.1. List of experimental demonstrations of variational quantum computation and machine learning
1.2 List of theoretical milestones
2.0 Introduction to variational quantum computation
2.1 Variational state-space, penalty function cardinality and Clifford invariance
2.2 QAOA
2.2.1 exact solution (Grover QAOA)
2.2.2 Parameter concentrations
2.2.3 MAX 3-SAT and reachability deficits
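The variational loop described in the abstract — a classical optimiser repeatedly adjusting circuit parameters to minimise the expected value of an effective Hamiltonian — can be sketched in a few lines for a single-qubit toy problem. The choice of H = Z, the Ry ansatz and the finite-difference optimiser are illustrative assumptions for this sketch (a numpy simulation standing in for measurements on hardware), not material from the lectures.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy effective Hamiltonian

def ansatz(theta):
    """Single-qubit variational state Ry(theta)|0> = (cos t/2, sin t/2)."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expected value <psi|H|psi>, which real hardware would estimate
    from repeated local measurements of the trained circuit."""
    psi = ansatz(theta)
    return float(psi @ Z @ psi)

# Classical outer loop: finite-difference gradient descent on theta
theta, lr, eps = 0.3, 0.4, 1e-4
for _ in range(200):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(round(energy(theta), 4))  # approaches the ground energy -1.0
```

Here the analytic energy is cos(theta), so the loop drives theta toward pi and the estimated energy toward the true ground energy -1; deeper parameterised circuits and harder Hamiltonians follow the same train-measure-update pattern.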
Lesson 2
3.0 Variational quantum computation revisited
3.1 Telescoping construction and stability lemma
3.2 Solving linear systems by variational algorithms
3.3 Universality of the variational model
4.0 Open problems
Topics
Machine learning
Biography
Christopher Michael Bishop FRS FRSE FREng is the Laboratory Director at Microsoft Research Cambridge, Professor of Computer Science at the University of Edinburgh and a Fellow of Darwin College, Cambridge.
He is the author of the book Pattern Recognition and Machine Learning (PRML).
Bishop obtained a Bachelor of Arts degree in Physics from St Catherine’s College, Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory supervised by David Wallace and Peter Higgs.
Bishop’s research investigates machine learning by allowing computers to learn from data and experience.
Awards and Honours
Bishop was awarded the Tam Dalyell prize in 2009 and the Rooke Medal from the Royal Academy of Engineering in 2011. He gave the Royal Institution Christmas Lectures in 2008 and the Turing Lecture in 2010. Bishop was elected a Fellow of the Royal Academy of Engineering (FREng) in 2004, a Fellow of the Royal Society of Edinburgh (FRSE) in 2007, and a Fellow of the Royal Society (FRS) in 2017.
https://en.wikipedia.org/wiki/Christopher_Bishop
https://scholar.google.co.uk/citations?user=gsr-K3ADUvAC&hl=en
Lectures
Abstract: TBA
Abstract: TBA
Topics
Bayesian & causal reasoning, graphical models and variational inference
Biography
Senior Staff Research Scientist in Machine Learning at DeepMind.
She received a Diploma di Laurea in Mathematics from University of Bologna and a PhD in Machine Learning from École Polytechnique Fédérale de Lausanne (IDIAP Research Institute). Before joining DeepMind, she worked in the Empirical Inference Department at the Max-Planck Institute for Intelligent Systems (Prof. Dr. Bernhard Schölkopf), in the Machine Intelligence and Perception Group at Microsoft Research Cambridge (Prof. Christopher Bishop) and in the Statistical Laboratory at the University of Cambridge (Prof. Philip Dawid).
Her research interests include Bayesian & causal reasoning, graphical models, variational inference, time-series models, deep learning, and ML fairness and bias.
Lectures
Abstract (TBA)
Abstract (TBA)
Topics
AI, Meta-Search, Machine Reading, Open Information Extraction
Biography
Dr. Oren Etzioni is Chief Executive Officer at AI2. He has been Professor Emeritus at the University of Washington since October 2020 and a Venture Partner at the Madrona Venture Group since 2000. His awards include Seattle’s Geek of the Year (2013), and he has founded or co-founded several companies, including Farecast (acquired by Microsoft). He has written over 100 technical papers, as well as commentary on AI for The New York Times, Wired, and Nature. He helped to pioneer meta-search, online comparison shopping, machine reading, and Open Information Extraction.
Lectures
Abstract: TBA
Topics
Learning with constraints, Vision
Biography
Marco Gori received the Ph.D. degree in 1990 from Università di Bologna, Italy, working partly at the School of Computer Science, McGill University, Montreal. In 1992, he became an Associate Professor of Computer Science at Università di Firenze and, in November 1995, he joined the Università di Siena, where he currently leads the Siena Artificial Intelligence Lab (SAILAB), http://sailab.diism.unisi.it/. Professor Gori is primarily interested in machine learning with applications to pattern recognition, Web mining, game playing, and bioinformatics. He has recently published the monograph “Machine Learning: A Constraint-Based Approach” (Morgan Kaufmann, 560 pp., 2018), which contains a unified view of his approach. His pioneering role in neural networks has been emerging especially from the recent interest in Graph Neural Networks, which he contributed to introducing in the seminal paper “The Graph Neural Network Model” (IEEE Transactions on Neural Networks, 2009). Professor Gori has been the chair of the Italian Chapter of the IEEE Computational Intelligence Society and the President of the Italian Association for Artificial Intelligence. He is a Fellow of IEEE, a Fellow of EurAI, and a Fellow of IAPR. He was one of the first people involved in the European Artificial Intelligence initiative CLAIRE, and he is currently a Fellow of the machine learning association ELLIS. He is on the scientific committee of ICAR-CNR and is the President of the Scientific Committee of FBK-ICT. Dr. Gori currently holds an international 3IA Chair at the Université Côte d’Azur.
Lectures
Deep Learning To See: Towards New Foundations of Computer Vision
(with Alessandro Betti and Stefano Melacci)
Deep learning has revolutionized computer vision and visual perception. Amongst others, the great representational power of convolutional neural networks and the elegance and efficiency of Backpropagation have played a crucial role, and their popularity is very well deserved. However, as yet, most significant results are still based on a truly artificial supervised learning communication protocol, which in effect sets up a battlefield for computers and is far from natural. In these lectures we argue that, by relying on supervised learning, we have been working on a problem that is remarkably different from the one offered by Nature. We claim that motion invariance is in fact the only process in charge of conquering visual skills. Building on the representational capabilities of deep architectures and on learning algorithms still related to Backpropagation, in these lectures we show that massive image supervision can in fact be replaced with the natural communication protocol arising from living in a visual environment, just as animals do. This leads us to formulate learning not through the accumulation of labelled visual databases, but simply by allowing visual agents to live in their own visual environments. We show that learning arises from motion invariance principles that make it possible to gain object identity as well as its affordance. We introduce a vision field theory for expressing those motion invariance principles, and we highlight the indissoluble pair of visual features and their conjugated velocities, thus extending the classic brightness invariance principle of optical flow estimation.
The emergence of visual features in the natural framework of visual environments is given a systematic foundation by establishing information-based laws that naturally enable deep learning processes. The vision field theory proposed here might offer interesting support to visual perception and neuroscience, while opening the doors to massive applications in computer vision, thus removing the need for labelled visual databases.
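For reference, the classic brightness invariance (brightness constancy) principle that these lectures extend states that the brightness I(x, y, t) of a point is conserved along the motion field v = (v_x, v_y), yielding the standard optical-flow constraint:

```latex
\frac{dI}{dt} \;=\; \frac{\partial I}{\partial t} \;+\; \nabla I \cdot v \;=\; 0
```

The lectures generalise this single scalar conservation law to conjugated pairs of learned visual features and their velocities.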
Topics
Knowledge Processing, Logic, AI
Biography
Georg Gottlob is a Royal Society Research Professor and a Professor of Informatics at Oxford University and at TU Wien. At Oxford he is a Fellow of St John’s College. His interests include knowledge representation, logic and complexity, and database and Web querying. He has received various awards, among which the Wittgenstein Award (Austria) and the Ada Lovelace Medal (UK). He is a Fellow of the Royal Society, of the Austrian Academy of Science, of the Leopoldina National Academy of Sciences (Germany), and of the Academia Europaea. He was a founder of Lixto, a company specialised in semi-automatic web data extraction which was acquired by McKinsey in 2013. Gottlob was awarded an ERC Advanced Investigator’s Grant for the project “DIADEM: Domain-centric Intelligent Automated Data Extraction Methodology”. Based on the results of this project, he co-founded Wrapidity Ltd, a company that specialised in fully automated web data extraction, which was acquired in 2016 by Meltwater. He recently co-founded DeepReason.ai, which puts the logic-based VADALOG system into practice and applies it with banks and other corporate customers.
Lectures
Nowadays, when people speak about AI, they usually mean machine learning. Machine learning, in particular deep learning, is a powerful method for generating a type of knowledge that could be classified as self-learned knowledge. We humans, on the other hand, make heavy use of two types of knowledge: (i) self-learned knowledge and (ii) transferable knowledge learned or generated by others. If you are reading this and/or attending the talk, it is mainly because of this second type of knowledge. In these lectures, I will argue that the combination of both types of knowledge is needed for more powerful and fair automated decision making or decision support, and thus for the next level of AI. I will discuss various requirements for reasoning formalisms towards this purpose. After discussing logical languages for knowledge representation and reasoning, I will briefly introduce the VADALOG system developed at Oxford and give an outlook on my recent project RAISON DATA funded by the Royal Society.
Topics
Machine Learning, Computer Science, Statistics, Artificial Intelligence, Optimization
Biography
Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master’s in Mathematics from Arizona State University, and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive and biological sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering and a member of the American Academy of Arts and Sciences. He is a Fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM. In 2016, Professor Jordan was named the “most influential computer scientist” worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.
https://people.eecs.berkeley.edu/~jordan/
https://en.wikipedia.org/wiki/Michael_I._Jordan
https://scholar.google.com/citations?user=yxUduqMAAAAJ&hl=en
Lectures
Topics
Probabilistic reasoning, Deep Learning, Safety and Trust for Mobile Autonomous Robots
Biography
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford, and Associate Head of MPLS. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. She holds a BSc/MSc in Computer Science from the Jagiellonian University, MA from Oxford and a PhD from the University of Leicester. In 2014 she was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm.
Marta Kwiatkowska spearheaded the development of probabilistic and quantitative methods in verification on the international scene and is currently working on safety and robustness for machine learning and AI. She led the development of the PRISM model checker, the leading software tool in the area, widely used for research and teaching and winner of the HVC 2016 Award. Applications of probabilistic model checking have spanned communication and security protocols, nanotechnology designs, power management, game theory, planning and systems biology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska gave the Milner Lecture in 2012 in recognition of “excellent and original theoretical work which has a perceived significance for practical computing”. She is the first female winner of the 2018 Royal Society Milner Award and Lecture, and won the BCS Lovelace Medal in 2019. Marta Kwiatkowska was invited to give keynotes at the LICS 2003, ESEC/FSE 2007 and 2019, ETAPS/FASE 2011, ATVA 2013, ICALP 2016, CAV 2017, CONCUR 2019 and UbiComp 2019 conferences.
She is a Fellow of the Royal Society, Fellow of the ACM, member of Academia Europaea, Fellow of EATCS, Fellow of the BCS and Fellow of the Polish Society of Arts & Sciences Abroad. She serves on the editorial boards of several journals, including Information and Computation, Formal Methods in System Design, Logical Methods in Computer Science, Science of Computer Programming and Royal Society Open Science. Kwiatkowska’s research has been supported by grant funding from EPSRC, ERC, EU, DARPA and Microsoft Research Cambridge, including two prestigious ERC Advanced Grants, VERIWARE (“From software verification to everyware verification”) and FUN2MODEL (“From FUNction-based TO MOdel-based automated probabilistic reasoning for DEep Learning”), and the EPSRC Programme Grant on Mobile Autonomy.
Lectures
Topics
Data Science, Optimization, Networks
Biography
Panos M. Pardalos is a Distinguished Professor of Industrial and Systems Engineering at the University of Florida, where he also holds the Paul and Heidi Brown Preeminent Professorship in Industrial and Systems Engineering. He is an affiliated faculty member of the Computer and Information Science Department, the Hellenic Studies Center, and the Biomedical Engineering Program, and the director of the Center for Applied Optimization. Pardalos is a world-leading expert in global and combinatorial optimization. His recent research interests include network design problems, optimization in telecommunications, e-commerce, data mining, biomedical applications, and massive computing.
https://en.wikipedia.org/wiki/Panos_M._Pardalos
https://scholar.google.com/citations?user=4e_KEdUAAAAJ&hl=en
Lectures
Topics
Science of Autonomy, AI & ML, Robotics, Systems & Networking
Biography
Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences. She is a senior visiting fellow at MITRE Corporation. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.
Awards
Woman in STEM Award, Wheaton College, 2018
Member, American Academy of Arts and Sciences, 2017
Robotic Industries Association: Joseph F Engelberger Robotics Award for Education, 2017
Member of the National Academy of Engineering (NAE)
Fellow of the Association for Computing Machinery (ACM)
Fellow of the Institute of Electrical and Electronics Engineers (IEEE)
Fellow of the Association for the Advancement of Artificial Intelligence (AAAI)
MacArthur Fellow, Class of 2002
Andrew (1956) and Erna Viterbi Chair
Best Paper Award Finalist, ICRA 2015
Best Manipulation Paper Finalist, ICRA 2015
Best Paper Award, ROBIO 2014
Best Paper Award, IROS 2014
Best Presented Paper, Mobicom 2014
Best Paper, Robotics: Science and Systems 2014
Curiosity Award, Cambridge Science Festival
1st Place Hardware & Curriculum Categories for Seg robot at the 2014 AFRON Robot
Best Entertainment Robots and Systems Paper, IROS 2013
Most Societally Beneficial Video, IJCAI 2013
Best Automation Paper Finalist, ICRA 2013
Best Robot Actor for Seraph, Robot Film Festival 2012
Best Paper Finalist, BIOROB 2012
Best Paper Award, ACM Sensys 2004
http://danielarus.csail.mit.edu/index.php/about-daniela-2/press-2/
http://danielarus.csail.mit.edu
https://youtu.be/CBbiDBJSNXM
Lectures
Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis into the control loop of an autonomous car. I will describe how Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect its interactions with others by quantifying the degree of selfishness or altruism, can be integrated into decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.
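The SVO weighting mentioned above can be sketched as a simple combined utility. This is a minimal illustrative example, not the lab's actual implementation: it assumes the common angular parameterization of SVO (0° purely egoistic, 45° prosocial, 90° purely altruistic), and the function name is hypothetical.

```python
import math

def svo_utility(own_reward: float, others_reward: float, svo_deg: float) -> float:
    """Blend an agent's own reward with others' rewards via an SVO angle.

    svo_deg = 0   -> egoistic (cares only about own reward)
    svo_deg = 45  -> prosocial (weighs both roughly equally)
    svo_deg = 90  -> fully altruistic (cares only about others)
    """
    svo = math.radians(svo_deg)
    return math.cos(svo) * own_reward + math.sin(svo) * others_reward

# An egoistic driver ignores the other agents' reward entirely:
print(svo_utility(1.0, 0.5, 0.0))               # 1.0
# A prosocial driver trades off its own reward against others':
print(round(svo_utility(1.0, 0.5, 45.0), 3))    # 1.061
```

A planner maximizing this blended utility over candidate trajectories, with the SVO angle estimated online for each surrounding driver, is one way such social preferences can enter the decision-making loop.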
Topics
Machine Learning, Probabilistic Methods, Ethical Machine Learning
Biography
Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and remains an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany) until the end of the year.
She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where she is part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.
Prior to this, she held a German Humboldt Post-Doctoral Fellowship and a “Minerva Fast Track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and her MSc in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).
Lectures
Topics
Machine Learning for Medicine, Data Science and Decisions, Artificial Intelligence
Biography
Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge, a Fellow at The Alan Turing Institute in London, and a Chancellor’s Professor at UCLA.
Mihaela was elected IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.
Mihaela’s work has also led to 35 USA patents (many widely cited and adopted in standards) and 45+ contributions to international standards for which she received 3 International ISO (International Organization for Standardization) Awards.
In 2019, she was identified by National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.
Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.
In addition to leading the van der Schaar Lab, Mihaela is founder and director of the Cambridge Centre for AI in Medicine (CCAIM).
9 papers @ NeurIPS 2020.
7 papers accepted at ICML 2020.
2 papers @ ICLR 2020.
4 papers @ AISTATS 2020.
5 papers accepted at NeurIPS 2019.
https://www.vanderschaar-lab.com/publications/
Lectures
Medicine stands apart from other areas where machine learning can be applied. While we have seen advances in other fields with lots of data, it is not the volume of data that makes medicine so hard; it is the challenge of extracting actionable information from the complexity of the data. It is these challenges that make medicine the most exciting area for anyone who is really interested in the frontiers of machine learning – giving us real-world problems where the solutions are societally important and potentially impact us all. Think COVID-19!
In this talk I will show how machine learning is transforming medicine and how medicine is driving new advances in machine learning, including new methodologies in automated machine learning, interpretable and explainable machine learning, dynamic forecasting, and causal inference.
Biography
Lectures
In this second lecture, we’ll transfer prior knowledge not only to learn model weights faster, but also to find the optimal model architecture for a new task. This falls under automated machine learning (AutoML), which can involve neural architecture search or finding optimal machine learning pipelines. Such model (re)design is especially necessary when new tasks differ substantially from previous ones. While most AutoML methods simply start from scratch every time they are given a new task, we’ll look specifically at ways to transfer prior knowledge to speed up the search using prior experience.
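The warm-starting idea described above can be sketched very simply: rather than sampling configurations at random, evaluate first those that performed best on previously seen tasks. This is a hypothetical minimal sketch (the function name, config encoding, and scoring scheme are all illustrative assumptions), not any specific AutoML system.

```python
def warm_start_order(candidates, history):
    """Order candidate configurations by mean score across prior tasks.

    candidates: list of hashable configuration identifiers
    history: dict mapping configuration -> list of scores on previous tasks
    Configurations never seen before are tried last (prior score -inf).
    """
    def prior_score(cfg):
        scores = history.get(cfg, [])
        return sum(scores) / len(scores) if scores else float("-inf")
    return sorted(candidates, key=prior_score, reverse=True)

# Scores observed on two previous tasks:
history = {
    "lr=0.01,depth=3": [0.81, 0.79],
    "lr=0.1,depth=5": [0.92, 0.88],
}
candidates = ["lr=0.01,depth=3", "lr=0.1,depth=5", "lr=1.0,depth=1"]
print(warm_start_order(candidates, history))
# ['lr=0.1,depth=5', 'lr=0.01,depth=3', 'lr=1.0,depth=1']
```

On a genuinely novel task this prior ranking can mislead, which is why more sophisticated methods weight the prior by task similarity rather than applying it uniformly.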
Each lecturer will hold three or four lessons on a specific topic.