Lectures


Lecture 1: Deep Graph Networks: Fundamentals   

Graphs are an effective representation for complex information, providing a straightforward means to bridge numerical data and symbolic relationships. The lecture will provide a gently paced introduction to the lively field of deep learning for graphs, covering foundational aspects and consolidated deep learning models for graph-structured data, including spectral and spatial graph convolutional networks, contextual graph processing, and attention-based methods.
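
To make the spatial graph-convolution idea concrete, here is a minimal sketch of a single GCN-style layer in plain numpy, written from the standard symmetric-normalization formulation rather than from the lecture materials; the toy graph, feature sizes, and random weights are invented placeholders.

```python
import numpy as np

# Toy undirected graph on 4 nodes (a made-up example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def gcn_layer(A, H, W):
    """One spatial graph-convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # input node features (4 nodes, 3 features)
W = rng.normal(size=(3, 2))   # learnable weights (3 -> 2 features)
print(gcn_layer(A, H, W))     # new node embeddings, shape (4, 2)
```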

Lecture 2: Deep Graph Networks: Generative approaches and research directions

The lecture will build on the content of the first seminar on deep graph network fundamentals to introduce some recent generative approaches for dealing with graph-structured data. We will also devote some time to singling out open research challenges, applications, and interesting directions for future research in the field.



Lecture 1: Data trends and artificial intelligence in medicinal chemistry and drug design
In medicinal chemistry, biological activities of small molecules take center stage. Unlike in other areas of chemistry, machine learning already has a long history in compound property prediction. Notably, the increasing popularity of deep learning (DL) approaches is also influencing computational medicinal chemistry. However, compound data characteristics and the features of widely used molecular representations do not necessarily play into the strengths of DL. Accordingly, in applications such as activity prediction, there is often no detectable advantage in employing increasingly complex learning strategies. On the other hand, DL opens the door to new applications that have been difficult to consider thus far. This lecture presents exemplary computational applications in medicinal chemistry and drug design where methodological complexity does not necessarily scale with success, as well as others that have essentially been impossible to address until recently. Importantly, to further increase the acceptance of machine learning in the practice of medicinal chemistry, it is essential to alleviate the black-box character of predictions.
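
As a deliberately simple illustration of the kind of compound activity prediction discussed here (not the lecturer's methodology), the sketch below fits a random forest on Morgan fingerprints; the SMILES strings and activity labels are invented placeholders, not real assay data.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Invented SMILES and activity labels; placeholders, not real assay data.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
active = [0, 0, 1, 0]

def fingerprint(smi, n_bits=2048):
    """Morgan (ECFP4-like) bit fingerprint for one molecule."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

X = np.stack([fingerprint(s) for s in smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, active)
print(model.predict_proba(X[:1]))  # predicted activity probability
```
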
Lecture 2: Rationalizing molecular promiscuity through data analysis and machine learning
Multi-target activity of small molecules, also referred to as promiscuity, leads to desired as well as undesired effects in drug discovery. From a basic research perspective, the ability of small molecules to engage in well-defined interactions with different targets is of considerable interest for rationalizing different facets of molecular recognition. In addition to experimental techniques, compound promiscuity can also be investigated through systematic computational analysis of compound activity data and machine learning, which is the main topic of this lecture. From both a fundamental and a practical point of view, a key question in promiscuity analysis is whether structural features exist that set compounds with multi-target activity apart from others with corresponding single-target activity. This question can be directly addressed through machine learning. Visualization of the structural features that determine predictions provides an attractive basis for follow-up analysis in drug discovery and design.
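
One simple way to probe which structural features drive such predictions, sketched below under the assumption of a fingerprint-based classifier like the one above (here with hypothetical promiscuity labels): rank fingerprint bits by the model's feature importances and inspect the corresponding substructures.

```python
import numpy as np

# Assumes `model` and `X` from the previous sketch, with labels now meaning
# 1 = promiscuous (multi-target) vs 0 = single-target (hypothetical labels).
importances = model.feature_importances_
top_bits = np.argsort(importances)[::-1][:10]
for bit in top_bits:
    print(f"fingerprint bit {bit}: importance {importances[bit]:.4f}")
# Each highly ranked bit corresponds to a substructure that can be mapped
# back onto molecules (e.g., via RDKit bit info) for visual inspection.
```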


Lecture 1: Deep Learning in Science 1/2   
Lecture 2: Deep Learning in Science 2/2

Abstract: TBA



Tutorial 1: Probability and Information   

Abstract: TBA

Tutorial 2: Random Functions and Stochastic Processes

Abstract: TBA

Tutorial 3: Game and Optimization Theories

Abstract: TBA

Lecture 1: Value of Information in Neural Networks   


Lecture 1: Is hiding fair? (1/2)   

Violations of privacy as well as unfairness and discrimination have been highlighted as two of the biggest ethical challenges for AI. At the same time, computer scientists have proposed a large number of methods for enhancing (data) privacy and fairness. Can these strategies support one another, or is there a tradeoff between privacy and fairness? And how can interdisciplinary perspectives inspire, enhance or correct computational ones? In this talk, I will present several answers that have been given to these questions and discuss the assumptions that lead to “support” or “tradeoff” results. Specific attention will be given to the use of obfuscation for enhancing fairness, and to the larger question of whether, when or how information hiding is fair (or not).
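
To make the obfuscation-versus-fairness question tangible, here is a toy numeric sketch (entirely synthetic, not from the lecture): it measures the demographic parity gap of a thresholded score before and after the score is obfuscated with Laplace noise, the mechanism used in differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
score = rng.normal(loc=group * 0.5, size=n)   # scores correlated with group

def demographic_parity_gap(decisions, group):
    """|P(decision=1 | group=0) - P(decision=1 | group=1)|."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

plain = (score > 0.25).astype(float)
noisy = (score + rng.laplace(scale=1.0, size=n) > 0.25).astype(float)

print("gap without obfuscation:", demographic_parity_gap(plain, group))
print("gap with Laplace noise: ", demographic_parity_gap(noisy, group))
# Noise shrinks the gap here, but only by randomizing decisions;
# this is exactly the kind of tradeoff the lecture examines.
```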

Lecture 2: Is hiding fair? (2/2)

Violations of privacy as well as unfairness and discrimination have been highlighted as two of the biggest ethical challenges for AI. At the same time, computer scientists have proposed a large number of methods for enhancing (data) privacy and fairness. Can these strategies support one another, or is there a tradeoff between privacy and fairness? And how can interdisciplinary perspectives inspire, enhance or correct computational ones? In this talk, I will present several answers that have been given to these questions and discuss the assumptions that lead to “support” or “tradeoff” results. Specific attention will be given to the use of obfuscation for enhancing fairness, and to the larger question of whether, when or how information hiding is fair (or not).



Lecture 1: Introduction to variational quantum algorithms: optimisation, machine learning and universality of the variational model (1/2)   

Modern quantum processors enable the execution of short quantum circuits. These quantum circuits can be iteratively tuned — trained — to minimise an objective function and solve problem instances. This is known as variational quantum computation: local measurements are repeated and modified to determine the expected value of an effective Hamiltonian. While solving practical problems still appears out of reach, many questions of theoretical interest surround the variational model. I will provide a tutorial introduction to this model, as well as some recent limitations found in collaboration, including reachability deficits in QAOA (increasing problem density — the ratio of constraints to variables — induces under-parameterisation at fixed circuit depth), parameter saturation in QAOA (layer-wise training plateaus) and the existence of abrupt trainability transitions (a critical number of layers exists below which certain objective functions admit no training at all). I will also explain some more forward-looking findings, including the concentration of parameters in QAOA (optimised circuit parameters are largely independent of the problem instance) and my proof that the variational model is, in theory, a universal model of quantum computation.

Lesson 1

1.0 Survey of modern results
1.1 List of experimental demonstrations of variational quantum computation and machine learning
1.2 List of theoretical milestones

2.0 Introduction to variational quantum computation
2.1 Variational state-space, penalty function cardinality and Clifford invariance
2.2 QAOA
2.2.1 Exact solution (Grover QAOA)
2.2.2 Parameter concentrations
2.2.3 MAX 3-SAT and reachability deficits
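
To fix ideas, here is a minimal statevector sketch of QAOA for MaxCut, written from the standard textbook definitions rather than taken from the lecture materials; the triangle graph and the crude grid search are placeholder choices (a real run would use an optimiser over more layers).

```python
import numpy as np
from itertools import product

# Toy MaxCut instance: triangle graph (a made-up example).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

# Diagonal of the MaxCut cost over all 2^n bitstrings.
bits = np.array(list(product([0, 1], repeat=n)))
cost = np.array([sum(b[i] != b[j] for i, j in edges) for b in bits], float)

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def rx_all(state, beta):
    """Apply the mixer exp(-i beta X) to every qubit of the statevector."""
    rx = np.cos(beta) * I - 1j * np.sin(beta) * X
    for q in range(n):
        op = 1
        for k in range(n):
            op = np.kron(op, rx if k == q else I)
        state = op @ state
    return state

def qaoa_expectation(gammas, betas):
    """<cost> after p layers of phase separation + mixing."""
    state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cost) * state          # e^{-i gamma C}
        state = rx_all(state, beta)                         # e^{-i beta sum X}
    return float(np.real(np.sum(cost * np.abs(state) ** 2)))

grid = np.linspace(0, np.pi, 30)
best = max(((g, b, qaoa_expectation([g], [b])) for g in grid for b in grid),
           key=lambda t: t[2])
print("best <cut> with p=1:", best)   # optimum cut of the triangle is 2
```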

 

Lecture 2: Introduction to variational quantum algorithms: optimisation, machine learning and universality of the variational model (2/2)

Modern quantum processors enable the execution of short quantum circuits. These quantum circuits can be iteratively tuned — trained — to minimise an objective function and solve problem instances. This is known as variational quantum computation: local measurements are repeated and modified to determine the expected value of an effective Hamiltonian. While solving practical problems still appears out of reach, many questions of theoretical interest surround the variational model. I will provide a tutorial introduction to this model, as well as some recent limitations found in collaboration, including reachability deficits in QAOA (increasing problem density — the ratio of constraints to variables — induces under-parameterisation at fixed circuit depth), parameter saturation in QAOA (layer-wise training plateaus) and the existence of abrupt trainability transitions (a critical number of layers exists below which certain objective functions admit no training at all). I will also explain some more forward-looking findings, including the concentration of parameters in QAOA (optimised circuit parameters are largely independent of the problem instance) and my proof that the variational model is, in theory, a universal model of quantum computation.

Lesson 2

3.0 Variational quantum computation revisited
3.1 Telescoping construction and stability lemma
3.2 Solving linear systems by variational algorithms
3.3 Universality of the variational model

4.0 Open problems
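
As a flavour of item 3.2, here is a toy classical emulation of a variational linear-systems solver. It assumes the generic overlap-based cost C(theta) = 1 - |<b|A|x(theta)>|^2 / <x|A^dag A|x>, which is a standard choice in the literature and not necessarily the lecture's construction; the 2x2 matrix and single-parameter ansatz are invented for illustration.

```python
import numpy as np

A = np.array([[1.0, 0.2], [0.2, 0.5]])     # made-up 2x2 system matrix
b = np.array([1.0, 1.0]) / np.sqrt(2.0)    # normalised target state |b>

def ansatz(theta):
    """Single-qubit RY-style ansatz state |x(theta)>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(theta):
    """1 - |<b|A|x>|^2 / <x|A^dag A|x>: zero iff A|x> is parallel to |b>."""
    Ax = A @ ansatz(theta)
    return 1.0 - abs(b @ Ax) ** 2 / (Ax @ Ax)

thetas = np.linspace(0, 2 * np.pi, 400)    # stand-in for a real optimiser
best = min(thetas, key=cost)
x = ansatz(best)
print("cost:", cost(best))
print("A x normalised (should be close to b):", (A @ x) / np.linalg.norm(A @ x))
```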



Lecture 1: Introduction to Machine Learning: Part 1

Abstract: TBA

Lecture 2: Introduction to Machine Learning: Part 2

Abstract: TBA



Lecture 1: Causal Inference I

Abstract: TBA

Lecture 2: Causal Inference II

Abstract: TBA



Lecture 1: Is AI Good or Evil?

Abstract: TBA

Lecture 2: Semantic Scholar, NLP, and the Fight against COVID-19

Abstract: TBA



Lecture 1: Deep Learning to See: Towards New Foundations of Computer Vision (1/2)

Deep Learning to See: Towards New Foundations of Computer Vision

(with Alessandro Betti and Stefano Melacci)

Deep learning has revolutionized computer vision and visual perception. Amongst other factors, the great representational power of convolutional neural networks and the elegance and efficiency of Backpropagation have played a crucial role. Their popularity is widely recognized in the scientific community and very well deserved. However, as yet, most significant results are still based on the supervised learning communication protocol, which is truly artificial, in fact setting up a battlefield for computers, and far from natural. In these lectures we argue that, when relying on supervised learning, we have been working on a problem that is remarkably different from the one posed by Nature. We claim that motion invariance is in fact the only process in charge of conquering visual skills. Based on the underlying representational capabilities of deep architectures and learning algorithms still related to Backpropagation, in these lectures we show that massive image supervision can in fact be replaced with the natural communication protocol arising from living in a visual environment, just like animals do. This leads to formulating learning without the accumulation of labelled visual databases, simply by allowing visual agents to live in their own visual environments. We show that learning arises from motion invariance principles that make it possible to gain object identity as well as object affordance. We introduce a vision field theory for expressing those motion invariance principles and highlight the indissoluble pair of visual features and their conjugated velocities, thus extending the classic brightness invariance principle of optical flow estimation. The emergence of visual features in the natural framework of visual environments is given a systematic foundation by establishing information-based laws that naturally enable deep learning processes. The vision field theory proposed herein might offer interesting support to visual perception and neuroscience, while opening the door to massive applications in computer vision, thus removing the need for labelled visual databases.
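
The classic brightness invariance principle mentioned above states that image brightness is conserved along the motion, which linearises to I_x u + I_y v + I_t = 0. The sketch below solves this constraint in a patch by least squares (a Lucas-Kanade-style toy from textbook definitions, not the authors' vision field theory); the synthetic frames and patch size are arbitrary.

```python
import numpy as np

def flow_from_brightness_constancy(I0, I1):
    """Estimate one (u, v) flow vector for a patch from the brightness
    constancy constraint I_x*u + I_y*v + I_t = 0, via least squares."""
    Ix = np.gradient(I0, axis=1).ravel()   # horizontal brightness gradient
    Iy = np.gradient(I0, axis=0).ravel()   # vertical brightness gradient
    It = (I1 - I0).ravel()                 # temporal brightness change
    A = np.stack([Ix, Iy], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return uv

# Synthetic test: a smooth pattern shifted one pixel to the right.
y, x = np.mgrid[0:32, 0:32]
frame0 = np.sin(0.3 * x) + np.cos(0.2 * y)
frame1 = np.roll(frame0, 1, axis=1)        # ground-truth flow ~ (1, 0)
# Crop the wrapped first column before estimating.
print(flow_from_brightness_constancy(frame0[:, 1:], frame1[:, 1:]))
# prints approximately [1, 0]
```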

Lecture 2: Deep Learning to See: Towards New Foundations of Computer Vision (2/2)

Deep Learning to See: Towards New Foundations of Computer Vision

(with Alessandro Betti and Stefano Melacci)

Deep learning has revolutionized computer vision and visual perception. Amongst other factors, the great representational power of convolutional neural networks and the elegance and efficiency of Backpropagation have played a crucial role. Their popularity is widely recognized in the scientific community and very well deserved. However, as yet, most significant results are still based on the supervised learning communication protocol, which is truly artificial, in fact setting up a battlefield for computers, and far from natural. In these lectures we argue that, when relying on supervised learning, we have been working on a problem that is remarkably different from the one posed by Nature. We claim that motion invariance is in fact the only process in charge of conquering visual skills. Based on the underlying representational capabilities of deep architectures and learning algorithms still related to Backpropagation, in these lectures we show that massive image supervision can in fact be replaced with the natural communication protocol arising from living in a visual environment, just like animals do. This leads to formulating learning without the accumulation of labelled visual databases, simply by allowing visual agents to live in their own visual environments. We show that learning arises from motion invariance principles that make it possible to gain object identity as well as object affordance. We introduce a vision field theory for expressing those motion invariance principles and highlight the indissoluble pair of visual features and their conjugated velocities, thus extending the classic brightness invariance principle of optical flow estimation. The emergence of visual features in the natural framework of visual environments is given a systematic foundation by establishing information-based laws that naturally enable deep learning processes. The vision field theory proposed herein might offer interesting support to visual perception and neuroscience, while opening the door to massive applications in computer vision, thus removing the need for labelled visual databases.



Lecture 1: Knowledge Processing, Logic, and the Future of AI (I)

Nowadays, when people speak about AI, they usually mean machine learning. Machine learning, in particular deep learning, is a powerful method for generating a type of knowledge that could be classified as self-learned knowledge. We humans, on the other hand, make heavy use of two types of knowledge: (i) self-learned knowledge and (ii) transferable knowledge learned or generated by others. If you read this and/or attend the talk, it is mainly because of this second type of knowledge. In these lectures, I will argue that the combination of both types of knowledge is needed for more powerful and fair automated decision making or decision support, and thus for the next level of AI. I will discuss various requirements for reasoning formalisms towards this purpose. After discussing logical languages for knowledge representation and reasoning, I will briefly introduce the VADALOG system developed at Oxford and give an outlook on my recent project RAISON DATA, funded by the Royal Society.
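
To give a flavour of the rule-based, transferable knowledge the lecture contrasts with learned knowledge, here is a toy forward-chaining evaluation of a Datalog-style transitive rule in Python. The facts are invented, and this is an illustration of the reasoning pattern only, not the VADALOG system or its syntax.

```python
# Toy forward chaining for the Datalog-style rules:
#   reachable(X, Y) :- edge(X, Y).
#   reachable(X, Z) :- reachable(X, Y), edge(Y, Z).
edge = {("a", "b"), ("b", "c"), ("c", "d")}   # invented facts

def reachable(edges):
    facts = set(edges)                 # rule 1: every edge is reachable
    while True:                        # rule 2: chain until fixpoint
        derived = {(x, z) for (x, y1) in facts
                          for (y2, z) in edges if y1 == y2}
        new = derived - facts
        if not new:                    # no new facts derivable: stop
            return facts
        facts |= new

print(sorted(reachable(edge)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```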

Lecture 2: Knowledge Processing, Logic, and the Future of AI (II)

Nowadays, when people speak about AI, they usually mean machine learning. Machine learning, in particular deep learning, is a powerful method for generating a type of knowledge that could be classified as self-learned knowledge. We humans, on the other hand, make heavy use of two types of knowledge: (i) self-learned knowledge and (ii) transferable knowledge learned or generated by others. If you read this and/or attend the talk, it is mainly because of this second type of knowledge. In these lectures, I will argue that the combination of both types of knowledge is needed for more powerful and fair automated decision making or decision support, and thus for the next level of AI. I will discuss various requirements for reasoning formalisms towards this purpose. After discussing logical languages for knowledge representation and reasoning, I will briefly introduce the VADALOG system developed at Oxford and give an outlook on my recent project RAISON DATA, funded by the Royal Society.



Lecture 1: The Decision-Making Side of Machine Learning: Computational, Inferential and Economic Perspectives


Lecture 1: Safety and Robustness for Deep Learning Part 1

Abstract: TBA

Lecture 2: Safety and Robustness for Deep Learning Part 2

Abstract: TBA



Lecture 1: Networks of Networks
Many complex systems, natural or man-made, are represented not by single networks but by sets of interdependent networks. Such networks of networks (NoN) include the internet, airline alliances, biological networks, and smart city networks. There is no doubt that NoN will be the next frontier in network science. In my lecture I will address some recent developments (robustness, diversity) and discuss some challenging problems in NoN.
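
A minimal sketch in the spirit of NoN robustness studies: mutual percolation on two interdependent random graphs, where a node survives only while it sits in the giant component of both layers. The graph sizes, mean degree, and failure fraction are arbitrary choices, and this simplified cascade is an illustration, not a specific result from the lecture.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n = 500
A = nx.gnp_random_graph(n, 4 / n, seed=1)   # layer A (e.g., power grid)
B = nx.gnp_random_graph(n, 4 / n, seed=2)   # layer B (e.g., communications);
                                            # node i in A depends on node i in B

def giant_component(G, alive):
    """Largest connected component of G restricted to surviving nodes."""
    sub = G.subgraph(alive)
    if sub.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(sub), key=len)

# Initial random failure of 30% of the nodes (arbitrary fraction).
alive = set(rng.choice(n, size=int(0.7 * n), replace=False).tolist())
while True:
    alive_new = giant_component(A, alive) & giant_component(B, alive)
    if alive_new == alive:              # fixpoint: cascade has stopped
        break
    alive = alive_new                   # failures propagate between layers

print(f"surviving mutually connected fraction: {len(alive) / n:.2f}")
```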


Lecture 1: Understanding Risk and Social Behavior Improves Decision Making for Autonomous Vehicles

Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis into the control loop of an autonomous car. I will describe how Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect their interactions with others by quantifying the degree of selfishness or altruism, can be integrated into decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.
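
Social Value Orientation is commonly formalised as an angle phi that trades off an agent's own reward against others': u = cos(phi) * r_self + sin(phi) * r_other. The sketch below scores hypothetical candidate maneuvers under different SVO angles; the actions and reward numbers are made up for illustration and are not the speaker's controller.

```python
import numpy as np

def svo_utility(r_self, r_other, phi):
    """SVO-weighted utility: u = cos(phi)*r_self + sin(phi)*r_other.
    phi = 0 is purely egoistic; phi = pi/4 is prosocial."""
    return np.cos(phi) * r_self + np.sin(phi) * r_other

# Hypothetical candidate actions for a merging car:
#   action -> (reward to self, reward to the other driver)
actions = {"cut_in": (1.0, -0.8), "yield": (0.2, 0.9), "wait": (0.0, 0.0)}

for name, phi in [("egoistic", 0.0), ("prosocial", np.pi / 4)]:
    best = max(actions, key=lambda a: svo_utility(*actions[a], phi))
    print(f"{name} (phi={phi:.2f}): chooses '{best}'")
# An egoistic agent cuts in; a prosocial one yields. Estimating other
# drivers' SVO online enables the kind of adaptation the talk describes.
```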