Invited Speakers and Panel


Dr. Alexander Gray
IBM

Logical Neural Networks

Recently there has been renewed interest in the long-standing goal of somehow unifying the capabilities of both statistical AI (learning and prediction) and symbolic AI (knowledge representation and reasoning). We introduce Logical Neural Networks, a new neuro-symbolic framework which identifies and leverages a 1-to-1 correspondence between an artificial neuron and a logic gate in a weighted form of real-valued logic. With a few key modifications of the standard modern neural network, we construct a model which performs the equivalent of logical inference rules such as modus ponens within the message-passing paradigm of neural networks, and utilizes a new form of loss, contradiction loss, which maximizes logical consistency in the face of imperfect and inconsistent knowledge. The result differs significantly from other neuro-symbolic ideas in that 1) the model is fully disentangled and understandable since every neuron has a meaning, 2) the model can perform both classical logical deduction and its real-valued generalization (which allows for the representation and propagation of uncertainty) exactly, as special cases, as opposed to approximately as in nearly all other approaches, and 3) the model is compositional and modular, allowing for fully reusable knowledge across tasks. The framework has already enabled state-of-the-art results in several problems, including question answering.
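To make the neuron-as-logic-gate correspondence concrete, below is a minimal sketch of a weighted real-valued conjunction and a modus ponens bound update in the spirit the abstract describes, assuming a Łukasiewicz-style weighted activation; the function names, the `beta` bias, and the contradiction-loss form are illustrative, not the paper's exact formulation.

```python
# Sketch of an LNN-style neuron: a weighted real-valued AND gate.
# Truth values live in [0, 1]; 1.0 = true, 0.0 = false.

def weighted_and(truths, weights, beta=1.0):
    """Weighted Lukasiewicz-style conjunction. Reduces to classical
    AND when inputs and weights are 0/1 (the 'exact special case')."""
    s = beta - sum(w * (1.0 - t) for w, t in zip(weights, truths))
    return max(0.0, min(1.0, s))  # clamp into [0, 1]

def modus_ponens(truth_a, truth_a_implies_b):
    """Lower bound on B propagated from A and A -> B
    (Lukasiewicz residuum: B >= A + (A -> B) - 1)."""
    return max(0.0, truth_a + truth_a_implies_b - 1.0)

def contradiction_loss(lower, upper):
    """Illustrative penalty when a proposition's lower truth bound
    exceeds its upper bound, i.e. the knowledge is inconsistent."""
    return max(0.0, lower - upper)

print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # 1.0: classical AND
print(weighted_and([1.0, 0.0], [1.0, 1.0]))  # 0.0: one conjunct false
print(modus_ponens(1.0, 1.0))                # 1.0: classical deduction
print(modus_ponens(0.9, 0.8))                # 0.7: uncertainty propagates
```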
Alexander Gray serves as VP of Foundations of AI at IBM, where he currently leads a global research program in Neuro-Symbolic AI. He received AB degrees in Applied Mathematics and Computer Science from UC Berkeley and a PhD in Computer Science ...
Prof. Alessandra Russo
Imperial College London

Symbolic Machine Learning and its role in Neuro-symbolic AI

Learning interpretable models from data is one of the main challenges of AI. Over the last two decades there has been growing interest in Symbolic Machine Learning, a field of Machine Learning that aims to develop algorithms and systems for learning models that explain data within the context of some given background knowledge. In contrast to statistical learning methods, models learned by Symbolic Machine Learning are interpretable: they can be translated into natural language and understood by humans. In the first part of this talk we present Learning from Answer Sets (LAS), a state-of-the-art Symbolic Machine Learning approach capable of learning different classes of models, including those which are non-monotonic, non-deterministic and/or preference-based. We show how the advanced features of the LAS framework have made it possible to solve a variety of real-world problems in a manner that is data efficient, scalable and robust to noise. LAS can be combined with statistical learning methods to realise neuro-symbolic solutions that perform both fast, “low-level” prediction from unstructured data, and “high-level” logical and interpretable learning. In the second part of this talk we will present two such neuro-symbolic solutions: one for solving image classification problems in the presence of distribution shifts, and one for discovering sub-goal structures for reinforcement learning agents.
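As a toy illustration of the Learning from Answer Sets setting, the sketch below searches a tiny hypothesis space for a rule that, together with background knowledge, covers the positive examples and none of the negative ones. This is a brute-force sketch, not the LAS/ILASP solver itself; the predicates (bird, penguin, flies) and the hypothesis encoding are illustrative.

```python
# Background knowledge: which individuals satisfy which predicates.
background = {"bird": {"tweety", "polly", "pingu"}, "penguin": {"pingu"}}
positives = {"tweety", "polly"}   # should satisfy flies/1
negatives = {"pingu"}             # should not satisfy flies/1

def covers(hypothesis, individual):
    """Evaluate a candidate rule 'flies(X) :- body' on one individual,
    with 'neg_body' read as negation as failure (not p(X))."""
    return (all(individual in background[p] for p in hypothesis["pos_body"])
            and all(individual not in background[p] for p in hypothesis["neg_body"]))

# Hypothesis space: rule bodies built from background predicates.
space = [
    {"pos_body": ["bird"], "neg_body": []},           # flies(X) :- bird(X).
    {"pos_body": ["bird"], "neg_body": ["penguin"]},  # flies(X) :- bird(X), not penguin(X).
    {"pos_body": ["penguin"], "neg_body": []},        # flies(X) :- penguin(X).
]

for h in space:
    if all(covers(h, e) for e in positives) and not any(covers(h, e) for e in negatives):
        print("learned:", h)  # the non-monotonic rule with 'not penguin(X)' is selected
        break
```

The example is chosen to show why non-monotonicity matters: the plain rule "flies(X) :- bird(X)" wrongly covers the penguin, so the learner must introduce an exception via negation as failure.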
Alessandra Russo is Professor of Applied Computational Logic at the Department of Computing, Imperial College London, where she leads the Structured and Probabilistic Intelligent Knowledge Engineering (SPIKE) group. She has strong expertise ...
Dr. Pasquale Minervini
University College London (UCL)

From Complex Query Answering to Neural Theorem Proving

Neural link predictors are immensely useful for identifying missing edges in large-scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries using logical conjunctions, disjunctions, and existential quantifiers, while accounting for missing edges. In this work, we propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor; we then analyse two solutions to the optimisation problem, including gradient-based and combinatorial search. The proposed approach produces more accurate results than state-of-the-art methods (black-box neural models trained on millions of generated queries) without the need for training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across different Knowledge Graphs containing factual information. Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. This work was presented at ICLR 2021, where it was awarded an Outstanding Paper Award. We will then discuss how this framework can be extended to develop end-to-end differentiable reasoning systems that can learn symbolic rules via back-propagation, use them for tasks that require deductive reasoning, and use the resulting proof paths to produce explanations for their users.
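The sketch below illustrates the combinatorial-search variant of this idea on the conjunctive query "?Y : exists X . r1(a, X) AND r2(X, Y)": atom truth values come from a link predictor, conjunction is the Gödel t-norm (min), and the existential variable is handled by beam search over intermediate entities. The random embeddings and the DistMult-style scorer are stand-ins for a pre-trained model, and the beam size is an illustrative parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
E, R, d = 50, 4, 16                     # entities, relations, embedding dim
ent = rng.normal(size=(E, d))           # stand-in for trained entity embeddings
rel = rng.normal(size=(R, d))           # stand-in for trained relation embeddings

def atom_score(h, r, t):
    """Truth value of one atom r(h, t) via a DistMult-style
    scorer squashed into (0, 1) with a sigmoid."""
    return 1.0 / (1.0 + np.exp(-np.sum(ent[h] * rel[r] * ent[t])))

def answer_query(a, r1, r2, beam=5):
    """Answer ?Y : exists X . r1(a, X) AND r2(X, Y) on an incomplete KG:
    min = conjunction (Godel t-norm), max over X = existential quantifier."""
    x_scores = np.array([atom_score(a, r1, x) for x in range(E)])
    best_x = np.argsort(-x_scores)[:beam]        # beam over intermediate entities
    y_scores = np.zeros(E)
    for x in best_x:
        for y in range(E):
            s = min(x_scores[x], atom_score(x, r2, y))
            y_scores[y] = max(y_scores[y], s)
    return np.argsort(-y_scores)[:10]            # top-10 candidate answers

print(answer_query(a=0, r1=1, r2=2))
```

Note that the intermediate `best_x` entities are exactly the "intermediate solutions" the abstract mentions: they can be read back as an explanation of why each answer Y was returned.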
Pasquale Minervini is a Senior Research Fellow at University College London (UCL). He received a PhD in Computer Science from University of Bari, Italy, with a thesis on relational learning. After his PhD, he worked as a postdoc researcher at the ...
Dr. Antoine Bosselut
École Polytechnique Fédérale de Lausanne (EPFL)

Symbolic Scaffolds for Neural Commonsense Representation and Reasoning

Situations described using natural language are richer than what humans explicitly communicate. For example, the sentence "She pumped her fist" connotes many potential auspicious causes. For machines to understand natural language, they must be able to make commonsense inferences that go beyond explicitly stated information. However, current NLP systems lack the ability to ground the situations they encounter to relevant world knowledge. Moreover, they struggle to reason over available facts to robustly generalize to future unseen events. In this talk, I will describe efforts at transforming modern language models into robust commonsense knowledge models by leveraging implicitly encoded knowledge representations. Then, I will discuss work in designing robust reasoning systems that use knowledge graphs as a structural scaffold for aggregating information across relevant commonsense inferences.
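A common recipe for building such commonsense knowledge models is to fine-tune a language model on knowledge-graph triples serialised as text, so it learns to generate inferences for unseen events. The sketch below shows that serialisation step under illustrative assumptions: the ATOMIC-style relations (xReact, xEffect) and the bracketed separator format are examples, and the fine-tuning loop itself (e.g. with an off-the-shelf causal LM) is elided.

```python
# Turn commonsense KG triples into training pairs for a knowledge model.
triples = [
    ("PersonX pumps their fist", "xReact", "excited"),
    ("PersonX pumps their fist", "xEffect", "celebrates good news"),
]

def to_training_example(head, relation, tail):
    """Serialise (head, relation, tail) into a seq2seq pair: the model
    learns to generate the tail (the inference) from head + relation."""
    return {"source": f"{head} [{relation}]", "target": tail}

for t in triples:
    print(to_training_example(*t))
# After fine-tuning, prompting with "PersonX pumps their fist [xReact]"
# should generate plausible inferences such as "happy" or "proud".
```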
Antoine Bosselut is an assistant professor in the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL). Previously, he was a postdoctoral scholar at Stanford University and a Young Investigator ...
Dr. Jaehun Lee
Samsung Research
Dr. Jaehun Lee is currently a researcher at Samsung Research, working on large-scale knowledge graph construction, reasoning, and their applications for mobile phones and consumer electronics. The applications include search and recommendations for virtual assistants, diagnosis and treatment recommendations for call-center operation, and data curation to unify heterogeneous data. In the past, he has worked on various research topics such as knowledge representation, ontology reasoning, and machine learning.
Dr. Vaishak Belle
University of Edinburgh
Dr Vaishak Belle is a Chancellor’s Fellow and faculty member at the School of Informatics, University of Edinburgh, an Alan Turing Institute Faculty Fellow, a Royal Society University Research Fellow, and a member of the RSE (Royal Society of Edinburgh) Young Academy of Scotland. At the University of Edinburgh, he directs a research lab on artificial intelligence, specialising in the unification of logic and machine learning, with a recent emphasis on explainability and ethics. He has given research seminars ...