- Jyotirmoy Deshmukh: Synthesizing safe AI-based controllers for cyber-physical systems
- Stephen Muggleton: Meta-Interpretive Learning of Logic Programs
- Necmiye Ozay: Towards correct-by-construction controller synthesis for self-driving cars
- Martin Vechev: Probabilistic and Interpretable Models of Code
In this talk we will look at the problem of reasoning about safety for cyber-physical systems using semi-automated methods to learn safety invariants. We will leverage this basic technique to reason about AI-based controllers, for example, neural-network-based controllers trained using reinforcement learning. We present a technique that can potentially scale to controllers with thousands of neurons while still proving safety of the synthesized controller. Finally, we chart a roadmap for synthesizing correct-by-design controllers that use AI-based algorithms.
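The invariant-based safety argument sketched above can be illustrated with a toy example. The system, controller, and quadratic invariant below are our own hypothetical stand-ins, not the speaker's actual method or benchmarks, and the sampling-based check is only a sketch; practical approaches would discharge the inductiveness condition with an SMT solver or reachability analysis, and the controller would be a trained neural network rather than linear feedback.

```python
# Toy sketch: check that a candidate safety invariant V(x, v) <= C is
# inductive (one-step closed) for a double integrator under a simple
# feedback controller, by sampling states inside the level set.
# All names and constants here are illustrative assumptions.

import random

C = 1.0  # level set V(x, v) <= C claimed to be invariant

def V(x, v):
    """Candidate invariant: a quadratic (Lyapunov-like) function."""
    return x * x + x * v + v * v

def controller(x, v):
    """Stand-in for a learned controller: linear state feedback."""
    return -x - 2.0 * v

def step(x, v, dt=0.05):
    """One Euler step of double-integrator dynamics x'' = u."""
    u = controller(x, v)
    return x + dt * v, v + dt * u

def check_inductive(samples=50_000):
    """Sample states with V <= C; report a counterexample if any
    one-step successor leaves the level set, else None."""
    random.seed(0)
    found = 0
    while found < samples:
        x, v = random.uniform(-1.2, 1.2), random.uniform(-1.2, 1.2)
        if V(x, v) > C:
            continue  # rejection sampling: keep only in-set states
        found += 1
        if V(*step(x, v)) > C:
            return (x, v)  # counterexample to inductiveness
    return None

cex = check_inductive()
print("no counterexample found" if cex is None else f"counterexample: {cex}")
```

Here an inductive invariant contained in the safe set certifies safety of every trajectory starting inside it; the sampling check is sound only as a falsifier, which is why real tools replace it with exhaustive symbolic reasoning.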
Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will overview theoretical and implementational advances in this new area, including the ability to learn Turing computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.
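The core MIL mechanism of fetching a higher-order meta-rule and saving the meta-substitution can be illustrated with a deliberately tiny sketch. The example below is our own hypothetical illustration in Python (real MIL systems such as Metagol are Prolog meta-interpreters with a much richer search): it searches for predicate-symbol bindings of the "chain" meta-rule P(x,y) :- Q(x,z), R(z,y) that cover the positive examples and no negatives.

```python
# Toy illustration of the MIL idea (hypothetical, not Metagol's API):
# instantiate the higher-order "chain" meta-rule
#     P(x, y) :- Q(x, z), R(z, y)
# by searching over meta-substitutions (bindings of Q and R to
# background predicate symbols) that cover all positive examples
# of P and none of the negatives.

from itertools import product

# Background knowledge: ground facts grouped by predicate symbol.
background = {
    "parent": {("ann", "bob"), ("bob", "carl"), ("carl", "dora")},
}

# All constants mentioned anywhere in the background facts.
CONSTANTS = {a for facts in background.values() for fact in facts for a in fact}

def chain_holds(q, r, x, y):
    """Does P(x, y) follow via the chain meta-rule with Q=q, R=r?"""
    return any((x, z) in background[q] and (z, y) in background[r]
               for z in CONSTANTS)

def learn_chain(target, positives, negatives):
    """Return the saved meta-substitution (P, Q, R) for the chain
    meta-rule if one covers all positives and no negatives."""
    preds = list(background)
    for q, r in product(preds, repeat=2):
        if (all(chain_holds(q, r, x, y) for x, y in positives)
                and not any(chain_holds(q, r, x, y) for x, y in negatives)):
            return (target, q, r)
    return None

# Learns grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
clause = learn_chain("grandparent",
                     positives={("ann", "carl"), ("bob", "dora")},
                     negatives={("ann", "bob")})
print(clause)  # → ('grandparent', 'parent', 'parent')
```

The saved tuple is the meta-substitution; projecting it back through the meta-rule yields the learned first-order clause. What this sketch omits is exactly what makes MIL powerful: chaining several meta-rules in one proof, inventing new predicate symbols for sub-definitions, and learning recursive clauses.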
Can we design a provably-safe self-driving car? As the automobile evolves toward full autonomy, many driver convenience and safety automation systems are being introduced into production vehicles. Assuring the seamless and safe integration of each new automation function with existing control functions is a major challenge. The challenge is exacerbated by the range of control and decision-making functionality required for a self-driving car. This talk presents recent results towards addressing this problem through the lens of formal methods and correct-by-construction controller synthesis. I will discuss three uses of controller synthesis: for composition, for supervision, and for falsification. Composition refers to the task of designing individual controllers for interacting subsystems and providing guarantees on their integrated operation. Supervision refers to the task of providing a safety envelope for a legacy controller or human drivers. Falsification refers to the task of algorithmically finding violations of specifications by given designs. I will use adaptive cruise control and lane keeping systems to demonstrate some of these ideas both in simulation and on a real vehicle in Mcity, the autonomous vehicle testing facility at the University of Michigan.
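The falsification task mentioned above can be sketched with a toy adaptive-cruise-control example. Everything below is a hypothetical illustration of ours, not the speaker's tools or models: a deliberately naive cruise controller, a crude Euler simulation, and plain random search in place of the robustness-guided optimization that real falsifiers use. The falsifier searches over lead-car braking profiles for one that drives the safety margin (minimum headway minus a required gap) negative.

```python
# Toy falsification sketch (illustrative assumptions throughout):
# search over lead-vehicle deceleration profiles for one that makes a
# naive cruise controller violate the spec "gap stays above D_SAFE".

import random

D_SAFE = 5.0   # required minimum gap (m) -- assumed spec constant
DT = 0.1       # simulation time step (s)

def simulate(decel_profile, gap0=30.0, v_ego0=20.0, v_lead0=20.0):
    """Simulate an ego car with a naive proportional cruise controller
    following a lead car braking per decel_profile; return the
    minimum gap observed over the horizon."""
    gap, v_ego, v_lead = gap0, v_ego0, v_lead0
    min_gap = gap
    for a_lead in decel_profile:
        a_ego = 0.5 * (gap - 15.0)              # track a 15 m gap
        a_ego = max(-5.0, min(2.0, a_ego))      # actuator limits
        v_ego = max(0.0, v_ego + a_ego * DT)
        v_lead = max(0.0, v_lead + a_lead * DT)
        gap += (v_lead - v_ego) * DT
        min_gap = min(min_gap, gap)
    return min_gap

def falsify(trials=2000, horizon=100):
    """Random-search falsifier: return a braking profile whose
    robustness (min_gap - D_SAFE) is negative, or None."""
    random.seed(0)
    for _ in range(trials):
        profile = [random.uniform(-8.0, 0.0) for _ in range(horizon)]
        if simulate(profile) - D_SAFE < 0.0:
            return profile   # counterexample: spec violated
    return None

cex = falsify()
print("violation found" if cex is not None else "no violation found")
```

A found profile is a concrete counterexample that can be replayed against the design; in the synthesis workflow described in the talk, such counterexamples complement correct-by-construction guarantees by stress-testing the parts of the system that were not synthesized.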