Important Dates

  • Paper Submission: May 1, 2018
  • Author Notification: May 15, 2018
  • Camera-ready Version: May 25, 2018
  • Workshop: July 18, 2018


Invited Talks

  • Jyotirmoy Deshmukh: Synthesizing safe AI-based controllers for cyber-physical systems
  • In this talk we will look at the problem of reasoning about safety for cyber-physical systems using semi-automated methods to learn safety invariants. We will leverage this basic technique to reason about AI-based controllers, for example, neural-network-based controllers trained using reinforcement learning. We show a technique that can potentially scale to controllers with thousands of neurons while still being able to prove safety of the synthesized controller. Finally, we chart a roadmap for synthesizing correct-by-design controllers that use AI-based algorithms.

  • Stephen Muggleton: Meta-Interpretive Learning of Logic Programs
  • Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will give an overview of theoretical and implementational advances in this new area, including the ability to learn Turing-computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.

  • Necmiye Ozay: Towards correct-by-construction controller synthesis for self-driving cars
  • Can we design a provably-safe self-driving car? As the automobile evolves toward full autonomy, many driver convenience and safety automation systems are being introduced into production vehicles. Assuring the seamless and safe integration of each new automation function with existing control functions is a major challenge. The challenge is exacerbated by the range of control and decision-making functions required for a self-driving car. This talk presents recent results towards addressing this problem through the lens of formal methods and correct-by-construction controller synthesis. I will discuss three uses of controller synthesis: for composition, for supervision, and for falsification. Composition refers to the task of designing individual controllers for interacting subsystems and providing guarantees on their integrated operation. Supervision refers to the task of providing a safety envelope for a legacy controller or human drivers. Falsification refers to the task of algorithmically finding violations of specifications by given designs. I will use adaptive cruise control and lane keeping systems to demonstrate some of these ideas, both in simulation and on a real vehicle in Mcity, the autonomous vehicle testing facility at the University of Michigan.

  • Martin Vechev: Probabilistic and Interpretable Models of Code
  • I will present a method for building probabilistic models of code which combines techniques from synthesis, sampling and decision tree learning. The method scales to large datasets and is more precise and faster to train than state-of-the-art neural networks. The resulting models can be used for various tasks including statistical code synthesis, repair, debugging and translation. In the talk I will discuss what theoretical guarantees can be stated about these models, present experimental results showing the models are applicable to various programming languages (e.g., Java, JavaScript, Python), and discuss several open problems that span the intersection of program analysis, synthesis and machine learning.
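The fetch-and-save mechanism described in the MIL abstract above can be sketched in miniature. The toy below is a Python analogue, not Muggleton's Prolog implementation: all names (background, chain_holds, learn_chain, the parent facts) are illustrative. It hard-codes a single "chain" metarule, P(x,y) :- Q(x,z), R(z,y), and searches over meta-substitutions for Q and R that cover the positive examples, recovering a grandparent definition from parent facts.

```python
# Minimal sketch of meta-interpretive learning with one metarule, assuming a
# toy setting where background predicates are given as sets of ground pairs.
from itertools import product

# Background knowledge: ground facts for known predicates.
background = {
    "parent": {("ann", "bob"), ("bob", "carl"), ("ann", "dee"), ("dee", "eve")},
}

def chain_holds(q, r, x, y):
    """Check P(x, y) under the chain metarule P(x,y) :- Q(x,z), R(z,y)."""
    mids = {b for (_, b) in background[q]}  # candidate intermediate values z
    return any((x, z) in background[q] and (z, y) in background[r] for z in mids)

def learn_chain(pos_examples):
    """Search for meta-substitutions (Q, R) covering all positive examples."""
    preds = list(background)
    for q, r in product(preds, repeat=2):
        if all(chain_holds(q, r, x, y) for (x, y) in pos_examples):
            return q, r  # the saved meta-substitution forms the learned rule
    return None

# Learn grandparent(x, y) from two positive examples.
print(learn_chain([("ann", "carl"), ("ann", "eve")]))  # → ('parent', 'parent')
```

In full MIL the proof search and the choice of meta-substitutions are interleaved inside the meta-interpreter, several metarules are available at once, and invented predicates let learned clauses call each other; this sketch only illustrates the core idea of instantiating a higher-order rule by search.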