Home

Over the past decade, AI has achieved several breakthroughs, including Google’s driverless cars, IBM’s Watson, deep learning for playing Go, and the robot scientist Eve, to name just a few. Clearly, AI is making rapid and significant progress. Intelligent machines are becoming more capable and more widespread. But AI’s fast progress has also sparked a substantial debate about what AI systems will be able to do, what they should be allowed to do, and what the implications are for humans. Scientists, entrepreneurs, and philosophers, as well as the media, have been participating in this debate. Almost on a weekly basis, magazines and newspapers such as Wired, New Scientist, Business Insider, and De Standaard cover not only the latest achievements of AI or of successful AI startups (acquired by Google, Facebook or Apple), but also the threats of AI articulated by Elon Musk, Stephen Hawking, Bill Gates and Nick Bostrom. There is a concern that intelligent robots will take our jobs, be used as lethal weapons, or simply take over the world.

The ongoing debate has motivated the AI and computer science communities to investigate what constitutes “safe AI” and whether it is possible to develop such “safe AI” systems that can be trusted. Quoting Dr. Guruduth Banavar, Vice President of IBM Research: “Artificial intelligence is not without its risks. But we believe the risks are manageable. And that the far greater risk would be to stifle or otherwise inhibit the development of a technology with the potential to greatly improve the quality of life around the world. [...] To reap the societal benefits of AI systems, we will first need to trust it. [...] Trust is built upon accountability. As such, the algorithms that underpin AI systems need to be as transparent, or at least interpretable, as possible.” In this project, we want to push this research agenda further: we claim that AI systems must also be mathematically verifiable.

This project proposal addresses the central question of trust in the context of (1) software verification and (2) machine learning. Software verification is a discipline within software engineering whose goal is to ensure that software fully satisfies its expected requirements. Machine learning, on the other hand, is concerned with systems that improve their performance with experience; it is central to most AI systems today. A key open question Q in the quest for safe AI is how to integrate these two disciplines and thereby provide strong guarantees about software that learns and adapts itself on the basis of past experience. The key goal of this project is to answer that question, which naturally decomposes into five subquestions:

  • Q1: What is the relationship between the representations and tools in machine learning and in verification?
  • Q2: How can one extend current software verification models and methods to accommodate machine learning?
  • Q3: How can one extend machine learning techniques so that they provide the guarantees offered by verification?
  • Q4: How can one ensure that the resulting technology addresses the ethical, privacy and societal concerns about artificial intelligence?
  • Q5: How can one use the resulting techniques to develop applications?

These are precisely the questions that we will address in this project. However, they are too broad to be studied directly in their full generality. Therefore, we will initially focus on studying them in the context of a particular class of representations: those that involve logic and probability. The reason is that such models are common in both artificial intelligence and verification. Indeed, within artificial intelligence, there is a strong tradition of integrating probability, logic and learning, cf. the fields of statistical relational artificial intelligence [1] and probabilistic logic programming [2]. The first two of these, that is, probabilistic models and logic, also form the basis for broad classes of software verification tools. A key advantage of logic and probability, in contrast to, say, deep neural networks, is that the resulting models are much easier for humans to understand and inspect. Therefore, they form an ideal starting point for an investigation of safe AI that should lead to verified AI systems. While such representations will form the focus of the project, we shall also consider how other techniques and representations, such as deep learning, can be accommodated in the frameworks to be developed.
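To make the notion of a representation combining logic and probability concrete, the following is a minimal, self-contained sketch of a toy probabilistic logic program under the distribution semantics: a few independent probabilistic facts plus deterministic rules, with the probability of a query computed by brute-force enumeration of possible worlds. The facts (burglary, earthquake), the rule for alarm, and the probabilities are purely illustrative assumptions and not part of the proposal; dedicated systems such as ProbLog [2] perform this kind of inference far more efficiently.

```python
from itertools import product

# Toy probabilistic logic program (illustrative assumptions only):
#   0.1::burglary.   0.2::earthquake.
#   alarm :- burglary.   alarm :- earthquake.

prob_facts = {"burglary": 0.1, "earthquake": 0.2}  # independent probabilistic facts

def derive(world):
    """Close a set of true facts under the deterministic rules for 'alarm'."""
    facts = set(world)
    if "burglary" in facts or "earthquake" in facts:
        facts.add("alarm")
    return facts

def query_probability(query):
    """P(query) = sum of the probabilities of all possible worlds entailing it."""
    names = list(prob_facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = {n for n, v in zip(names, values) if v}
        weight = 1.0
        for n, v in zip(names, values):
            weight *= prob_facts[n] if v else 1.0 - prob_facts[n]
        if query in derive(world):
            total += weight
    return total

print(query_probability("alarm"))  # 1 - 0.9 * 0.8 = 0.28
```

Because such a model is an explicit, human-readable set of facts and rules, both its structure and the exact probability it assigns to a query can be inspected and checked, which is precisely the property that makes these representations an attractive starting point for verified AI.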