Abstract

"With great power comes great responsibility" is a cliché that applies perfectly to artificial intelligence (AI). As technology progresses faster than ever towards software with ever greater capability, adaptability and autonomy, the risks are becoming increasingly apparent. Humans still lack the tools to systematically obtain strong guarantees, about safety, privacy and more, from the intelligent software that serves them. Paradoxically, though perhaps not surprisingly, better human control over AI could come from intelligent software itself, and more specifically from automated verification software. Verification is a scientific discipline within computer science that devises automated methods to improve the dependability of computer-based systems. It does so by exploiting mathematically grounded techniques such as model checking, theorem proving and model-based testing. Although automated verification has made remarkable progress over the past decades, its application is still largely limited to traditional software that does not use AI or machine learning technology. This project aims to remove these limitations by investigating the logical and probabilistic underpinnings of verification, machine learning and AI. The expected results are new theories, formalisms and algorithms that will constitute the foundations for a new generation of AI-ready verification methods.


Project number: 30992574

Dates: 01/01/2018 - 31/12/2021

Budget: around EUR 2,600,000