We develop methods and tools to rigorously engineer zero-bug systems.
The focus of our group is the design of trustworthy AI systems. We apply state-of-the-art formal methods to ensure that AI systems are safe, secure, transparent, accountable, robust, and unbiased. We apply our new methods and tools to challenging application domains, e.g., safety assurance in autonomous vehicles. Furthermore, we study how to combine formal methods with AI to improve learning performance, for example by using formal methods for reward shaping and by using fuzzing to create artificial training data. We publish at conferences like the International Joint Conference on Artificial Intelligence (IJCAI), the AAAI Conference on Artificial Intelligence (AAAI), and Computer Aided Verification (CAV).
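The reward-shaping idea mentioned above can be sketched as follows. This is a minimal illustration, not the group's actual tooling: the temporal task ("observe A, then observe B"), the monitor automaton, and the bonus scheme are all hypothetical. A small finite-state monitor tracks progress toward satisfying the task, and the agent receives a bonus each time the monitor advances, instead of a single sparse reward at the end of the episode.

```python
# Hypothetical example: automaton-based reward shaping for the temporal
# task "observe A, then observe B".
# Monitor states: 0 = nothing seen, 1 = A seen, 2 = A then B seen (accepting).
MONITOR = {
    (0, "A"): 1,
    (1, "B"): 2,
}

def shaped_rewards(observations):
    """Return per-step shaped rewards and whether the task was completed."""
    state, rewards = 0, []
    for obs in observations:
        nxt = MONITOR.get((state, obs), state)        # stay put on irrelevant input
        rewards.append(1.0 if nxt != state else 0.0)  # bonus on monitor progress
        state = nxt
    return rewards, state == 2

rewards, done = shaped_rewards(["C", "A", "C", "B"])
# rewards == [0.0, 1.0, 0.0, 1.0], done == True
```

The shaped signal gives the learner feedback at every subgoal, which is the usual motivation for deriving rewards from a formal specification rather than from task completion alone.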
We research formal methods for system design, covering both functional and security properties. For example, we develop methods to ensure that machine learning systems fulfill certain formal properties, or that systems are not vulnerable to power side-channel or fault-injection attacks. We use mathematical methods and create software tools to automatically prove correctness. One of our specialties is reactive synthesis: methods and tools that automatically generate correct systems from their specifications. We publish at conferences like Computer Aided Verification (CAV) and Formal Methods for Computer Aided Design (FMCAD).
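As a concrete illustration of proving a formal property of a machine learning system, the sketch below uses interval bound propagation, a standard sound-approximation technique (named here as an example, not as the group's specific method). It pushes a box of inputs through a tiny ReLU network and computes guaranteed output bounds; if the lower bound is non-negative, the property "output >= 0" is certified for every input in the box. The network weights are made up for the example.

```python
# Hypothetical example: certify "output >= 0" for all inputs in a box,
# using interval bound propagation through a tiny 2-2-1 ReLU network.

def interval_affine(lo, hi, weights, bias):
    """Soundly propagate an input box through y = W x + b, coordinate-wise."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        # A positive weight maps the lower input bound to the lower output
        # bound; a negative weight swaps the roles.
        out_lo.append(b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row)))
        out_hi.append(b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row)))
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Made-up weights for a 2-2-1 network.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]
W2, b2 = [[1.0, 1.0]], [0.0]

# Input box: both inputs range over [-0.1, 0.1].
lo, hi = interval_affine([-0.1, -0.1], [0.1, 0.1], W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
certified = lo[0] >= 0.0  # True: output is non-negative on the whole box
```

The bounds are conservative but sound, which is what makes such techniques suitable for formal guarantees about learned components.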