The focus of our group is the design of trustworthy AI systems. We apply state-of-the-art formal methods to ensure that AI systems are safe, secure, transparent, accountable, robust, and unbiased. We apply our new methods and tools to challenging application domains, e.g., safety assurance for autonomous vehicles. Furthermore, we study how to combine formal methods with AI to improve learning performance, for instance by using formal methods for reward shaping and by using fuzzing to generate artificial training data. We publish at conferences such as the International Joint Conference on Artificial Intelligence (IJCAI), the AAAI Conference on Artificial Intelligence (AAAI), and Computer Aided Verification (CAV).