The Institute for AI Safety and Security provides research and development services in the field of AI-related methods, processes, algorithms and execution environments. Its focus is on ensuring safety and security for AI-based solutions in demanding application classes. “Safety and security by design” is a central aspect here, as it directly supports the future requirements of safety-critical applications that are based on AI or integrate AI-based components.
The Execution Environments and Innovative Methods department implements and investigates AI methods with respect to the platforms on which AI is trained and deployed, and conducts research on innovative AI architectures. Among other things, we investigate different hardware and software environments as well as quantum AI.
As part of our research focus on hardware and software environments, you will develop robust and reliable AI systems that are trained and deployed on a variety of platforms, such as HPC systems, embedded environments and containerised deployments.
In particular, you will ensure that AI can be executed safely and securely on these systems in line with the “safety and security by design” principle, and you will conduct research on the transferability of methods and models between these systems. You will also examine exotic hardware architectures for their suitability for machine learning and AI.
Thematically, you will address cross-sectoral research questions in DLR's focus areas, including aeronautics, space, energy and transport. You will therefore work closely with experts from other areas of the Institute for AI Safety and Security and with our external partners from science and industry.
With your work, you will make an indispensable contribution to evaluating the performance of novel AI methods and identifying potential applications. This enables the responsible introduction of these AI-based solutions into demanding application classes.
Suitably qualified candidates may be offered the opportunity to work on a PhD thesis.