Centre for AI Security and Safety
AI systems, models and agents are becoming integral parts of national infrastructure. Centre for AI Security and Safety conducts research on the challenges this brings, and evaluates the security and safety of these systems.
Although security and safety address different types of risk, they are connected. A security breach can lead to a safety incident. Poor safety measures can create security vulnerabilities. That's why the centre brings these perspectives together in one integrated research environment.
AI Security: Addressing intentional harm caused by adversarial threats.
Led by Professor Leon Moonen, hosted at Simula Research Laboratory.
AI Safety: Addressing unintentional harm caused by system misbehaviour.
Led by Professor Michael Riegler, hosted at SimulaMet.
Researchers in this centre build practical tools and frameworks to test the security and safety of AI models and systems, including simulated cyber attacks (red teaming).
The research combines a deep technical approach with empirical evidence to better understand AI risk and effective risk-mitigation.
The objective is to expand the centre’s capacity over time in collaboration with partners from both the public and private sectors, and act as an independent advisor to the authorities on issues that are important to national security.
SimpleAudit – Simula's open AI safety auditing framework that allows you to test your AI systems and find their weak spots.
Recognised as a Digital Public Good, SimpleAudit moves structured AI governance from a theoretical concept to something that works in practice. The tool is a result from research conducted in the Centre for AI Security and Safety.

