
Simula establishes new Centre for AI Security and Safety
As artificial intelligence (AI) becomes an integral part of digital infrastructure, risks related to misuse and unintended consequences are growing. In response, Simula is establishing an independent research centre dedicated to safe and secure AI.
The new Centre for AI Security and Safety is being established in February 2026, with an official opening scheduled for April.
Research will be organised around two fundamental pillars:
- AI Security: Addressing intentional harm and adversarial threats.
- AI Safety: Addressing unintentional harm, trust, and system robustness.
AI Security will be hosted at Simula Research Laboratory (SRL), led by Professor Leon Moonen. AI Safety will be hosted at SimulaMet, led by Professor Michael Riegler.
“Ambitious goals for the use of artificial intelligence must be balanced with an evidence-based focus on what can go wrong. Security is not a hindrance. It enables us to accelerate while maintaining full control. The centre has been established to research the most critical risks that already exist, as well as those emerging as adoption grows,” says Professor Michael Riegler.
The objective is to expand the centre’s capacity over time in collaboration with partners from both the public and private sectors.
“We will develop the knowledge, expertise, methods, and tools that contribute to the secure implementation and regulation of artificial intelligence. At the same time, we have a clear ambition to act as an independent advisor to the authorities on issues we believe are important to national security,” says Lillian Røstad, CEO of Simula.
The momentum for this centre, which has been in the works for nearly two years, was reinforced by some important milestones for Simula in 2025:
- The award of a national AI research centre for sustainable, risk-averse and ethical AI (sure-ai.no).
- Funding for a research network on trustworthy AI and security in partnership with NTNU and OsloMet (simula.no).
- The launch of an independent open-source tool for safety analysis of AI systems (ntb.no).
“The AI safety auditing framework Simpleaudit is a prime example of the kind of results that are part of, and will emerge from, this new centre,” says Røstad.
Simpleaudit was developed by Michael Riegler and Sushant Gautam, and on February 10th it was granted status as a digital public good.