
New project to make AI collaboration safer and more efficient
How can organisations collaborate using artificial intelligence (AI) without compromising privacy or sensitive data? A new project at Simula UiB, funded by the Research Council of Norway, aims to tackle this challenge by developing private and efficient distributed learning solutions.
“Our goal is to create systems that not only ensure strong privacy protections but also improve efficiency, making collaboration more practical and secure,” says Eirik Rosnes, Chief Research Scientist at Simula UiB.
He will lead the newly funded “Private and Efficient Distributed Learning (PeerL)” project. With 12 million NOK in funding from the Research Council of Norway, the project will run until early 2029.
The research will have broad applications across various sectors, from enabling hospitals to collaborate on improving diagnoses while keeping patient data private, to helping insurance companies work together on fraud prevention without compromising their competitive advantages.
Collaborative learning without sharing data
Traditional AI systems often require centralising data in one location for training. However, this approach raises significant privacy concerns and may not be feasible when dealing with sensitive information. Distributed learning allows organisations to collaboratively train AI models without directly sharing their underlying data.
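The idea can be illustrated with a minimal federated-averaging sketch. This is a generic distributed-learning scheme, not necessarily the protocol PeerL will study, and the one-parameter model, learning rate, and data below are illustrative assumptions: each party computes a model update on its own data, and a coordinator averages only the updates.

```python
# Minimal federated-averaging sketch: each party fits y ≈ w*x on its own
# data and shares only a weight update, never the raw points. The model,
# learning rate, and data are illustrative assumptions.

def local_update(w, data, lr=0.1):
    # One least-squares gradient step using local data only.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, parties):
    # The coordinator averages the locally computed updates.
    return sum(local_update(w, data) for data in parties) / len(parties)

# Two parties whose private data both follow y = 2x.
parties = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, parties)
print(round(w, 4))  # converges to 2.0
```

The coordinator never sees either party's data points, only the stream of averaged weights; real systems add further protections on top, since even model updates can leak information.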
“This is particularly valuable for sectors like finance and healthcare, where data privacy is critical,” says Rosnes. By leveraging their expertise in information and coding theory, the PeerL team will develop new methods to ensure these systems are robust enough for real-world use.
Measuring competitive privacy
Insights from insurance companies in Norway and Germany show that competitive concerns have hindered the adoption of collaborative technologies like distributed learning. Yet, collaboration is essential to aggregate enough data to train effective models for tasks like fraud detection.
The PeerL project addresses this challenge with a new concept called “competitive privacy.” This involves developing a mathematical framework to quantify the risk of leaking information that contains competitive insights.
“Competitive privacy goes beyond simple data anonymisation,” Rosnes explains. “Even anonymised, aggregated data from multiple collaborators could reveal unexpected insights that might be strategically valuable. This makes it crucial to have robust measures in place to assess and control risks.”
Efficient distributed learning
Another focus of the PeerL project is improving the efficiency of distributed learning systems. This includes addressing challenges like delays caused by slow or unresponsive network nodes and optimising how computers communicate with each other.
“We will, for example, add redundancy to tasks to ensure that delays from slower devices don’t affect the overall system performance,” says Rosnes.
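A toy sketch shows how redundancy counters stragglers. This illustrates generic task replication, not necessarily the coding-theoretic scheme PeerL will develop: each data chunk is placed on two workers, so the aggregate can still be computed when a worker is slow or unresponsive.

```python
# Toy straggler-mitigation sketch (an assumption about the general
# technique, not the PeerL design): 2x replication of data chunks.

def assign_with_redundancy(n_chunks, n_workers):
    # Worker w stores chunks w and w+1 (mod n): each chunk exists twice.
    return {w: [w % n_chunks, (w + 1) % n_chunks] for w in range(n_workers)}

def aggregate(chunks, responding, assignment):
    # Sum each chunk exactly once, taken from any responder that holds it.
    recovered = {}
    for w in responding:
        for i in assignment[w]:
            recovered[i] = sum(chunks[i])
    if len(recovered) < len(chunks):
        raise RuntimeError("too many stragglers to recover the result")
    return sum(recovered.values())

chunks = [[1, 2], [3, 4], [5, 6], [7, 8]]
assignment = assign_with_redundancy(len(chunks), n_workers=4)
# Worker 2 never responds; its chunks also live on workers 1 and 3.
total = aggregate(chunks, responding=[0, 1, 3], assignment=assignment)
print(total)  # 36 — the full sum despite one slow worker
```

The price of this robustness is that every chunk is stored and processed twice; coded-computation schemes from information theory aim for the same fault tolerance with less overhead.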
Distributed learning systems can also be more efficient than traditional AI systems because they process data locally: reducing large-scale data transfers cuts both network traffic and the energy cost of moving data to a central server.
Project team and partners
The PeerL project is part of Simula UiB's broader research efforts in information theory. It will include researchers from Simula UiB and international collaborators from the New Jersey Institute of Technology (USA) and Imperial College London (UK), along with an industry partner from the Norwegian fintech sector.
The project will also hire a postdoctoral researcher and a PhD student, further strengthening Simula UiB’s capacity to contribute to this critical field.
Simula UiB is a research centre owned by Simula and the University of Bergen (UiB). The centre has two core research areas: cryptography and information theory. It was founded in 2015 to strengthen security expertise in Norway through research and education.