Validation Intelligence for Autonomous Software Systems

Autonomous software systems, such as self-driving cars, autonomous ships and collaborative robots, are characterised by their ability to plan, schedule and execute complex tasks with limited human intervention. As we become increasingly reliant on these systems, it is crucial that we have means of validating and verifying their robustness, reliability, safety and security.

The mission of the Validation Intelligence for Autonomous Software Systems (VIAS) department is to automate the validation of intelligent autonomous systems using trustworthy artificial intelligence (AI).

Department head

Arnaud Gotlieb

Chief Research Scientist/Research Professor, Head of Department

"The VIAS team tackles fundamental research problems in software validation of intelligent systems, with a particular focus on industrial applications. In all our research, industrial impact is paramount."

- Dr. Arnaud Gotlieb, head of the VIAS department

Focus areas

Autonomous systems are finding applications in areas ranging from transport to healthcare and industrial automation. As our reliance on them grows, so too does the importance of developing software testing techniques that check their expected reliability and safety properties and ensure appropriate cyber-surveillance.

Traditional validation methods are prone to human error as well as being time-consuming and expensive, and they have limitations when it comes to testing autonomous systems – for example, they do not allow for continuous testing to keep pace with software evolution. Also, as systems become more complex and adaptable and demonstrate emergent behaviours, manually testing every system state becomes an impossible task. Automated validation of autonomous software systems has the potential to be faster, more efficient, and more thorough than conventional methods.

Developing trustworthy AI for autonomous systems
Trustworthy AI involves a set of key requirements (transparency, robustness, human oversight, etc.) that must be considered before autonomous systems are developed. Our research in this area ranges from tool-supported methods for testing autonomous systems to proposing fully compliant trustworthy-by-design methodologies.

Testing Intelligent Transport Systems
Our researchers improve software testing processes for transport systems with robust, reliable and transparent AI methods. We generate test suites that can be optimised, prioritised and scheduled on multiple test agents dedicated to transport systems, using paradigms such as constraint optimisation, constraint-based scheduling, and machine learning.
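As an illustration of the test-suite optimisation and prioritisation problems mentioned above, the sketch below shows a classic greedy prioritisation strategy: repeatedly pick the test case that covers the most not-yet-covered requirements. This is a minimal, hypothetical example (the test names and coverage data are invented), not the department's actual tooling, which combines constraint optimisation, constraint-based scheduling, and machine learning.

```python
def prioritise(tests):
    """Order test cases so that each next test adds the most new coverage.

    `tests` maps a test name to the set of requirements it covers.
    This greedy "additional coverage" heuristic is a standard baseline
    for test-case prioritisation; it is illustrative only.
    """
    covered = set()
    order = []
    remaining = dict(tests)
    while remaining:
        # Choose the test adding the most not-yet-covered requirements;
        # sorting first makes tie-breaking deterministic.
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Invented example suite: three tests covering four requirements.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r1"},
}
print(prioritise(suite))  # t2 runs first: it covers the most requirements
```

In practice, such orderings are further constrained by execution time budgets and by which test agent can run which test, which is where constraint-based scheduling comes in.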

Learning and reasoning for data-intensive systems
Society relies on data-intensive software systems to manage data for a range of purposes, including administrative processes, traffic surveillance, healthcare and scientific research. These systems must be resilient and reliable, and to ensure this, learning and reasoning methods are vital. We develop scalable symbolic AI techniques, using methods and tools that acquire and deduce new knowledge from data interactions, which we verify with human-in-the-loop approaches.

Key competencies and methods:

  • Trustworthy AI
  • Automated software testing
  • Constraint optimisation
  • Reinforcement learning
  • Self-supervised ML applications

Key partners

Since 2021, VIAS and the INRIA Diverse research team, located in Rennes, France, have run the first Simula-INRIA associate research team, RESIST_EA, which develops resilience science for software systems.

People

Mohit Kumar Ahuja

Affiliated PhD student

Dogan Altan

Postdoctoral Fellow

Nassim Belmecheri

Postdoctoral Fellow

Pierre Bernabé

Affiliated PhD student

Jørn Eirik Betten

PhD student

Sunanda Bose

Postdoctoral Fellow

Arnaud Gotlieb

Chief Research Scientist/Research Professor, Head of Department

Dennis Groß

Postdoctoral Fellow

Dusica Marijan

Senior Research Scientist

Quentin Mazouni

PhD student

Preben Monteiro Ness

PhD student

Aizaz Sharif

Affiliated PhD student

Helge Spieker

Research Scientist

Bakht Zaman

Postdoctoral Fellow

Publications

H. Spieker

Trustworthy AI: Scientific, Industrial, and Societal Impact

OsloMet

C. Braye, J. Clech, A. Gotlieb, N. Lazaar and P. Malléa

Towards Trustworthy-AI-by-Design Methodology for Intelligent Radiology Systems

'Santé et IA' seminar at PFIA 23, Strasbourg, France, July 6

S. Bose and D. Marijan

Secure Traversable Event logging for Responsible Identification of Vertically Partitioned Health Data

IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)

D. Altan, D. Marijan and T. Kholodna

SafeWay: Improving the safety of autonomous waypoint detection in maritime using transformer and interpolation

Maritime Transport Research

A. Sharif and D. Marijan

ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events

IEEE Transactions on Software Engineering (TSE)

N. Belmecheri

Qualitative Constraint Acquisition

University of Southern Denmark

P. M. Ness, D. Marijan and S. Bose

Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models

ACM International Conference on Information and Knowledge Management

L. Lesoil, H. Spieker, A. Gotlieb, M. Acher, P. Temple, A. Blouin and J. Jézéquel

Learning Input-aware Performance Models of Configurable Systems: An Empirical Evaluation

The Journal of Systems & Software

P. Bernabé, A. Gotlieb, B. Legeard, D. Marijan, F. O. Sem-Jacobsen and H. Spieker

Detecting Intentional AIS Shutdown in Open Sea Maritime Surveillance Using Self-Supervised Deep Learning

IEEE Transactions on Intelligent Transportation Systems

A. Gotlieb, M. Mossige and H. Spieker

Constraint-guided Test Execution Scheduling: An Experience Report at ABB Robotics

SAFECOMP 2023: 42nd International Conference on Computer Safety, Reliability and Security, 19-22 September 2023, Toulouse, France