Explainable LLMs in Generating Structured Argument for Assurance Case in Software Certification

Help shape the future of software certification by using AI to automatically generate or verify structured, explainable arguments, transforming complex regulations into auditable evidence. This project pioneers a new, transparent approach to automated compliance detection for critical systems.

Imagine a world where software is so complex that certifying its safety or security requires painstaking manual audits of countless documents. This project aims to revolutionize that process by applying cutting-edge AI. You'll work at the intersection of AI, software engineering, and legal compliance to build a system that not only automates regulatory checks but also explains its reasoning. If you have a keen interest in Explainable AI (XAI) and want to apply it to a real-world problem, this project is for you. The core of the project involves three main components:

  1. Data generation: Use LLMs to synthesize structured arguments and have human experts evaluate the synthetic data (a sketch of the target representation follows this list).
  2. Modelling: Multi-hop NLI with transformers or LLM fine-tuning.
  3. Explainability: Analyse the models' underlying reasoning process over structured arguments.
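
To make component 1 concrete, the sketch below shows one possible representation of a GSN-style structured argument and a prompt a data-generation script could send to an LLM. The node types, field names, prompt wording, and the example clause are illustrative assumptions, not a fixed schema.

```python
# Minimal sketch of a GSN-style structured argument and a data-generation
# prompt. Node types, field names, and the clause text are assumptions made
# for illustration only.
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class ArgumentNode:
    node_id: str
    node_type: str                      # e.g. "Goal", "Strategy", "Evidence"
    statement: str
    children: List["ArgumentNode"] = field(default_factory=list)

    def to_dict(self) -> dict:
        return {"id": self.node_id, "type": self.node_type,
                "statement": self.statement,
                "children": [c.to_dict() for c in self.children]}

# Prompt template a data-generation script might send to an LLM.
PROMPT_TEMPLATE = (
    "You are a safety engineer. For the regulatory clause below, produce an "
    "assurance case as JSON with fields id, type (Goal/Strategy/Evidence), "
    "statement and children.\n\nClause: {clause}"
)

# Hand-written example of the kind of output the LLM should produce,
# which human experts would then review for soundness.
example = ArgumentNode(
    "G1", "Goal",
    "The update mechanism cannot be used to install unsigned code.",
    children=[ArgumentNode(
        "S1", "Strategy", "Argue over each stage of the update pipeline.",
        children=[ArgumentNode(
            "E1", "Evidence", "Code-signing verification test report TR-042.")])])

print(PROMPT_TEMPLATE.format(
    clause="All software updates shall be cryptographically signed."))
print(json.dumps(example.to_dict(), indent=2))
```

Instances in this JSON form could be handed to human experts for evaluation and later flattened into premise-hypothesis pairs for the NLI model.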

Master's thesis opportunities (choose one):

  • Synthetic data evaluation / human-centered AI for structured argumentation data
  • LLM fine-tuning and representation learning for structured argumentation data
  • Explainable AI for structured argumentation data (a minimal sketch follows this list)
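
For the explainability topic, one possible first experiment is to inspect which input tokens an NLI model attends to when it judges a single argument step. The checkpoint, the example sentences, and the use of last-layer attention from the start-of-sequence position are all simplifying assumptions; attention is only a rough proxy for the model's reasoning and would be compared against stronger attribution methods in the thesis.

```python
# Minimal sketch of attention-based inspection of one NLI decision, assuming
# the publicly available "roberta-large-mnli" checkpoint. The sentences are
# made up for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

premise = "Static analysis found no buffer overflows in the message parser."
hypothesis = "The message parser is free of memory-safety defects."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

verdict = model.config.id2label[int(outputs.logits.argmax(dim=-1))]

# Average attention paid by the start-of-sequence token in the last layer,
# used here as a crude per-token importance score.
last_layer = outputs.attentions[-1]            # (batch, heads, seq_len, seq_len)
importance = last_layer[0, :, 0, :].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

print("verdict:", verdict)
for token, score in sorted(zip(tokens, importance.tolist()),
                           key=lambda pair: -pair[1])[:10]:
    print(f"{score:.3f}  {token}")
```

The same loop could later be swapped for gradient-based attribution or probing of intermediate representations, which is where the research question lies.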

Abstract

Goal: To develop a novel, AI-driven framework for transparent software certification. Methodology: Use Large Language Models (LLMs) to generate structured assurance cases and then apply a multi-hop Natural Language Inference (NLI) model to perform explainable compliance detection. Outcome: The system will not only automate regulatory checks but also provide a traceable, auditable chain of reasoning, addressing the critical need for transparency in AI-assisted safety and security certifications.
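
To make the methodology above concrete, here is a minimal sketch of explainable multi-hop compliance checking with an off-the-shelf NLI model. The checkpoint name ("roberta-large-mnli"), the two-step claim chain, and the acceptance rule (every hop must be entailed) are simplifying assumptions; the actual system would operate over the full argument structure.

```python
# Minimal sketch of multi-hop NLI over an argument chain, assuming the
# publicly available "roberta-large-mnli" checkpoint. The claim texts and
# the linear two-hop chain are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_scores(premise: str, hypothesis: str) -> dict:
    """Label probabilities (CONTRADICTION/NEUTRAL/ENTAILMENT) for one pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}

# Evidence -> intermediate claim -> top-level compliance claim.
chain = [
    ("Unit tests cover every safety-critical function of the braking module.",
     "The braking module has been adequately tested."),
    ("The braking module has been adequately tested.",
     "The braking module meets the standard's verification objective."),
]

trace = []  # auditable record of every reasoning hop
for premise, hypothesis in chain:
    scores = entailment_scores(premise, hypothesis)
    verdict = max(scores, key=scores.get)
    trace.append({"premise": premise, "hypothesis": hypothesis,
                  "verdict": verdict, "scores": scores})

# The compliance claim is accepted only if every hop is entailed.
accepted = all(step["verdict"] == "ENTAILMENT" for step in trace)
for step in trace:
    print(step)
print("Top-level claim accepted:", accepted)
```

The trace list is the kind of artefact an auditor could inspect: every intermediate claim, the evidence it rests on, and the model's confidence in each inference step.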

Learning outcome

  • By the end of this project, you will have a deep understanding of LLMs, explainable AI, and how to apply these powerful tools to solve a critical problem in the software industry. You will also develop a system that provides both a result and a clear, traceable explanation for that result.

Qualifications

Need-to-have qualifications:

  • Completed a course on Natural Language Processing (NLP) or have prior experience in NLP projects.
  • Comfortable with coding in Python and using libraries like PyTorch and Transformers.
  • An interest in the field of Explainable AI (XAI).

Nice-to-have qualifications:

  • Experience in the field of software engineering, particularly in a domain where safety or security is critical.
  • Familiarity with the concepts of large language models (LLMs).

Supervisors

  • Fariz Ikhwantri
  • Dusica Marijan

Collaboration partners

  • Tecnalia
  • DNV
  • NTNU
  • Hitachi (Previously Thales AG)
  • EZU
  • TTTech AG
  • UBITECH
  • MindChip
  • Catalink
  • Schneider Electric
