Explainable Artificial Intelligence

Explainable Artificial Intelligence (XAI) refers to a family of techniques for interpreting opaque machine learning models and their predictions. This is important for improving the reliability of, and trust in, machine learning methods.

We suggest three different topics:

Topic 1: Popular XAI methods for image classification identify which parts of an image were important for a decision. In this project we will develop methods to analyze trends in these explanations across many classifications. The overall aim of these analyses is to draw conclusions about the population from which the training data was collected. A conclusion could, for example, be that "80% of the patients diagnosed with disease A are separated from the healthy population by having …".
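A minimal sketch of what such an aggregation could look like: per-image saliency heatmaps (e.g. from a Grad-CAM-style method) are averaged for one class, and for each pixel we record the fraction of samples whose normalized saliency exceeds a threshold. The toy heatmaps, the normalization, and the thresholding rule below are illustrative assumptions, not a fixed design for the project.

```python
import numpy as np

def aggregate_saliency(saliency_maps, labels, target_label, threshold=0.5):
    """Average the saliency maps of one class and report, per pixel, the
    fraction of samples whose normalized saliency exceeds `threshold`.

    saliency_maps: (n_samples, H, W) array of per-image explanation
    heatmaps; labels: per-sample class labels.
    """
    maps = np.asarray(saliency_maps, dtype=float)
    selected = maps[np.asarray(labels) == target_label]
    # Normalize each map to [0, 1] so samples are comparable.
    lo = selected.min(axis=(1, 2), keepdims=True)
    hi = selected.max(axis=(1, 2), keepdims=True)
    norm = (selected - lo) / np.where(hi - lo == 0, 1, hi - lo)
    mean_map = norm.mean(axis=0)
    support = (norm > threshold).mean(axis=0)  # per-pixel sample fraction
    return mean_map, support

# Toy example: four synthetic 2x2 "heatmaps", two per class.
maps = [[[1, 0], [0, 0]], [[0.9, 0.1], [0, 0]],
        [[0, 0], [0, 1]], [[0, 0], [0.2, 0.8]]]
labels = [0, 0, 1, 1]
mean_map, support = aggregate_saliency(maps, labels, target_label=0)
print(support[0, 0])  # prints 1.0: both class-0 maps exceed the threshold in the top-left pixel
```

A `support` value close to 1 at a region is the kind of evidence behind a population-level statement such as "80% of the patients with disease A share this feature".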

Topic 2: XAI methods are today mainly used to explain why a machine learning method made a specific decision, for example, "this patient was classified as sick because …". In this project we will instead explore methods that analyze more global properties of the machine learning model, in order to draw conclusions such as "exercising is associated with a reduced risk of developing diabetes". More specifically, we will analyze how the input variables change the output of the model, and how this varies across different parts of the feature/input space.
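One established way to study how an input variable changes the model output is a partial-dependence computation: sweep one feature over a grid while holding the other features at their observed values, and average the model's output. The sketch below uses a hand-written, purely illustrative "risk model" (its coefficients and feature names are assumptions for the example); in the project the same idea would be applied to a trained model.

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Average model output while sweeping one feature over `grid`,
    holding all other features at their observed values.
    `model` maps an (n, d) array to an (n,) array of outputs."""
    X = np.asarray(X, dtype=float)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # force the feature to a fixed value
        pd_values.append(model(X_mod).mean())
    return np.array(pd_values)

# Hypothetical risk model: predicted risk falls with feature 0
# ("exercise hours") and rises with feature 1 ("age").
def risk_model(X):
    return 1.0 / (1.0 + np.exp(0.8 * X[:, 0] - 0.02 * X[:, 1]))

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 10, 200), rng.uniform(20, 80, 200)])
grid = np.linspace(0, 10, 5)
pd_exercise = partial_dependence(risk_model, X, feature=0, grid=grid)
# A monotonically decreasing curve supports a statement like
# "more exercise is associated with lower predicted risk".
print(np.all(np.diff(pd_exercise) < 0))  # prints True
```

Keeping the per-sample curves instead of averaging them (individual conditional expectation curves) would show how this effect varies across different parts of the feature space.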

Topic 3: Popular XAI methods for image classification, such as Grad-CAM, can identify the parts of an input image that were important for a decision. These methods, however, do not explain what characterizes those parts. In this project we will explore this issue. We hypothesize that it is useful to establish a form of reference when constructing explanations: for example, to explain why a deep learning method classified an image as sick, it is natural to use healthy controls as a reference.
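A simple illustration of the reference idea: contrast a query image with a reference population of healthy controls by z-scoring each pixel against the reference mean and standard deviation, so that large deviations stand out. Pixel-wise z-scoring and the toy data are assumptions made for this sketch; the project would explore richer characterizations.

```python
import numpy as np

def contrast_with_reference(image, reference_images):
    """Contrast an image with a reference population (e.g. healthy
    controls): z-score each pixel against the reference mean and
    standard deviation. Large |z| marks deviation from the reference."""
    refs = np.asarray(reference_images, dtype=float)
    mu = refs.mean(axis=0)
    sigma = refs.std(axis=0)
    return (np.asarray(image, dtype=float) - mu) / np.where(sigma == 0, 1, sigma)

# Toy data: healthy references fluctuate around zero; the query image
# has one strongly deviating pixel, which the z-map should highlight.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(50, 4, 4))
query = np.zeros((4, 4))
query[2, 3] = 8.0  # an "abnormal" bright spot
z_map = contrast_with_reference(query, healthy)
print(np.unravel_index(np.argmax(np.abs(z_map)), z_map.shape))
```

On this toy data the largest |z| lands on the deviating pixel, which is exactly the "what is different from healthy" signal the explanation would be built on.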


Develop new XAI methods that improve on and extend current state-of-the-art methods.

Learning outcome

Explainable AI
AI / Machine learning in general


Hard-working and motivated. Interested in learning (the rest can be learned during the thesis work).


  • Michael Riegler
  • Hugo Hammer
  • Inga Strümke



Contact person