Explainable Artificial Intelligence (XAI) methods are techniques for understanding and interpreting machine learning (ML) models, and they have recently received a lot of attention. In this project we will also investigate whether XAI methods can be used to detect data outliers.
XAI methods help us understand why an ML model makes specific decisions, which in turn is important for trusting the model. Out-of-domain data refers to data with different properties than the data used to train an ML model. Predictions made by ML models on out-of-domain data can rarely be trusted, and such data represents a substantial challenge in real-life applications. For example, a model trained on data from one hospital may be unreliable when applied to data from another hospital. In practice, we often do not know whether a given data point is out of domain. In this project, we will explore whether machine learning explanations can be used to identify out-of-domain data points.
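The core idea can be prototyped with standard tools. Below is a minimal sketch, assuming scikit-learn: a classifier is trained on in-domain data, simple occlusion-style local explanations are computed for each point (how much the predicted probability changes when a feature is replaced by its training-set mean), and an Isolation Forest fitted on the in-domain explanation vectors flags points whose explanations look unusual. The occlusion explanation and the simulated distribution shift are illustrative assumptions, not the project's prescribed method; explanations from methods such as SHAP or LIME could be substituted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split


def explanation_vectors(model, X, baseline):
    """Occlusion-style local explanations: for each feature, record how much
    the predicted probability changes when that feature is replaced by its
    training-set mean (the baseline). Illustrative choice, not a fixed method."""
    p = model.predict_proba(X)[:, 1]
    expl = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, j] = baseline[j]
        expl[:, j] = p - model.predict_proba(X_masked)[:, 1]
    return expl


# Train a classifier on (synthetic) in-domain data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fit an outlier detector on the explanation vectors of in-domain data.
baseline = X_train.mean(axis=0)
detector = IsolationForest(random_state=0).fit(
    explanation_vectors(clf, X_train, baseline)
)

# Simulate out-of-domain data with a simple distribution shift (assumption).
X_ood = X_test + 3.0

# Higher decision_function scores mean "more typical"; out-of-domain points
# should score lower because their explanations look unusual.
print("in-domain mean score:",
      detector.decision_function(explanation_vectors(clf, X_test, baseline)).mean())
print("out-of-domain mean score:",
      detector.decision_function(explanation_vectors(clf, X_ood, baseline)).mean())
```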
Goal
Develop machine-learning-based techniques to detect out-of-domain data.
Learning outcome
- Machine learning
- Explainable Artificial Intelligence
- Outlier detection
- Real-world applications
Qualifications
- Python programming
- Knowledge of machine learning is an advantage
Supervisors
- Hugo Hammer
- Michael Riegler
References
- Molnar, C. (2020). Interpretable Machine Learning. Lulu.com.