Authors: A. Storås, I. Strümke, M. Riegler and P. Halvorsen
Title: Explainable Artificial Intelligence in Medicine
Affiliation: Machine Learning
Project(s): Department of Holistic Systems
Status: Accepted
Publication Type: Talks, invited
Year of Publication: 2022
Location of Talk: Nordic AI Meet 2022
Keywords: Explainable artificial intelligence, Machine learning, Medicine
Abstract

Machine learning (ML) has shown an outstanding ability to solve a wide variety of tasks, such as image recognition and natural language processing, many of which are highly relevant to the medical field. Complex ML models, including convolutional neural networks (CNNs), are used to analyse high-dimensional data such as images and videos from medical examinations. As model complexity increases, so does the demand for techniques that improve human understanding of the models. If medical doctors do not understand how the models work, they may fail to notice when the models are wrong, or refuse to use them altogether. This can hamper the implementation of ML systems in the clinic and negatively affect patients. To promote successful integration of ML systems in the clinic, it is important to provide explanations that establish trust in the models among healthcare personnel. Explainable artificial intelligence (XAI) aims to provide explanations of ML models and their predictions, and several techniques have already been developed. However, existing XAI methods often fail to meet the requirements of medical doctors, probably because doctors are not sufficiently involved in the development of the methods. We develop ML models that solve tasks in various medical domains. The resulting models are explained using a selection of existing XAI methods, and the explanations are evaluated by medical experts. Their feedback is used to develop improved XAI methods. We have investigated established techniques for making ML systems more transparent in the fields of gastroenterology, assisted reproductive technology, organ transplantation and cardiology. Experiences from these projects will be used to develop new explanation techniques for clinical practice, in close collaboration with medical experts.

Citation Key: 42696