Explaining the Predictions of a Deep Neural Network Used to Analyze Videos of Human Semen
Deep learning-based algorithms have shown state-of-the-art performance across numerous fields, including medicine, finance, and robotics. However, these algorithms are commonly regarded as black boxes, meaning the relationship between input and output is not well understood. Several works have proposed methods for explaining the output of deep neural networks used to analyze images, providing insight into the inner workings of these complex networks. However, these methods are mostly limited to static images and are rarely used to explain the predictions of models that analyze videos. As the field of video analysis using deep learning is on the rise, new methods that explain the predictions of these models will be important in the future. This project will explore applying existing image-based explainability algorithms to video analysis.
Goal
The goal of this project is to use existing image-based explainability algorithms to explain the predictions of a deep neural network used for video analysis. We have prepared a dataset of microscopic videos of human semen, which will be used to conduct the experiments.
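One image-based explainability method that transfers directly to video is occlusion sensitivity (Zeiler & Fergus, 2014): slide a masking patch over the input and record how much the model's score drops, then repeat the procedure per frame. The sketch below is illustrative only, not the project's prescribed method; it assumes the trained model is exposed as a function mapping a single frame to a scalar score, and the names `occlusion_map` and `explain_video` are our own.

```python
import numpy as np

def occlusion_map(model_fn, frame, patch=8, stride=8, fill=0.0):
    """Occlusion sensitivity for one video frame.

    model_fn : callable mapping an (H, W) array to a scalar score
               (assumed interface; a real model would wrap a network's
               forward pass for the class of interest).
    Returns a heatmap where high values mark regions whose occlusion
    lowers the model's score the most.
    """
    h, w = frame.shape
    base = model_fn(frame)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = frame.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask one region
            heat[i, j] = base - model_fn(occluded)     # score drop
    return heat

def explain_video(model_fn, frames, **kw):
    """Naive extension to video: explain each frame independently."""
    return [occlusion_map(model_fn, f, **kw) for f in frames]
```

Explaining each frame independently ignores temporal context, which is exactly the kind of limitation this project would investigate when moving from image to video explainability.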
Learning outcome
Deep learning, video analysis, explainable AI, experimental research.
Qualifications
- Basic Python programming skills.
- Some knowledge of machine learning and statistics could be an advantage.
Supervisors
- Pål Halvorsen
- Michael Riegler
- Steven Hicks