Improving the evaluation of the performance of machine learning methods
The performance of a machine learning method depends on the initialization of the numerical training algorithm, the choice of hyperparameters, and other settings. When evaluating the performance of a machine learning method, these sources of variability are usually not taken into account in a systematic way. In this project we will study how much they affect the resulting performance, and explore techniques to improve the evaluation of performance under these variations.
The project will work with real-life data, for example images from the medical domain.
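To make the starting point concrete, here is a minimal sketch (not part of the project description) of how the measured performance of a single method can shift with the random seed and a hyperparameter choice; the dataset, model, and settings are placeholders chosen only for illustration:

```python
# Illustration: the same method, evaluated with different seeds and learning
# rates, gives noticeably different test accuracies. Dataset/model are placeholders.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for lr in (1e-3, 1e-2):                 # hyperparameter: learning rate
    for seed in range(5):               # initial value: weight-init / shuffling seed
        clf = MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=lr,
                            max_iter=300, random_state=seed)
        clf.fit(X_train, y_train)
        print(f"lr={lr:g} seed={seed}: test accuracy = "
              f"{clf.score(X_test, y_test):.3f}")
```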
Goal
- Understand how optimization settings, hyperparameters and other choices affect the performance of a machine learning method
- Develop evaluation methods that take these sources of variability into account and ultimately yield more precise measures of performance (see the sketch after this list)
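One possible starting point for such an evaluation method, sketched here as an assumption rather than the project's prescribed approach, is to repeat training over several seeds and report the mean performance together with a bootstrap confidence interval, so that the seed-induced variability becomes part of the reported measure:

```python
# Sketch of one candidate protocol: aggregate repeated runs of the same method
# into a mean score with a bootstrap confidence interval.
import numpy as np

def summarize(scores, n_boot=10_000, alpha=0.05, rng_seed=0):
    """Mean and bootstrap (1 - alpha) confidence interval over repeated runs."""
    rng = np.random.default_rng(rng_seed)
    scores = np.asarray(scores, dtype=float)
    boot_means = rng.choice(scores, size=(n_boot, scores.size),
                            replace=True).mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)

# Hypothetical test accuracies from five seeds of the same method:
mean, (lo, hi) = summarize([0.91, 0.93, 0.90, 0.94, 0.92])
print(f"accuracy = {mean:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```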
Learning outcome
Machine learning, deep learning, performance evaluation, uncertainty quantification
Qualifications
Programming skills and some experience with machine learning and statistics would be an advantage
Supervisors
- Michael Riegler
- Hugo Hammer