Improving the evaluation of the performance in machine learning methods

Evaluating the performance of a machine learning method is an indispensable part of modern machine learning. In this project we will explore techniques to improve this evaluation.

The performance of a machine learning method depends on the initial values used by the numerical training algorithm, the choice of hyperparameters, and other settings. When evaluating the performance of a machine learning method, these sources of variability are usually not taken into account systematically. In this project we will study how much they affect the resulting performance, and explore techniques to improve the evaluation of performance under these variations.
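To make the effect concrete, the sketch below trains the same simple model several times, changing only the random seed that controls the initial weights. All data and model details are hypothetical stand-ins (a small logistic regression on synthetic data, not the medical images mentioned below); the point is only that repeated runs give a spread of accuracies, not a single number.

```python
import numpy as np

def make_data(rng, n=400):
    # Synthetic two-class data (hypothetical stand-in for real data)
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def train_logreg(X, y, seed, epochs=200, lr=0.1):
    # The seed controls the random initial value of the weights
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                           # gradient of the log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

data_rng = np.random.default_rng(0)
X_tr, y_tr = make_data(data_rng)
X_te, y_te = make_data(data_rng)

accs = []
for seed in range(10):  # repeat training with different initial values
    w, b = train_logreg(X_tr, y_tr, seed)
    acc = (((X_te @ w + b) > 0).astype(float) == y_te).mean()
    accs.append(acc)

print(f"mean accuracy {np.mean(accs):.3f} +/- {np.std(accs):.3f} over {len(accs)} seeds")
```

Reporting the mean and spread over seeds, rather than the accuracy of a single run, is the simplest version of the kind of evaluation the project aims to improve.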

The project will work with real-life data, for example images from the medical domain.


  • Understand how optimization settings, hyperparameters, etc. affect the performance of a machine learning method
  • Develop methods for evaluating machine learning performance that take these sources of variability into account and thereby achieve more precise measures of performance
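One simple way to turn run-to-run variability into a more precise performance measure, sketched below, is to bootstrap over the scores from repeated runs. The list of accuracies is hypothetical illustrative data, and the bootstrap interval is just one possible technique among those the project could explore.

```python
import random
import statistics

# Hypothetical accuracies from repeated runs with different seeds/hyperparameters
runs = [0.84, 0.87, 0.83, 0.86, 0.88, 0.85, 0.82, 0.86]

mean = statistics.mean(runs)

# Bootstrap: resample the run-level scores with replacement many times,
# and read off a 95% percentile interval for the mean performance
rng = random.Random(0)
boot = sorted(
    statistics.mean(rng.choices(runs, k=len(runs))) for _ in range(2000)
)
lo, hi = boot[int(0.025 * 2000)], boot[int(0.975 * 2000)]
print(f"accuracy {mean:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

An interval like this communicates both the expected performance and its uncertainty, which is more informative than a single-run score.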

Learning outcome

Machine learning, deep learning, performance evaluation, uncertainty quantification


Programming skills and some experience with machine learning and statistics could be an advantage.


  • Michael Riegler
  • Hugo Hammer

Contact person