Parameter Estimation Using Approximate Bayesian Computation

In this project we will explore techniques for estimating the parameters of a model when the likelihood or loss function cannot be computed. A popular framework is approximate Bayesian computation (ABC), and we will explore new estimation methods within this framework.
Level: Master

Parameter estimation, or model training, is at the core of statistics and machine learning. The most common approach is to maximize the likelihood, or to minimize some other loss function, over the training data. However, for some methods, for example generative adversarial networks, this approach is not feasible because the likelihood or loss cannot be computed.

A popular alternative is approximate Bayesian computation. The core idea is to generate samples from the model under different choices of the model parameters and to compare these samples against the training data. The goal is to find parameter values for which the generated samples are similar to the training data. However, how to compare generated samples with training data is usually not straightforward. In this project we will explore new methods for making such comparisons. In particular, we will explore a concept from statistics called data depth.
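To make the procedure concrete, the following is a minimal sketch of ABC rejection sampling on a toy Gaussian model. The model, prior, summary statistics, and tolerance here are illustrative assumptions, not part of the project description; the depth-based comparison this project targets would replace the simple summary-statistic distance used below.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": observations from a model whose mean we pretend not to know.
observed = rng.normal(loc=2.0, scale=1.0, size=200)

def summary(x):
    # Summary statistics used to compare a generated sample with the data.
    return np.array([x.mean(), x.std()])

def distance(x, y):
    # Euclidean distance between summary statistics; the project would
    # replace this comparison with one based on data depth.
    return np.linalg.norm(summary(x) - summary(y))

accepted = []
tolerance = 0.1
for _ in range(10_000):
    theta = rng.uniform(-5.0, 5.0)           # draw a parameter from the prior
    simulated = rng.normal(theta, 1.0, 200)  # generate samples from the model
    if distance(simulated, observed) < tolerance:
        accepted.append(theta)               # keep parameters that reproduce the data

print(f"accepted {len(accepted)} draws, posterior mean ~ {np.mean(accepted):.2f}")
```

The accepted parameter values form an approximate posterior sample; tightening the tolerance trades acceptance rate for approximation accuracy.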

Goal

Evaluate the potential of estimating model parameters using data depth within the ABC framework.

Learning outcome

  • Interdisciplinary research
  • Parameter estimation
  • Approximate Bayesian computation
  • Statistics

Qualifications

Hard-working, motivated, and interested in learning (the rest can be learned during the thesis work).

Supervisors

  • Michael Riegler
  • Hugo Hammer

References

Beaumont, M. A. (2019). Approximate Bayesian Computation. Annual Review of Statistics and Its Application, 6, 379-403.

Hammer, H. L., Yazidi, A., Bratterud, A., Haugerud, H., & Feng, B. (2018). A Queue Model for Reliable Forecasting of Future CPU Consumption. Mobile Networks and Applications, 23(4), 840-853.

Contact person