Evaluation of privacy preservation in generative models

This project aims to evaluate the privacy properties of generative models, specifically Generative Adversarial Networks (GANs), by exploring approaches to measuring data leakage in synthetic data.

Generative models have demonstrated an impressive ability to learn from a dataset of images and generate new images with similar characteristics. These models have been suggested as a way to replace private data with synthetic data: a model is trained on the privacy-restricted dataset, and only the synthetic samples it generates are shared. However, it is unclear whether some of the private information from the original dataset might leak into the synthetic data. In this project, we aim to explore approaches to evaluating this risk.
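
As a concrete illustration of one possible approach, the sketch below compares nearest-neighbour distances: if synthetic samples lie systematically closer to the training images than to held-out images from the same distribution, the model may be memorising, and therefore leaking, training data. This is a minimal sketch assuming NumPy; the function name, the toy data, and the distance-ratio summary are illustrative assumptions, not the project's prescribed method.

    import numpy as np

    def nn_distances(queries, references):
        """Euclidean distance from each query to its nearest reference sample."""
        # Flatten images to vectors of shape (n, d) and (m, d).
        q = queries.reshape(len(queries), -1)
        r = references.reshape(len(references), -1)
        # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
        d2 = (q ** 2).sum(1)[:, None] - 2 * q @ r.T + (r ** 2).sum(1)[None, :]
        return np.sqrt(np.maximum(d2, 0).min(axis=1))

    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 32 * 32))      # stand-in for training images
    holdout = rng.normal(size=(500, 32 * 32))    # stand-in for held-out images
    synthetic = rng.normal(size=(500, 32 * 32))  # stand-in for GAN samples

    # A ratio well below 1 suggests synthetic samples hug the training data.
    ratio = (np.median(nn_distances(synthetic, train))
             / np.median(nn_distances(synthetic, holdout)))
    print("Median nearest-neighbour distance ratio (train/holdout):", ratio)

In practice such distances are often computed in a learned feature space rather than on raw pixels, but the comparison logic stays the same.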

Goal

Develop and implement methods to measure the leakage of private data into generated data.

Learning outcome

  • Implement and train generative models (a minimal training-loop sketch follows this list).
  • Develop and analyse methods for measuring the leakage of private data into generated data.
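
To make the first outcome concrete, here is the bare structure of an alternating GAN update, written as a minimal sketch assuming PyTorch; the architectures, dimensions, and hyper-parameters are placeholders rather than recommendations for the project.

    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 32 * 32

    # Placeholder fully connected generator and discriminator.
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, img_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))  # outputs a logit

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real):
        """One alternating update: discriminator first, then generator."""
        batch = real.size(0)
        fake = G(torch.randn(batch, latent_dim))

        # Discriminator: push real towards 1, fake towards 0
        # (fake is detached so this step leaves G untouched).
        opt_d.zero_grad()
        d_loss = (loss_fn(D(real), torch.ones(batch, 1))
                  + loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator: try to make the discriminator output 1 on fakes.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(batch, 1))
        g_loss.backward()
        opt_g.step()

    train_step(torch.rand(16, img_dim) * 2 - 1)  # toy batch scaled to [-1, 1]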

Qualifications

  • Experience with deep learning models, preferably including image generative models such as GANs.

Supervisors

  • Hugo Hammer
  • Vajira Thambawita
