Image-to-image translation (I2I) refers to a fascinating family of methods that translate an image from an input domain to an output domain. For example, the input can be an image of a summer landscape, and the output an image of how the same landscape could look during winter. Or the input could be a medical image of a healthy patient, and the output could show how the image would look if the patient had some disease. Naturally, we expect variation in how the output images should look: the winter images can show little or much snow, and the medical output images can show different stages of a disease.
In this project, we will analyze how well different I2I methods represent the variability in the output images. We will evaluate the methods both on real data and on real data with a synthetic overlay, where the correct variability in the output images is known exactly. We will also investigate how I2I methods can be modified to potentially improve the variability of the output images.
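To make the idea of measuring output variability concrete, the sketch below uses a hypothetical stand-in for a multimodal I2I generator G(x, z), where a latent code z controls the style of the translated output. The functions `toy_generator` and `output_variability` are illustrative names, not part of any existing library; a real study would replace the toy generator with a trained model (e.g. a GAN or diffusion-based translator) and compare the measured variability against the known ground-truth variability of the synthetic overlay.

```python
import numpy as np

def toy_generator(x, z, strength=0.5):
    # Hypothetical stand-in for a multimodal I2I generator G(x, z):
    # the latent code z controls the "style" of the translated output
    # (e.g. how much snow is added to a summer landscape).
    return np.clip(x + strength * z, 0.0, 1.0)

def output_variability(x, n_samples=100, seed=0):
    # Translate the same input with several sampled latent codes, then
    # summarize the per-pixel standard deviation across the outputs.
    rng = np.random.default_rng(seed)
    outputs = np.stack([
        toy_generator(x, rng.normal(size=x.shape))
        for _ in range(n_samples)
    ])
    return float(outputs.std(axis=0).mean())

# Toy grayscale "input image": a flat mid-gray patch.
x = np.full((8, 8), 0.5)
print(f"mean per-pixel std across samples: {output_variability(x):.3f}")
```

A model that collapses to a single output mode would yield a variability score near zero here, whereas a well-calibrated multimodal model would match the variability present in the reference data.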
Goal
Develop and evaluate techniques for I2I translation.
Learning outcome
- Machine learning
- Image-to-image (I2I) translation methods
- Real-world applications
Qualifications
- Python programming
- Knowledge of machine learning is an advantage
Supervisors
- Hugo Hammer
- Michael Riegler
References
- Pang, Y., Lin, J., Qin, T., & Chen, Z. (2021). Image-to-image translation: Methods and applications. IEEE Transactions on Multimedia, 24, 3859-3881.