Polyp segmentation using synthetic images generated by conditional GANs
Segmenting polyp locations in gastrointestinal (GI) tract images that contain polyps is an important task in GI tract analysis. Supervised deep learning methods for image segmentation require a large dataset of images with corresponding segmentation labels. Normally, the mask (segmentation label) of an image is created by manual annotation, which is a time-consuming process. Conditional Generative Adversarial Networks (CGANs), one of the state-of-the-art deep generative architectures, generate synthetic images conditioned on a given input image or other supporting data. A CGAN can therefore be used to generate a synthetic dataset for supervised polyp segmentation. Moreover, the same GAN architectures can also be applied to the polyp segmentation task itself.
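To make the CGAN idea concrete: the generator G produces an output conditioned on an input (e.g. a polyp mask), and the discriminator D scores (input, output) pairs as real or fake. The adversarial objective can be sketched as plain loss functions; the code below is a minimal, framework-free illustration (NumPy only, not part of the project description), where `d_real` and `d_fake` stand for the discriminator's probabilities on real pairs D(x, y) and generated pairs D(x, G(x)).

```python
import numpy as np

def cgan_losses(d_real, d_fake, eps=1e-7):
    """Binary cross-entropy form of the conditional GAN objective.

    d_real: discriminator probabilities D(x, y) on real (input, target) pairs
    d_fake: discriminator probabilities D(x, G(x)) on generated pairs
    Returns (d_loss, g_loss). The discriminator minimises d_loss,
    i.e. maximises log D(x, y) + log(1 - D(x, G(x))); the generator
    uses the usual non-saturating form, maximising log D(x, G(x)).
    """
    d_real = np.clip(d_real, eps, 1 - eps)  # avoid log(0)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    d_loss = -(np.log(d_real).mean() + np.log(1 - d_fake).mean())
    g_loss = -np.log(d_fake).mean()
    return d_loss, g_loss
```

A well-trained discriminator (d_real near 1, d_fake near 0) yields a low d_loss but a high g_loss, which is the pressure that drives the generator to produce more realistic conditioned outputs. Practical CGANs such as pix2pixHD (referenced below) add further terms (e.g. feature-matching losses) on top of this core objective.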
The goal of this research is to generate synthetic polyp masks and corresponding synthetic polyp images using GAN architectures. The GAN networks will then be trained for segmentation and compared with U-Net-based segmentation architectures.
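Comparing GAN-based and U-Net-based segmentation models requires a common evaluation metric; the Dice coefficient is the standard choice for polyp segmentation. The exact evaluation protocol is not fixed by this proposal, so the following is just a minimal sketch of the metric on binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).

    pred, target: arrays of 0/1 (or bool) values of the same shape.
    Returns a value in [0, 1]; 1 means the masks overlap perfectly.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Averaging this score over a held-out test set gives a single number per model, so GAN-based segmentation and the U-Net baselines can be ranked directly.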
- Applying a deep generative model for solving medical domain problems.
- Getting hands-on experience with image segmentation tasks.
- Evaluating and comparing deep learning models.
- Python programming
- Knowledge about deep learning and deep generative models is an advantage
- Pål Halvorsen
- Michael Riegler
- Vajira Thambawita
Simula Metropolitan Center For Digital Engineering AS
Progressively Growing Generative Adversarial Networks for High-Resolution Semantic Segmentation of Satellite Images - https://arxiv.org/pdf/1902.04604.pdf
Abnormal Colon Polyp Image Synthesis Using Conditional Adversarial Networks for Improved Detection Performance - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8478237
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs - https://arxiv.org/pdf/1711.11585.pdf, https://github.com/NVIDIA/pix2pixHD, http://www.vision.ee.ethz.ch/ntire18/talks/Ming-YuLiu_pix2pixHD_NTIRE2018talk.pdf