Development of generalizable DL models for gastrointestinal disease segmentation

The detection of abnormalities in the gastrointestinal (GI) tract can reduce the risk of colorectal cancer and enable successful treatment. Towards this goal, we aim to build a generalizable and robust machine learning system that can improve healthcare by automatically segmenting and detecting diseases and instruments inside the GI tract.
Master

We aim to create and benchmark gastrointestinal tract and instrument datasets for automatic segmentation tasks. To address the demand for annotated medical datasets, we will create a new dataset with the help of medical experts and make it publicly available to the multimedia community. Additionally, we will provide baseline algorithms and the evaluation parameters that can be used to test and compare algorithms against each other. We will further analyze the results of MedicoTask 2018 and try to improve on them.
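The exact evaluation protocol will be defined as part of the project, but segmentation benchmarks of this kind are commonly compared using overlap metrics such as the Dice coefficient and intersection over union (IoU). Below is a minimal Python/NumPy sketch of these two metrics; the function names (dice_coefficient, iou) and the tiny example masks are illustrative assumptions, not part of any released benchmark.

```python
import numpy as np

def dice_coefficient(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def iou(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (Jaccard index) between two binary masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)

# Example: compare a predicted mask against a ground-truth annotation.
gt = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]])
print(f"Dice: {dice_coefficient(gt, pred):.3f}, IoU: {iou(gt, pred):.3f}")
```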

Goal

To develop a fully automated system for gastrointestinal tract disease segmentation.
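The referenced U-Net architecture (Ronneberger et al., 2015) is a natural baseline for this goal. The following is a minimal, illustrative Keras/TensorFlow sketch of a small U-Net-style encoder-decoder, assuming 256×256 RGB frames and a binary disease/background mask; the depth, filter counts, and loss are placeholder choices, not the project's prescribed configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in the U-Net contracting path."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), num_classes=1):
    """A small U-Net-style encoder-decoder for binary mask prediction."""
    inputs = layers.Input(shape=input_shape)

    # Encoder: repeated conv blocks with max pooling.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsampling with skip connections from the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # Per-pixel sigmoid for a binary disease/background mask.
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```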

Learning outcome

  • Deep understanding of semantic segmentation-based approaches
  • Working on a real-world application
  • Opportunity to collaborate with researchers
  • Opportunity to implement and research novel approaches
  • Opportunity to participate in challenges and conferences

Qualifications

  • Experience with Python programming
  • Understanding of machine learning
  • Experience with Keras and TensorFlow

Supervisors

  • Pål Halvorsen
  • Michael Riegler
  • Debesh Jha

Collaboration partners

Simula Metropolitan Center for Digital Engineering AS

References

Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (pp. 234-241). Springer, Cham.

Allan, M., Shvets, A., Kurmann, T., Zhang, Z., Duggal, R., Su, Y. H., ... & Herrera, L. (2019). 2017 Robotic instrument segmentation challenge. arXiv preprint arXiv:1902.06426.