| Field | Value |
|---|---|
| Authors | D. Jha, N. K. Tomar, S. Ali, M. Riegler, H. D. Johansen, D. Johansen, T. de Lange and P. Halvorsen |
| Title | NanoNet: Real-Time Polyp Segmentation in Video Capsule Endoscopy and Colonoscopy |
| Project(s) | Department of Holistic Systems |
| Publication Type | Proceedings, refereed |
| Year of Publication | 2021 |
| Conference Name | 34th IEEE CBMS International Symposium on Computer-Based Medical Systems |
| Keywords | colonoscopy, deep learning, segmentation, tool segmentation, video capsule endoscopy |
Deep learning in gastrointestinal endoscopy can help improve clinical performance and support more accurate assessment of lesions. To this end, semantic segmentation methods that perform automated real-time delineation of a region of interest, e.g., boundary identification of cancerous or precancerous lesions, can benefit both diagnosis and interventions. However, accurate real-time segmentation of endoscopic images is extremely challenging due to high operator dependence and high-definition image quality. To deploy automated methods in clinical settings, it is crucial to design lightweight, low-latency models that can be integrated with low-end endoscope hardware. In this work, we propose NanoNet, a novel architecture for the segmentation of video capsule endoscopy and colonoscopy images. Our architecture achieves real-time performance with higher segmentation accuracy than other, more complex architectures. We evaluate our approach on video capsule endoscopy and standard colonoscopy datasets with polyps, and on a dataset of endoscopic biopsies and surgical instruments. Our experiments demonstrate the strength of our architecture in terms of the trade-off between model complexity, speed, number of parameters, and metric performance. Moreover, the resulting model is tiny, with only about 36,000 parameters, compared to traditional deep learning approaches with millions of parameters.
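The abstract does not detail how NanoNet reaches a parameter count as low as roughly 36,000, so as background, the sketch below illustrates one standard mechanism that lightweight segmentation architectures commonly rely on: replacing standard convolutions with depthwise-separable convolutions. The channel sizes here are hypothetical and chosen only to make the arithmetic concrete; this is not the NanoNet architecture itself.

```python
# Illustration (not the authors' architecture): parameter counts of a standard
# k x k convolution versus a depthwise-separable one. Channel sizes are
# hypothetical examples.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights plus biases of a standard k x k convolution."""
    return c_in * c_out * k * k + c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """A depthwise k x k convolution followed by a pointwise 1 x 1 convolution."""
    depthwise = c_in * k * k + c_in      # one k x k filter per input channel
    pointwise = c_in * c_out + c_out     # 1 x 1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, k = 32, 64, 3
std = standard_conv_params(c_in, c_out, k)          # 18,496 parameters
sep = depthwise_separable_params(c_in, c_out, k)    # 2,432 parameters
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

A roughly 7.6x reduction per layer of this kind is what allows compact models to stay in the tens of thousands of parameters while conventional encoder-decoder networks reach into the millions.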