TY - GEN
T1 - Exploiting the Transferability of Deep Learning Systems across Multi-modal Retinal Scans for Extracting Retinopathy Lesions
AU - Hassan, Taimur
AU - Akram, Muhammad Usman
AU - Werghi, Naoufel
N1 - Funding Information:
This work is supported by a research fund from Khalifa University: Ref: CIRA-2019-047.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - Retinal lesions play a vital role in the accurate classification of retinal abnormalities. Many researchers have proposed deep lesion-aware screening systems that analyze and grade the progression of retinopathy. However, to the best of our knowledge, no existing work examines the ability of these systems to generalize across multiple scanner specifications and multi-modal imagery. Towards this end, this paper presents a detailed evaluation of semantic segmentation, scene parsing and hybrid deep learning systems for extracting retinal lesions such as intra-retinal fluid, sub-retinal fluid, hard exudates, drusen, and other chorioretinal anomalies from fused fundus and optical coherence tomography (OCT) imagery. Furthermore, we present a novel strategy exploiting the transferability of these models across multiple retinal scanner specifications. A total of 363 fundus and 173,915 OCT scans from seven publicly available datasets were used in this research (of which 297 fundus and 59,593 OCT scans were used for testing purposes). Overall, a hybrid retinal analysis and grading network (RAGNet), built on a ResNet50 backbone, performed best at extracting the retinal lesions, achieving a mean Dice coefficient score of 0.822. Moreover, the complete source code and its documentation are released at http://biomisa.org/index.php/downloads/.
KW - Convolutional Neural Networks
KW - Fundus Photography
KW - Ophthalmology
KW - Optical Coherence Tomography
KW - Retinal Lesions
UR - http://www.scopus.com/inward/record.url?scp=85099592689&partnerID=8YFLogxK
U2 - 10.1109/BIBE50027.2020.00099
DO - 10.1109/BIBE50027.2020.00099
M3 - Conference contribution
AN - SCOPUS:85099592689
T3 - Proceedings - IEEE 20th International Conference on Bioinformatics and Bioengineering, BIBE 2020
SP - 577
EP - 581
BT - Proceedings - IEEE 20th International Conference on Bioinformatics and Bioengineering, BIBE 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 20th IEEE International Conference on Bioinformatics and Bioengineering, BIBE 2020
Y2 - 26 October 2020 through 28 October 2020
ER -