TY - JOUR
T1 - Points of interest and visual dictionaries for automatic retinal lesion detection
AU - Rocha, Anderson
AU - Carvalho, Tiago
AU - Jelinek, Herbert F.
AU - Goldenstein, Siome
AU - Wainer, Jacques
N1 - Funding Information:
Manuscript received December 26, 2011; revised April 9, 2012; accepted May 19, 2012. Date of publication May 30, 2012; date of current version July 18, 2012. This work was supported by Microsoft Research and Fapesp.
PY - 2012
Y1 - 2012
N2 - In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions in fundus images based on a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists as containing lesions associated with DR, and classifies the fundus images as normal or showing DR-related pathology based on the presence or absence of these PoIs. The novelty of our approach lies in locating DR lesions in optic fundus images using visual words that combine feature information contained within the images, in a framework easily extensible to different types of retinal lesions or pathologies, and in building a specific projection space for each class of interest (e.g., white lesions such as exudates, or normal regions) instead of a common dictionary for all classes. The visual word dictionary was applied to classifying bright and red lesions with classical cross-validation and cross-dataset validation to demonstrate the robustness of this approach. We obtained an area under the curve (AUC) of 95.3 for white lesion detection and an AUC of 93.3 for red lesion detection using fivefold cross-validation on our own data, consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For the cross-dataset analysis, the visual dictionary also achieves compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, image classification resulted in an AUC of 88.1 on the RetiDB dataset and an AUC of 89.3 on the Messidor dataset, in both cases for bright lesion detection. 
The results indicate the potential for training on images acquired under different setup conditions while maintaining a high accuracy of referral based on the presence of red lesions, bright lesions, or both. The robustness of the visual dictionary against variations in image quality (blurring), resolution, and retinal background makes it a strong candidate for DR screening of large, diverse communities with varying cameras, settings, and levels of expertise in image capture.
AB - In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions in fundus images based on a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists as containing lesions associated with DR, and classifies the fundus images as normal or showing DR-related pathology based on the presence or absence of these PoIs. The novelty of our approach lies in locating DR lesions in optic fundus images using visual words that combine feature information contained within the images, in a framework easily extensible to different types of retinal lesions or pathologies, and in building a specific projection space for each class of interest (e.g., white lesions such as exudates, or normal regions) instead of a common dictionary for all classes. The visual word dictionary was applied to classifying bright and red lesions with classical cross-validation and cross-dataset validation to demonstrate the robustness of this approach. We obtained an area under the curve (AUC) of 95.3 for white lesion detection and an AUC of 93.3 for red lesion detection using fivefold cross-validation on our own data, consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For the cross-dataset analysis, the visual dictionary also achieves compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, image classification resulted in an AUC of 88.1 on the RetiDB dataset and an AUC of 89.3 on the Messidor dataset, in both cases for bright lesion detection. 
The results indicate the potential for training on images acquired under different setup conditions while maintaining a high accuracy of referral based on the presence of red lesions, bright lesions, or both. The robustness of the visual dictionary against variations in image quality (blurring), resolution, and retinal background makes it a strong candidate for DR screening of large, diverse communities with varying cameras, settings, and levels of expertise in image capture.
KW - diabetes automated screening
KW - Diabetic retinopathy (DR)
KW - hard exudate detection
KW - hemorrhage detection
KW - microaneurysm detection
KW - red and bright lesion classification
KW - visual dictionaries
UR - http://www.scopus.com/inward/record.url?scp=84864225627&partnerID=8YFLogxK
U2 - 10.1109/TBME.2012.2201717
DO - 10.1109/TBME.2012.2201717
M3 - Article
C2 - 22665502
AN - SCOPUS:84864225627
SN - 0018-9294
VL - 59
SP - 2244
EP - 2253
JO - IEEE Transactions on Biomedical Engineering
JF - IEEE Transactions on Biomedical Engineering
IS - 8
M1 - 6208828
ER -