
Journal of Imaging

Review

Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey

Vasudevan Lakshminarayanan 1,* , Hoda Kheradfallah 1, Arya Sarkar 2 and Janarthanam Jothi Balaji 3

1 Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada; [email protected]
2 Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India; [email protected]
3 Department of Optometry, Medical Research Foundation, Chennai 600 006, India; [email protected]
* Correspondence: [email protected]

Abstract: Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Next, deep-learning (DL)- and machine-learning (ML)-based approaches make it possible to extract features from the images, detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature dealing with AI approaches to DR, such as ML and DL in classification and segmentation, that has been published in the open literature within six years (2016–2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.

Keywords: diabetic retinopathy; artificial intelligence; deep learning; machine learning; datasets; fundus image; optical coherence tomography; ophthalmology

Citation: Lakshminarayanan, V.; Kheradfallah, H.; Sarkar, A.; Balaji, J.J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J. Imaging 2021, 7, 165. https://doi.org/10.3390/jimaging7090165

Academic Editor: Reyer Zwiggelaar
Received: 30 June 2021; Accepted: 24 August 2021; Published: 27 August 2021
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Diabetic retinopathy (DR) is a major cause of irreversible visual impairment and blindness worldwide [1]. The etiology of DR is chronic high blood glucose levels, which cause retinal capillary damage; the disease mainly affects the working-age population. DR begins at a mild level with no apparent visual symptoms, but it can progress to severe and proliferative levels, and progression of the disease can lead to blindness. Thus, early diagnosis and regular screening can decrease the risk of visual loss by 57.0%, as well as decreasing the cost of treatment [2].

DR is clinically diagnosed through observation of the retinal fundus, either directly or through imaging techniques such as fundus photography or optical coherence tomography. There are several standard DR grading systems, such as the Early Treatment Diabetic Retinopathy Study (ETDRS) [3] scale. ETDRS separates fine-detailed DR characteristics using multiple levels, and this grading is done over all seven retinal fundus Fields of View (FOV). Although ETDRS [4] is the gold standard, due to implementation complexity and technical limitations [5], alternative grading systems are also used, such as the International Clinical Diabetic Retinopathy (ICDR) [6] scale, which is accepted in both clinical and Computer-Aided Diagnosis (CAD) settings [7]. The ICDR scale defines 5 severity levels for DR and 4 levels for Diabetic Macular Edema (DME) and requires fewer FOVs [6]. The ICDR levels are discussed below and are illustrated in Figure 1.


Figure 1. Retinal fundus images of different stages of diabetic retinopathy. (A) Stage II: Mild non-proliferative diabetic retinopathy; (B) Stage III: Moderate non-proliferative diabetic retinopathy; (C) Stage IV: Severe non-proliferative diabetic retinopathy; (D) Stage V: Proliferative diabetic retinopathy (images courtesy of Rajiv Raman et al., Sankara Nethralaya, India).

• No Apparent Retinopathy: no abnormalities.
• Mild Non-Proliferative Diabetic Retinopathy (NPDR): this is the first stage of diabetic retinopathy, characterized by tiny areas of swelling in retinal blood vessels known as Microaneurysms (MA) [8]. There is an absence of profuse bleeding in the retinal nerves, and if DR is detected at this stage, proper medical treatment can help save the patient's eyesight (Figure 1A).
• Moderate NPDR: when left unchecked, mild NPDR progresses to a moderate stage in which blood leaks from the blocked retinal vessels. Additionally, at this stage, Hard Exudates (Ex) may be present (Figure 1B). Furthermore, the dilation and constriction of venules in the retina causes Venous Beading (VB), which is visible ophthalmoscopically [8].
• Severe NPDR: a larger number of retinal blood vessels are blocked at this stage, causing over 20 Intra-retinal Hemorrhages (IHE; Figure 1C) across all 4 fundus quadrants, or there are Intra-Retinal Microvascular Abnormalities (IRMA), which can be seen as bulges of thin vessels. IRMA appear as small, sharp-bordered red spots in at least one quadrant. Furthermore, there can be definite evidence of VB in over 2 quadrants [8].
• Proliferative Diabetic Retinopathy (PDR): this is an advanced stage of the disease that occurs when the condition is left unchecked for an extended period of time. New

blood vessels form in the retina, a condition termed Neovascularization (NV). These blood vessels are often fragile, with a consequent risk of fluid leakage and proliferation of fibrous tissue [8]. Various functional visual problems occur in PDR, such as blurriness, reduced field of vision, and, in some cases, complete blindness (Figure 1D).

DR detection has two main steps: screening and diagnosis. For this purpose, the fine pathognomonic signs of early DR are normally determined after dilating the pupils (mydriasis). DR screening is then performed through slit-lamp biomicroscopy with a +90.0 D lens and direct [9] or indirect ophthalmoscopy [10]. The next step is to diagnose DR, which is done by finding DR-associated lesions and comparing them with the criteria of a standard grading system. Currently, the diagnosis step is done manually. This procedure is costly and time consuming and requires highly trained clinicians with considerable experience and diagnostic precision. Even when all these resources are available, there is still the possibility of misdiagnosis [11]. This dependency on manual evaluation makes the situation challenging. In 2020, the number of adults worldwide with DR and with vision-threatening DR was estimated at 103.12 million and 28.54 million, respectively; by 2045, these numbers are projected to increase to 160.50 million and 44.82 million [12]. The problem is aggravated in developing countries, where there is a shortage of ophthalmologists [13,14] as well as limited access to standard clinical facilities; it also exists in underserved areas of the developed world.

Recent developments in CAD techniques, which fall within the scope of artificial intelligence (AI), are becoming more prominent in modern ophthalmology [15], as they can save time, cost and human resources in routine DR screening and involve lower diagnostic error [15].
CAD can also efficiently manage the increasing number of afflicted DR patients [16] and diagnose DR in early stages, when fewer sight-threatening effects are present. AI-based approaches are divided between machine-learning (ML) and deep-learning (DL) solutions. These techniques vary depending on the imaging system and disease severity. For instance, in early levels of DR, super-resolution ultrasound imaging of microvessels [17] is used to visualize the deep ocular vasculature; on this imaging system, a successful CAD method applied a DL model for segmenting lesions on ultrasound images [18]. The widely applied imaging methods, such as Optical Coherence Tomography (OCT), OCT Angiography (OCTA), ultrawide-field fundus (UWF) and standard 45° fundus photography, are covered in this review. In addition to these imaging methods, Majumder et al. [15] reported a real-time DR screening procedure using a smartphone camera. The main purpose of this review is to analyze 114 articles published within the last 6 years that focus on the detection of DR using CAD techniques. These techniques have made considerable progress in performance through ML and DL schemes that employ the latest developments in Deep Convolutional Neural Network (DCNN) architectures for DR severity grading, progression analysis, abnormality detection and semantic segmentation. An overview of ophthalmic applications of convolutional neural networks is presented in [19–22].

2. Methods

2.1. Literature Search Details

For this review, literature from 5 publicly accessible databases was surveyed. The databases were chosen based on their depth, their ease of accessibility, and their popularity. These 5 databases are:

• PubMed: publications from MEDLINE (https://pubmed.ncbi.nlm.nih.gov/, accessed on 14 June 2021)
• IEEE Xplore: IEEE conferences & journals (https://ieeexplore.ieee.org/Xplore/home.jsp, accessed on 14 June 2021)
• PUBLONS: publications from Web of Science (https://publons.com/about/home/, accessed on 14 June 2021)

• SPIE Digital Library: conferences & journals from SPIE (https://www.spiedigitallibrary.org/, accessed on 14 June 2021)
• Google Scholar: conference and journal proceedings from multiple databases (https://scholar.google.co.in/, accessed on 14 June 2021). Google Scholar was chosen to fill gaps in the search strategy by identifying literature from multiple sources, along with articles that might be missed in manual selection from the other four databases.

The articles on this topic within the latest six-year period show that advances in AI-enabled DR detection have increased considerably. Figure 2 visualizes the articles matching this topic. This figure was generated using the PubMed results.

Figure 2. Increase in the number of articles matching the predefined keywords over the last 6 years; the PubMed search results were used to create this figure.

At the time of writing this review, a total of 10,635 search results were listed in the PubMed database for this time period when just the term "diabetic retinopathy" was used. The MEDLINE database is arguably the largest for biomedical research. In addition, some resources from the National Library of Medicine, which is a part of the U.S. National Institutes of Health, were employed in this review. A search of the IEEE Xplore library and the SPIE Digital Library for the given time period returned a total of 812 and 332 results, respectively. The IEEE Xplore and SPIE libraries contain only publications of these two professional societies. Further papers were added by collecting from non-traditional sources such as the preprint server arXiv. In Figure 3, using data from all sources, we plot the number of papers published as a function of the year. The scope of this review is limited to "automated detection and grading of diabetic retinopathy using fundus & OCT images". Therefore, to make the search more manageable, a combination of relevant keywords was applied using the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) search strategy [23]. Keywords used in the PICO search were predetermined: a combination of ("DR" AND ("DL" OR "ML" OR "AI")) AND (fundus OR OCT) was used, which reduced the initial 10,635 PubMed search results to just 217 for the period under consideration. A manual process of eliminating duplicate search results across all of the databases resulted in a total of 114 papers.
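The keyword combination above can be expressed programmatically. A minimal sketch of building such a boolean query string follows; the helper function and its parameters are illustrative, not part of the original search protocol:

```python
# Sketch of assembling the PICO-style boolean query used to narrow the search.
# The review combined ("DR" AND ("DL" OR "ML" OR "AI")) AND (fundus OR OCT).

def build_query(disease, methods, modalities):
    """Combine keyword groups into a PubMed-style boolean query string."""
    method_clause = " OR ".join(f'"{m}"' for m in methods)
    modality_clause = " OR ".join(f'"{m}"' for m in modalities)
    return f'("{disease}" AND ({method_clause})) AND ({modality_clause})'

query = build_query(
    "diabetic retinopathy",
    ["deep learning", "machine learning", "artificial intelligence"],
    ["fundus", "optical coherence tomography"],
)
print(query)
```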

Figure 3. A plot of the number of articles as a function of year. This figure was generated using results from all 5 databases using search terms Diabetic Retinopathy AND (“Deep Learning” OR “Machine Learning”).

Overall, the search strategy for identifying relevant research for the review involved three main steps:

1. Using the predefined set of keywords and logical operators, a small set of papers was identified in this time range (2016–2021).
2. Using a manual search strategy, papers falling outside the scope of this review were eliminated.
3. Duplicate articles (i.e., papers occurring in multiple databases) were eliminated to obtain the set of unique articles.

The search strategy followed by this review abides by the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 checklist [24], and the detailed search and identification pipeline is shown in Figure 4.
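The duplicate-elimination step can be sketched as a normalization-and-set pass; the title normalization rule below is an assumption for illustration, not the authors' exact procedure:

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace so that the same
    article retrieved from different databases compares equal."""
    return " ".join(re.sub(r"[^a-z0-9 ]", "", title.lower()).split())

def deduplicate(records):
    """Keep the first occurrence of each article across all databases."""
    seen, unique = set(), []
    for rec in records:
        key = normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical hits: the same article surfaced by two databases.
hits = [
    {"title": "Automated DR grading.", "source": "PubMed"},
    {"title": "Automated DR Grading", "source": "IEEE Xplore"},
]
print(len(deduplicate(hits)))  # the two hits collapse to one record
```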

Figure 4. Flowchart summarizing the literature search and dataset identification using the PRISMA review strategy for identifying articles related to automated detection and diagnosis of diabetic retinopathy.

2.2. Dataset Search Details

The backbone of any automated detection model, whether ML-based, DL-based, or multi-model-based, is the dataset. High-quality data with correct annotations are extremely important for image feature extraction and for properly training the DR detection model. In this review, a comprehensive list of datasets has been created and discussed. A previously published paper [25] also gives a list of ophthalmic image datasets, containing 33 datasets that can be used for training DR detection and grading models. The paper by Khan et al. [25] highlighted 33 of the 43 datasets presented in Table 1. However, some databases which are popular and publicly accessible are not listed by Khan et al. [25], e.g., UoA-DR [26], Longitudinal DR screening data [27], and FGADR [28]. In this review, we identified additional datasets that are available for use. The search strategy for determining relevant DR detection datasets is as follows:

Table 1. Datasets for DR detection, grading and segmentation and their respective characteristics.

| Dataset | No. of Images | Device Used | Access | Country | Year | No. of Subjects | Type | Format | Remarks |
|---|---|---|---|---|---|---|---|---|---|
| DRIVE [29] | 40 | Canon CR5 non-mydriatic 3CCD camera with a 45° FOV | OA | Netherlands | 2004 | 400 | Fundus | JPEG | Retinal vessel segmentation and ophthalmic diseases |
| DIARETDB0 [30] | 130 | 50° FOV DFC | OA | Finland | 2006 | NR | Fundus | PNG | DR detection and grading |
| DIARETDB1 [31] | 89 | 50° FOV DFC | OA | Finland | 2007 | NR | Fundus | PNG | DR detection and grading |
| National Taiwan University Hospital [32] | 30 | Heidelberg retina tomography with Rostock corneal module | OA | Japan | 2007–2017 | 30 | Fundus | TIFF | DR, pseudoexfoliation |
| HEI-MED [33] | 169 | Visucam PRO fundus camera (Zeiss, Germany) | OA | USA | 2010 | 910 | Fundus | JPEG | DR detection and grading |
| CF [34] | 60 | NR | OA | Iran | 2012 | 60 | Fundus | JPEG | DR detection |
| FFA Photographs & CF [35] | 120 | NR | OA | Iran | 2012 | 60 | FFA | JPEG | DR grading and lesion detection |
| Fundus Images with Exudates [36] | 35 | NR | OA | Iran | 2012 | NR | Fundus | JPEG | Lesion detection |
| DRiDB [37] | 50 | Zeiss VISUCAM 200 DFC at a 45° FOV | AUR | Croatia | 2013 | NR | Fundus | BMP files | DR grading |
| eOphtha [38] | 463 | NR | OA | France | 2013 | NR | Fundus | JPEG | Lesion detection |
| Longitudinal DR screening data [27] | 1120 | Topcon TRC-NW65 with a 45° FOV | OA | Netherlands | 2013 | 70 | Fundus | JPEG | DR grading |
| HRF [39] | 45 | CF-60UVi camera (Canon) | OA | Germany and Czech Republic | 2013 | 22 | Fundus | JPEG | DR detection |
| RITE [40] | 40 | Canon CR5 non-mydriatic 3CCD camera with a 45° FOV | AUR | Netherlands | 2013 | Same as DRIVE | Fundus | TIFF | Retinal vessel segmentation and ophthalmic diseases |
| DR1 [41] | 1077 | TRC-50× mydriatic camera | OA | Brazil | 2014 | NR | Fundus | TIFF | DR detection |
| DR2 [41] | 520 | Topcon TRC-NW8 retinography (Topcon) with a D90 camera (Nikon, Japan) | OA | Brazil | 2014 | NR | Fundus | TIFF | DR detection |
| DRIMDB [42] | 216 | CF-60UVi fundus camera (Canon) | OA | Turkey | 2014 | NR | Fundus | JPEG | DR detection and grading |
| FFA Photographs [43] | 70 | NR | OA | Iran | 2014 | 70 | FFA | JPEG | DR grading and lesion detection |
| MESSIDOR 1 [44] | 1200 | Topcon TRC NW6 non-mydriatic retinography, 45° FOV | OA | France | 2014 | NR | Fundus | TIFF | DR and DME grading |
| Lotus eyecare hospital [45] | 122 | Canon non-mydriatic Zeiss fundus camera, 90° FOV | NOA | India | 2014 | NR | Fundus | JPEG | DR detection |
| Srinivasan [46] | 3231 | SD-OCT (Heidelberg Engineering, Germany) | OA | USA | 2014 | 45 | OCT | TIFF | DR detection and grading, DME, AMD |
| EyePACS [47] | 88,702 | Centervue DRS (Centervue, Italy), Optovue iCam (Optovue, USA), Canon CR1/DGi/CR2 (Canon), and Topcon NW (Topcon) | OA | USA | 2015 | NR | Fundus | JPEG | DR grading |
| Rabbani [48] | 24 images & 24 videos | Heidelberg SPECTRALIS OCT HRA system | OA | USA | 2015 | 24 | OCT | TIFF | Diabetic eye diseases |
| DR HAGIS [49] | 39 | TRC-NW6s (Topcon), TRC-NW8 (Topcon), or CR-DGi fundus camera (Canon) | OA | UK | 2016 | 38 | Fundus | JPEG | DR, HT, AMD and glaucoma |
| JICHI DR [50] | 9939 | AFC-230 fundus camera (Nidek) | OA | Japan | 2017 | 2740 | Fundus | JPEG | DR grading |
| Rotterdam Ophthalmic Data Repository DR [51] | 1120 | TRC-NW65 non-mydriatic DFC (Topcon) | OA | Netherlands | 2017 | 70 | Fundus | PNG | DR detection |
| Singapore National DR Screening Program [52] | 494,661 | NR | NOA | Singapore | 2017 | 14,880 | Fundus | JPEG | DR, glaucoma and AMD |
| IDRID [53] | 516 | NR | OA | India | 2018 | NR | Fundus | JPEG | DR grading and lesion segmentation |
| OCTID [54] | 500+ | Cirrus HD-OCT machine (Carl Zeiss Meditec) | OA | Multi-ethnic | 2018 | NR | OCT | JPEG | DR, HT, AMD |
| UoA-DR [55] | 200 | Zeiss VISUCAM 500 fundus camera, 45° FOV | AUR | India | 2018 | NR | Fundus | JPEG | DR grading |
| APTOS [56] | 5590 | DFC | OA | India | 2019 | NR | Fundus | PNG | DR grading |
| CSME [57] | 1445 | NIDEK non-mydriatic AFC-330 auto-fundus camera | NOA | Pakistan | 2019 | NR | Fundus | JPEG | DR grading |
| OCTAGON [58] | 213 | DRI OCT Triton (Topcon) | AUR | Spain | 2019 | 213 | OCTA | JPEG & TIFF | DR detection |
| ODIR-2019 [59] | 8000 | Fundus cameras (Canon, Zeiss, and Kowa) | OA | China | 2019 | 5000 | Fundus | JPEG | DR, HT, AMD and glaucoma |
| OIA-DDR [60] | 13,673 | NR | OA | China | 2019 | 9598 | NR | JPEG | DR grading and lesion segmentation |
| Zhongshan Hospital and First People's Hospital [61] | 19,233 | Multiple colour fundus cameras | NOA | China | 2019 | 5278 | Fundus | JPEG | DR grading and lesion segmentation |
| AGAR300 [62] | 300 | 45° FOV | OA | India | 2020 | 150 | Fundus | JPEG | DR grading and MA detection |
| Bahawal Victoria Hospital [57] | 2500 | Vision Star, 24.1-megapixel Nikon D5200 camera | NOA | Pakistan | 2020 | 500 | Fundus | JPEG | DR grading |
| Retinal Lesions [63] | 1593 | Selected from the EyePACS dataset | AUR | China | 2020 | NR | Fundus | JPEG | DR grading and lesion segmentation |
| Dataset of fundus images for the study of DR [64] | 757 | Visucam 500 camera (Zeiss) | OA | Paraguay | 2021 | NR | Fundus | JPEG | DR grading |
| FGADR [60] | 2842 | NR | OA | UAE | 2021 | NR | Fundus | JPEG | DR and DME grading |
| Optos Dataset (Tsukazaki Hospital) [65] | 13,047 | 200 Tx ultra-wide-field device (Optos, UK) | NOA | Japan | NR | 5389 | Fundus | JPEG | DR, glaucoma, AMD, and other eye diseases |
| MESSIDOR 2 [66] | 1748 | Topcon TRC NW6 non-mydriatic retinography, 45° FOV | AUR | France | NR | 874 | Fundus | TIFF | DR and DME grading |
| Noor hospital [67] | 4142 | Heidelberg SPECTRALIS SD-OCT imaging system | NOA | Iran | NR | 148 | OCT | TIFF | DR detection |

DFC: Digital fundus camera; RFC: Retinal Fundus Camera; FFA: Fundus Fluorescein Angiogram; DR: Diabetic retinopathy; MA: Microaneurysms; DME: Diabetic Macular Edema; FOV: field-of-view; AMD: Age-related Macular Degeneration; OA: Open access; AUR: Access upon request; NOA: Not Open access; CF: Colour Fundus; HT: Hypertension; NR: Not Represented.

1. Appropriate results from all 5 of the selected databases (PubMed, PUBLONS, etc.) were searched manually.
2. We gathered information about the names of datasets for DR detection and grading.
4. The original papers and websites associated with each dataset were analyzed, and a systematic, tabular representation of all available information was created.
5. The Google dataset search and different forums were checked for missing dataset entries, and step 2 was repeated for all original datasets found.
6. A final comprehensive list of datasets and their details was generated and is presented in Table 1.

A total of 43 datasets were identified employing the search strategy given above. Upon further inspection, 30 datasets were identified as open access (OA), i.e., they can be accessed easily without any permission or payment. Of the remainder, 6 have restricted access; these databases can be accessed with the permission of the author or institution. The remaining 7 are private and cannot be accessed. Because of the diversity of their images (multi-national and multi-ethnic groups), these datasets can be used to create generalized models.

3. Results

3.1. Dataset Search Results

This section provides a high-level overview of the search results that were obtained using the datasets, as well as different review articles on datasets in the domain of ophthalmology, e.g., Khan et al. [25]. Moreover, different leads obtained from GitHub and other online forums were also employed in this overview. Thus, 43 datasets were identified, and a general overview of the datasets is systematically presented in this section. The datasets reviewed in this article are not limited to 2016 to 2021 and could have been released before that period. The list of datasets and their characteristics is shown in Table 1. Depending on the restrictions and other proforma required for accessing the datasets, the list has been divided into 3 classes:

• Public open access (OA) datasets with high-quality DR grades.
• DR datasets that can be accessed upon request, i.e., by filling in the necessary agreements and forms for fair usage; they are a sub-type of OA databases and are termed Access Upon Request (AUR) in the table.
• Private datasets from different institutions that are not publicly accessible or require explicit permission to access; these are termed Not Open Access (NOA).

3.2. Diabetic Retinopathy Classification

This section discusses the classification approaches used for DR detection. The classification can be for the detection of DR [68], referable DR (RDR) [66,69], or vision-threatening DR (vtDR) [66], or for analyzing the proliferation level of DR using the ICDR system. Some studies

also considered Diabetic Macular Edema (DME) [69,70]. Recent ML and DL methods have produced promising results in automated DR diagnosis. Multiple performance metrics, such as accuracy (ACC), sensitivity (SE, equivalent to recall), specificity (SP), precision, area under the curve (AUC), and the F1 and Kappa scores, are used to evaluate classification performance. Tables 2 and 3 present a brief overview of articles that used fundus images for DR classification and of articles that classify DR on fundus images using novel preprocessing techniques, respectively. Table 4 lists the recent DR classification studies that used OCT and OCTA images. In the following subsections, we provide the details of ML and DL aspects and evaluate the performance of prior studies in terms of quantitative metrics.
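As an illustration of how these metrics relate, they can be computed with scikit-learn on a toy set of labels (the values below are made up, not data from any reviewed study):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, recall_score, roc_auc_score,
                             f1_score, cohen_kappa_score, confusion_matrix)

# Hypothetical binary DR/no-DR labels and model scores (illustrative only).
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = accuracy_score(y_true, y_pred)
se = recall_score(y_true, y_pred)      # sensitivity = recall
sp = tn / (tn + fp)                    # specificity
auc = roc_auc_score(y_true, y_score)   # AUC uses the raw scores
f1 = f1_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"ACC={acc:.2f} SE={se:.2f} SP={sp:.2f} AUC={auc:.3f} F1={f1:.2f} kappa={kappa:.2f}")
```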

3.2.1. Machine Learning Approaches

In this review, 9 out of 93 classification-based studies employed machine learning approaches, and one article used an unsupervised ML method for detecting and grading DR. Hence, in this section we evaluate the various ML-based feature extraction and decision-making techniques that were employed in the selected primary studies to construct DR detection models. In general, six major distinct ML algorithms were used in these studies: principal component analysis (PCA) [70,71], linear discriminant analysis (LDA)-based feature selection [71], scale-invariant feature transform (SIFT) [71], support vector machine (SVM) [16,71–73], k-nearest neighbor (KNN) [72] and random forest (RF) [74]. In addition to these widely used ML methods, some studies such as [75] presented a pure ML model with an accuracy of over 80%, combining t-distributed Stochastic Neighbor Embedding (t-SNE) for image dimensionality reduction with an ML Bagging Ensemble Classifier (ML-BEC). ML-BEC improves classification performance by using the feature-bagging technique at low computational cost. Ali et al. [57] focused on five fundamental ML models, namely sequential minimal optimization (SMO), logistic (Lg), multi-layer perceptron (MLP), logistic model tree (LMT), and simple logistic (SLg), at the classification level. This study proposed a novel preprocessing method in which the Region of Interest (ROI) of lesions is segmented with a clustering-based method and K-Means; then, features of the histogram, wavelet, grey-level co-occurrence and run-length matrices (GLCM and GLRLM) were extracted from the segmented ROIs. This method outperformed previous models with an average accuracy of 98.83% across the five ML models. Although an ML model such as SLg performs well, its required classification time is 0.38 s on an Intel Core i3 1.9 gigahertz (GHz) CPU with a 64-bit Windows 10 operating system and 8 gigabytes (GB) of memory.
This processing time is higher than in previous studies. ML methods can also be applied to OCT and OCTA images for DR detection. Recently, Liu et al. [76] deployed four ML models, logistic regression (LR), logistic regression regularized with the elastic net penalty (LR-EN), support vector machine (SVM), and the gradient-boosting tree XGBoost, on over 246 OCTA wavelet features and obtained ACC, SE, and SP of 82%, 71%, and 77%, respectively. Although its results are modest, this approach has the potential to reach higher scores through model optimization and fine-tuning of hyperparameters. These studies show a lower overall performance when only a small number of feature types and simple ML models are used. Dimensionality reduction is an application of ML models which can be added in the decision layer of CAD systems [77,78]. ML methods in combination with DL networks can perform comparably to pure DL models. Narayanan et al. [78] applied an SVM model to classify features obtained from state-of-the-art DNNs and optimized with PCA [78]. This provided an accuracy of 85.7% on preprocessed images; in comparison with methods such as AlexNet, VGG, ResNet, and Inception-v3, the authors report an ACC of 99.5%. They also found this technique applicable at considerably lower computational cost.
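The hybrid pattern described above (deep features reduced with PCA, then classified with an SVM) can be sketched as a scikit-learn pipeline. The random vectors below merely stand in for DCNN feature vectors, so the resulting accuracy is clinically meaningless:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for 512-dimensional DCNN feature vectors for 200 fundus images.
X = rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)   # 0 = no DR, 1 = DR (synthetic labels)

# Scale, reduce to 32 principal components, then classify with an RBF SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf"))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # near chance on random features
```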

3.2.2. Deep Learning Approaches

This section gives an overview of the DL algorithms that have been used. The methods vary depending on the imaging system, image resolution, noise level, and contrast, as well as the size of the dataset. Some studies propose customized networks, such as the work done by Gulshan et al. [69], Gargeya et al. [68], Rajalakshmi et al. [79], and Riaz et al. [80]. These networks have lower performance than state-of-the-art networks such as VGG, ResNet, Inception, and DenseNet, but their fewer layers make them more generalizable, suitable for training with small datasets, and computationally efficient. Quellec et al. [81] applied L2 regularization to o-O, the best-performing DCNN in the Kaggle DR detection competition. Another example of a customized network is the model proposed by Sayres et al. [82], which achieved 88.4%, 91.5%, and 94.8% ACC, SE, and SP, respectively, on a small subset of 2000 images obtained from the EyePACS database. However, the performance of this network is lower than the results obtained by Mansour et al. [72], who used a larger subset of the EyePACS images (35,126 images). Mansour et al. [72] deployed more complex architectures such as AlexNet on features extracted with LDA and PCA, generating better results than Sayres et al. [82] with 97.93%, 100%, and 93% ACC, SE, and SP, respectively. Such DCNNs should be used with large datasets, since a large number of training images reduces error. If a deep architecture is applied to a small number of observations, it may overfit, so that performance on the test data is not as good as on the training data. On the other hand, network depth does not always guarantee higher performance: deep networks may face problems such as vanishing or exploding gradients, which have to be addressed by redesigning the network with a simpler architecture.
Furthermore, deep networks extract many low- and high-level features. As these image features become more complicated, they become harder to interpret, and high-level attributes are sometimes not clinically meaningful. For instance, high-level attributes may reflect a bias present in all images of a certain class, such as light intensity or similar vessel patterns; these are not signs of DR, but the DCNN will treat them as critical features, making the output predictions erroneous. In the scope of DL-based classification, Hua et al. [83] designed a DL model named Trilogy of Skip-connection Deep Networks (Tri-SDN) over a pretrained ResNet50 base model. It applies skip-connection blocks to make tuning faster, yielding ACC and SP of 90.6% and 82.1%, respectively, considerably better than the values of 83.3% and 64.1% obtained when skip-connection blocks are not used. Other studies do not propose new network architectures but instead enhance the preprocessing step. Pao et al. [84] present a bi-channel customized CNN in which an image enhancement technique known as unsharp masking is used. The enhanced images and entropy images are used as the inputs of a CNN with 4 convolutional layers, giving ACC, SE, and SP of 87.83%, 77.81%, and 93.88%, all higher than without preprocessing (81.80%, 68.36%, and 89.87%, respectively). Shankar et al. [85] proposed another preprocessing approach, using histogram-based segmentation to extract regions containing lesions on fundus images. For the classification step, this article utilized the Synergic DL (SDL) model, and the results indicate that the SDL model offers better results than popular DCNNs on the MESSIDOR 1 database in terms of ACC, SE, and SP.
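A minimal sketch of that bi-channel preprocessing idea follows; the Gaussian sigma, window size, and histogram bin count are illustrative choices, not Pao et al.'s settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Classic unsharp masking: add the high-frequency residual back."""
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

def local_entropy(window):
    """Shannon entropy of the intensity histogram inside one window."""
    hist, _ = np.histogram(window, bins=16, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
# Dummy grayscale "fundus" image in [0, 1] (illustrative only).
img = gaussian_filter(rng.random((64, 64)), sigma=3)

sharp = unsharp_mask(img)
entropy = generic_filter(img, local_entropy, size=5)  # 5x5 neighbourhood
two_channel = np.stack([sharp, entropy], axis=0)      # bi-channel CNN input
print(two_channel.shape)  # (2, 64, 64)
```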
Furthermore, classification is not limited to DR detection: DCNNs can also be applied to detect the presence of DR-related lesions, as in the study reported by Wang et al. They cover twelve lesions: MA, IHE, superficial retinal hemorrhages (SRH), Ex, CWS, venous abnormalities (VAN), IRMA, NV at the disc (NVD), NV elsewhere (NVE), pre-retinal FIP, VPHE, and tractional retinal detachment (TRD), with average precision and AUC of 0.67 and 0.95, respectively; however, features such as VAN have low individual detection accuracy. This study provides essential steps for DR detection based on the

presence of lesions, which could be more interpretable than DCNNs that act as black boxes [86–88]. There are explainable backpropagation-based methods that produce heatmaps of the lesions associated with DR, such as the study done by Keel et al. [89], which highlights Ex, HE, and vascular abnormalities in DR-diagnosed images. These methods have limited performance, providing generic explanations that may be inadequate for clinical reliability. Tables 2–4 briefly summarize previous studies on DR classification with DL methods.

Table 2. Classification-based studies in DR detection using fundus imaging.

| Author, Year | Dataset | Grading Details | Pre-processing | Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|---|---|---|---|
| Abràmoff, 2016 [66] | MESSIDOR 2 | Detect RDR and vtDR | No | DCNN: IDx-DR X2.1; ML: RF | NA | 96.80% | 87.00% | 0.98 |
| Chandrakumar, 2016 [90] | EyePACS, DRIVE, STARE | Grade DR based on ICDR scale | Yes | DCNN | STARE and DRIVE: 94% | NA | NA | NA |
| Colas, 2016 [91] | EyePACS | Grade DR based on ICDR scale | No | DCNN | NA | 96.20% | 66.60% | 0.94 |
| Gulshan, 2016 [69] | EyePACS, MESSIDOR 2 | Detect DR based on ICDR scale, RDR and referable DME | Yes | DCNN | NA | EyePACS: 97.5% | EyePACS: 93.4% | EyePACS: 0.99 |
| Wong, 2016 [92] | EyePACS, MESSIDOR 2 | Detect RDR, referable DME (RDME) | No | DCNN | NA | 90% | 98% | 0.99 |
| Gargeya, 2017 [68] | EyePACS, MESSIDOR 2, eOphtha | Detect DR or non-DR | Yes | DCNN | NA | EyePACS: 94% | EyePACS: 98% | EyePACS: 0.97 |
| Somasundaram, 2017 [76] | DIARETDB1 | Detect PDR, NPDR | No | ML: t-SNE and ML-BEC | NA | NA | NA | NA |
| Takahashi, 2017 [50] | Jichi Medical University | Grade DR with the Davis grading scale (NPDR, severe DR, PDR) | No | DCNN: modified GoogLeNet | 81% | NA | NA | NA |
| Ting, 2017 [52] | SiDRP | Detect RDR, vtDR, glaucoma, AMD | No | DCNN | NA | RDR: 90.5%; vtDR: 100% | RDR: 91.6%; vtDR: 91.1% | RDR: 0.93; vtDR: 0.95 |
| Quellec, 2017 [81] | EyePACS, eOphta, DIARETDB1 | Grade DR based on ICDR scale | Yes | DCNN: L2-regularized o-O DCNN | NA | 94.60% | 77% | 0.955 |
| Wang, 2017 [93] | EyePACS, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | Weakly supervised network that classifies the image and extracts high-resolution patches containing a lesion | MESSIDOR 1: RDR: 91.1% | NA | NA | MESSIDOR 1: RDR: 0.957 |
| Benson, 2018 [94] | Vision Quest Biomedical database | Grade DR based on ICDR scale + scar detection | Yes | DCNN: Inception v3 | NA | 90% | 90% | 0.95 |
| Chakrabarty, 2018 [95] | High-Resolution Fundus (HRF) images | Detect DR | Yes | DCNN | 91.67% | 100% | 100% | F1 score: 1 |
| Costa, 2018 [96] | MESSIDOR 1 | Grade DR based on ICDR scale | No | Multiple Instance Learning (MIL) | NA | NA | NA | 0.9 |
| Dai, 2018 [97] | DIARETDB1 | MA, HE, CWS, Ex detection | Yes | DCNN: multi-sieving CNN (image-to-text mapping) | 96.10% | 87.80% | 99.70% | F1 score: 0.93 |
| Dutta, 2018 [98] | EyePACS | Mild NPDR, moderate NPDR, severe NPDR, PDR | Yes | DCNN: VGGNet | 86.30% | NA | NA | NA |
| Kwasigroch, 2018 [99] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: VGG-D | 81.70% | 89.50% | 50.50% | NA |
| Levenkova, 2018 [78] | UWF (ultra-wide field) | Detect CWS, MA, HE, Ex | No | DCNN, SVM | NA | NA | NA | 0.80 |
| Mansour, 2018 [72] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN, ML: AlexNet, LDA, PCA, SVM, SIFT | 97.93% | 100% | 0.93 | NA |
| Rajalakshmi, 2018 [7] | Smartphone-based imaging device | Detect DR and vtDR; grade DR based on ICDR scale | No | DCNN | NA | DR: 95.8%; vtDR: 99.1% | DR: 80.2%; vtDR: 80.4% | NA |
| Robiul Islam, 2018 [100] | APTOS 2019 | Grade DR based on ICDR scale | Yes | DCNN: VGG16 | 91.32% | NA | NA | NA |
| Zhang, 2018 [101] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ResNet-50 | NA | 61% | 84% | 0.83 |
| Zhang, 2018 [102] | EyePACS | Grade DR based on ICDR scale | No | DCNN | 82.10% | 76.10% | 0.855 | Kappa score: 0.66 |
| Arcadu, 2019 [103] | 7 FOV images of RIDE and RISE datasets | 2-step grading based on ETDRS | No | DCNN: Inception v3 | NA | 66% | 77% | 0.68 |
| Bellemo, 2019 [104] | Kitwe Central Hospital, Zambia | Grade DR based on ICDR scale | No | DCNN: ensemble of adapted VGGNet & ResNet | NA | RDR: 92.25%; vtDR: 99.42% | RDR: 89.04% | RDR: 0.973; vtDR: 0.934 |
| Chowdhury, 2019 [105] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | 2-class: 61.3% | NA | NA | NA |
| Govindaraj, 2019 [106] | MESSIDOR 1 | Detect DR | Yes | Probabilistic neural network | 98% | almost 90% (from chart) | almost 97% (from chart) | F1 score: almost 0.97 |
| Gulshan, 2019 [107] | Aravind Eye Hospital and Sankara Nethralaya (SN), India | Grade DR based on ICDR scale | No | DCNN | NA | Aravind: 88.9%; SN: 92.1% | Aravind: 92.2%; SN: 95.2% | Quadratic weighted kappa: Aravind: 0.85; SN: 0.91 |
| Hathwar, 2019 [108] | EyePACS, IDRID | Detect DR | Yes | DCNN: Xception-TL | NA | 94.30% | 95.50% | Kappa score: 0.88 |
| He, 2019 [109] | IDRID | Detect DR grade and DME risk | Yes | DCNN: AlexNet | DR grade: 65% | NA | NA | NA |
| Hua, 2019 [83] | Kyung Hee University Medical Center | Grade DR based on ICDR scale | No | DCNN: Tri-SDN | 90.60% | 96.50% | 82.10% | 0.88 |
| Jiang, 2019 [110] | Beijing Tongren Eye Center | DR or non-DR | Yes | DCNN: Inception v3, ResNet152, Inception-ResNet-v2 | Integrated model: 88.21% | Integrated model: 85.57% | Integrated model: 90.85% | 0.946 |
| Li, 2019 [111] | IDRID, MESSIDOR 1 | Grade DR based on ICDR scale | No | DCNN: attention network based on ResNet50 | NA | DR: 92.6%; DME: 91.2% | DR: 92.0%; DME: 70.8% | DR: 0.96; DME: 0.92 |
| Li, 2019 [61] | Shanghai Zhongshan Hospital (SZH) and Shanghai First People's Hospital (SFPH), China, MESSIDOR 2 | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | 93.49% | 96.93% | 93.45% | 0.9905 |
| Metan, 2019 [112] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ResNet | 81% | NA | NA | NA |
| Nagasawa, 2019 [113] | Saneikai Tsukazaki Hospital and Tokushima University Hospital, Japan | Detect PDR | Yes | DCNN: VGG-16 | NA | PDR: 94.7% | PDR: 97.2% | PDR: 0.96 |
| Qummar, 2019 [114] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ensemble of ResNet50, Inception v3, Xception, Dense121, Dense169 | 80.80% | 51.50% | 86.72% | F1 score: 0.53 |
| Ruamviboonsuk, 2019 [115] | Thailand national DR screening program dataset | Grade DR based on ICDR scale and detect RDME | No | DCNN | NA | DR: 96.8% | DR: 95.6% | DR: 0.98 |
| Sahlsten, 2019 [70] | Private dataset | Detect DR based on multiple grading systems, RDR and DME | Yes | DCNN: Inception-v3 | NA | 89.60% | 97.40% | 0.98 |
| Sayres, 2019 [82] | EyePACS | Grade DR based on ICDR scale | No | DCNN | 88.40% | 91.50% | 94.80% | NA |
| Sengupta, 2019 [116] | EyePACS, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-v3 | 90.4% | 90% | 91.94% | NA |
| Ting, 2019 [117] | SiDRP, SiMES, SINDI, SCES, BES, AFEDS, CUHK, DMP Melb, with 2 FOV | Grade DR based on ICDR scale | Yes | DCNN | NA | NA | NA | Detect DR: 0.86; RDR: 0.96 |
| Zeng, 2019 [118] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | NA | 82.2% | 70.7% | 0.95 |
| Ali, 2020 [57] | Bahawal Victoria Hospital, Pakistan | Grade DR based on ICDR scale | Yes | ML: SMO, Lg, SLg, MLP, LMT employed on selected post-optimized hybrid feature datasets | MLP: 73.73%; LMT: 73.00%; SLg: 73.07%; SMO: 68.60%; Lg: 72.07% | NA | NA | MLP: 0.916; LMT: 0.919; SLg: 0.921; SMO: 0.878; Lg: 0.923 |
| Araujo, 2020 [119] | EyePACS, MESSIDOR 2, IDRID, DMR, SCREEN-DR, DR1, DRIMDB, HRF | Grade DR based on ICDR scale | Yes | DCNN | NA | NA | NA | Kappa score: EyePACS: 0.74 |
| Chetoui, 2020 [26] | EyePACS, MESSIDOR 1, 2, eOphta, UoA-DR (University of Auckland), IDRID, STARE, DIARETDB0, 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-ResNet-v2 | 97.90% | 95.80% | 97.10% | 98.60% |
| Elswah, 2020 [74] | IDRID | Grade DR based on ICDR scale | Yes | DCNN: ResNet 50 + NN or SVM | NN: 88%; SVM: 65% | NA | NA | NA |
| Gadekallu, 2020 [71] | DR Debrecen dataset (collection of 20 features of MESSIDOR 1) | DR or non-DR | Yes | DCNN; ML: PCA + Firefly | 97% | 92% | 95% | NA |
| Gadekallu, 2020 [120] | DR Debrecen dataset | Detect DR | Yes | ML: PCA + grey wolf optimization (GWO) + DNN | 97.30% | 91% | 97% | NA |
| Gayathri, 2020 [121] | MESSIDOR 1, EyePACS, DIARETDB0 | Grade DR based on ICDR scale | NA | Wavelet transform, SVM, RF | MESSIDOR 1: 99.75% | MESSIDOR 1: 99.8% | MESSIDOR 1: 99.9% | NA |
| Jiang, 2020 [122] | MESSIDOR 1 | Image-wise label the presence of MA, HE, Ex, CWS | Yes | DCNN: ResNet 50 based | MA: 89.4%; HE: 98.9%; Ex: 92.8%; CWS: 88.6%; normal: 94.2% | MA: 85.5%; HE: 100%; Ex: 93.3%; CWS: 94.6%; normal: 93.9% | MA: 90.7%; HE: 98.6%; Ex: 92.7%; CWS: 86.8%; normal: 94.4% | MA: 0.94; HE: 1; Ex: 0.97; CWS: 0.97; normal: 0.98 |
| Lands, 2020 [123] | APTOS 2019, APTOS 2015 | Grade DR based on ICDR scale | Yes | DCNN: DenseNet 169 | 93% | NA | NA | Kappa score: 0.8 |
| Ludwig, 2020 [10] | EyePACS, APTOS, MESSIDOR 2, EYEGO | Detect RDR | Yes | DCNN: DenseNet201 | NA | MESSIDOR 2: 87% | MESSIDOR 2: 80% | MESSIDOR 2: 0.92 |
| Majumder, 2020 [15] | EyePACS, APTOS 2019 | Grade DR based on ICDR scale | Yes | CNN | 88.50% | NA | NA | NA |
| Memari, 2020 [124] | MESSIDOR 1, HEI-MED | Detect DR | Yes | DCNN | NA | NA | NA | NA |
| Narayanan, 2020 [125] | APTOS 2019 | Detect and grade DR based on ICDR scale | Yes | DCNN: AlexNet, ResNet, VGG16, Inception v3 | 98.4% | NA | NA | 0.985 |
| Pao, 2020 [84] | EyePACS | Grade DR based on ICDR scale | Yes | CNN: bi-channel customized CNN | 87.83% | 77.81% | 93.88% | 0.93 |
| Paradisa, 2020 [73] | DIARETDB1 | Grade DR based on ICDR scale | Yes | ResNet-50 for feature extraction; SVM, RF, KNN and XGBoost as classifiers | SVM: 99%; KNN: 100% | SVM: 99%; KNN: 100% | NA | NA |
| Patel, 2020 [126] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: MobileNet v2 | 91.29% | NA | NA | NA |
| Riaz, 2020 [80] | EyePACS, MESSIDOR 2 | NA | Yes | DCNN | NA | EyePACS: 94.0% | EyePACS: 97.0% | EyePACS: 0.98 |
| Samanta, 2020 [127] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: DenseNet121 based | 84.1% | NA | NA | NA |
| Serener, 2020 [128] | EyePACS, MESSIDOR 1, eOphta, HRF, IDRID | Grade DR based on ICDR scale | Yes | DCNN: ResNet 18 | Country: 65%; continent (EyePACS + HRF): 80% | Country: 17%; continent: 80% | Country: 89%; continent: 80% | NA |
| Shaban, 2020 [129] | APTOS | Grade DR to non-DR, moderate DR, and severe DR | Yes | DCNN | 88% | 87% | 94% | 0.95 |
| Shankar, 2020 [85] | MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: histogram-based segmentation + SDL | 99.28% | 98.54% | 99.38% | NA |
| Singh, 2020 [130] | IDRID, MESSIDOR 1 | Grade DME in 3 levels | Yes | DCNN: Hierarchical Ensemble of CNNs (HE-CNN) | 96.12% | 96.32% | 95.84% | F1 score: 0.96 |
| Thota, 2020 [131] | EyePACS | NA | Yes | DCNN: VGG16 | 74% | 80.0% | 65.0% | 0.80 |
| Wang, 2020 [132] | 2 eye hospitals, DIARETDB1, EyePACS, IDRID | MA, HE, Ex | Yes | DCNN | MA: 99.7%; HE: 98.4%; Ex: 98.1%; grading: 91.79% | Grading: 80.58% | Grading: 95.77% | Grading: 0.93 |
| Wang, 2020 [133] | Shenzhen, Guangdong, China | Grade DR severity based on ICDR scale and detect MA, IHE, SRH, HE, CWS, VAN, IRMA, NVE, NVD, PFP, VPH, TRD | No | DCNN: multi-task network using channel-based attention blocks | NA | NA | NA | Kappa score (grading): 0.80; DR feature: 0.64 |
| Zhang, 2020 [134] | 3 hospitals in China | Classify to retinal tear & retinal detachment, DR and pathological myopia | Yes | DCNN: Inception-ResNet-v2 | 93.73% | 91.22% | 96.19% | F1 score: 0.93 |
| Abdelmaksoud, 2021 [135] | EyePACS, MESSIDOR 1, eOphta, CHASEDB 1, HRF, IDRID, STARE, DIARETDB0, 1 | NA | Yes | U-Net + SVM | 95.10% | 86.10% | 86.80% | 0.91 |
| Bora, 2021 [115] | EyePACS | Grade DR based on ICDR scale | No | DCNN: Inception v3 | NA | NA | NA | Three FOV: 0.79; one FOV: 0.70 |
| Gangwar, 2021 [136] | APTOS 2019, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-ResNet-v2 | APTOS: 82.18%; MESSIDOR 1: 72.33% | NA | NA | NA |
| He, 2021 [137] | DDR, MESSIDOR 1, EyePACS | Grade DR based on ICDR scale | No | DCNN: MobileNet with attention blocks | MESSIDOR 1: 92.1% | MESSIDOR 1: 89.2% | MESSIDOR 1: 91% | F1 score: MESSIDOR 1: 0.89 |
| Hsieh, 2021 [32] | National Taiwan University Hospital (NTUH), Taiwan; EyePACS | Detect any DR, RDR and PDR | Yes | DCNN: Inception v4 for any DR and RDR, ResNet for PDR | DR: 90.7%; RDR: 90.0%; PDR: 99.1% | DR: 92.2%; RDR: 99.2%; PDR: 90.9% | DR: 89.5%; RDR: 90.1%; PDR: 99.3% | 0.955 |
| Khan, 2021 [138] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: customized highly nonlinear scale-invariant network | 85% | 55.6% | 91.0% | F1 score: 0.59 |
| Oh, 2021 [2] | 7 FOV fundus images of Catholic Kwandong University, South Korea | Detect DR | Yes | DCNN: ResNet 34 | 83.38% | 83.38% | 83.41% | 0.915 |
| Saeed, 2021 [139] | MESSIDOR, EyePACS | Grade DR based on ICDR scale | No | DCNN: ResNet GB | EyePACS: 99.73% | EyePACS: 96.04% | EyePACS: 99.81% | EyePACS: 0.98 |
| Wang, 2021 [140] | EyePACS, images from Peking Union Medical College Hospital, China | Detect RDR with lesion-based segmentation of PHE, VHE, NV, CWS, FIP, IHE, IRMA and MA, then staging based on ICDR scale | No | DCNN: Inception v3 | NA | EyePACS: 90.60% | EyePACS: 80.70% | EyePACS: 0.943 |
| Wang, 2021 [141] | MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: multichannel-based GAN with semi-supervision | RDR: 93.2%; DR grading: 84.23% | RDR: 92.6% | RDR: 91.5% | RDR: 0.96 |

Characteristics and evaluation of DR grading methods. In this table, methods with no preprocessing or common preprocessing are listed. In addition to the abbreviations described earlier, this table contains new abbreviations: Singapore Integrated Diabetic Retinopathy Screening Program (SiDRP) between 2014 and 2015 (SiDRP 14–15), Singapore Malay Eye Study (SiMES), Singapore Indian Eye Study (SINDI), Singapore Chinese Eye Study (SCES), Beijing Eye Study (BES), African American Eye Study (AFEDS), Chinese University of Hong Kong (CUHK), Diabetes Management Project Melbourne (DMP Melb) and Generative Adversarial Network (GAN).
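Because the tables above report ACC, SE, SP, and AUC throughout, the following numpy sketch shows how these metrics are derived from binary predictions and continuous scores (the AUC is computed as the Mann–Whitney rank statistic); the example data are illustrative:

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """ACC, SE (recall on positives), SP (recall on negatives) from binary labels."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    acc = (tp + tn) / y_true.size
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return acc, se, sp

def auc_rank(y_true, scores):
    """AUC as the probability that a random positive outscores a random negative."""
    y_true = np.asarray(y_true, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).mean()   # strict wins
    ties = (pos[:, None] == neg[None, :]).mean()     # ties count half
    return greater + 0.5 * ties
```

This rank formulation is equivalent to integrating the ROC curve, which is why AUC is reported even when a single operating point (SE/SP pair) is also given.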

Table 3. Classification-based studies in DR detection using special preprocessing on fundus images for DR grading on the ICDR scale.

| Author, Year | Dataset | Pre-Processing Technique | Method | Accuracy |
|---|---|---|---|---|
| Datta, 2016 [142] | DRIVE, STARE, DIARETDB0, DIARETDB1 | Yes, contrast optimization | Image processing | NA |
| Lin, 2018 [143] | EyePACS | Yes, conversion to entropy images | DCNN | Original images: 81.8%; entropy images: 86.1% |
| Mukhopadhyay, 2018 [144] | Prasad Eye Institute, India | Yes, local binary patterns | ML: decision tree, KNN | KNN: 69.8% |
| Pour, 2020 [145] | MESSIDOR 1, 2, IDRID | Yes, CLAHE | DCNN: EfficientNet-B5 | NA |
| Ramchandre, 2020 [146] | APTOS 2019 | Yes, image augmentation with AUGMIX | DCNN: EfficientNetb3, SEResNeXt32x4d | EfficientNetb3: 91.4%; SEResNeXt32x4d: 85.2% |
| Shankar, 2020 [85] | MESSIDOR 1 | Yes, CLAHE | DCNN: Hyperparameter Tuning Inception-v4 (HPTI-v4) | 99.5% |
| Bhardwaj, 2021 [147] | DRIVE, STARE, MESSIDOR 1, DIARETDB1, IDRID, ROC | Yes, image contrast enhancement and OD localization | DCNN: InceptionResNet v2 | 93.3% |
| Bilal, 2021 [16] | IDRID | Yes, adaptive histogram equalization and contrast stretching | ML: SVM + KNN + binary tree | 98.1% |
| Elloumi, 2021 [148] | DIARETDB1 | Yes, optic disc localization, fundus image partitioning | ML: SVM, RF, KNN | 98.4% |

AHE: Adaptive Histogram Equalization, CLAHE: Contrast Limited Adaptive Histogram Equalization.
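As a rough illustration of the contrast-limiting idea behind CLAHE used by several studies in Table 3, the sketch below clips the global histogram before equalization; genuine CLAHE additionally operates on local tiles with bilinear blending, and the clip fraction here is an assumed value:

```python
import numpy as np

def clipped_hist_equalize(img: np.ndarray, clip_frac: float = 0.02) -> np.ndarray:
    """Global histogram equalization with histogram clipping (the 'CL' in CLAHE).

    Clipping the histogram before building the CDF limits how much any single
    intensity band can stretch, which tames noise amplification.
    """
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clip = clip_frac * img.size
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess / 256.0      # redistribute clipped mass
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]
```

On fundus images this is typically applied to the green channel or the luminance channel, where lesion contrast is strongest.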

Table 4. Classification-based studies in DR detection using OCT and OCTA.

| Author, Year | Dataset | Grading Details | Preprocessing | Method | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|---|---|---|---|
| Eladawi, 2018 [149] | OCTA images, University of Louisville, USA | Detect DR | Yes | ML: vessel segmentation, local feature extraction, SVM | 97.3% | 97.9% | 96.4% | 0.97 |
| Islam, 2019 [150] | Kermani OCT dataset | NA | Yes | DCNN: DenseNet 201 | 98.6% | 0.986 | 0.995 | NA |
| Le, 2020 [151] | Private OCTA dataset | Grade DR | No | DCNN: VGG16 | 87.3% | 83.8% | 90.8% | 0.97 |
| Sandhu, 2020 [75] | OCT, OCTA, clinical and demographic data, University of Louisville Clinical Center, USA | Detect mild and moderate DR | Yes | ML: RF | 96.0% | 100.0% | 94.0% | 0.96 |
| Liu, 2021 [77] | Private OCTA dataset | Detect DR | Yes | Logistic regression (LR), LR regularized with the elastic net (LR-EN), SVM and XGBoost | LR-EN: 80.0% | LR-EN: 82.0% | LR-EN: 84.0% | LR-EN: 0.83 |

3.3. Diabetic Retinopathy Lesion Segmentation

The state-of-the-art DR classification machines [68,69] identify referable DR without directly taking lesion information into account. Therefore, their predictions lack clinical interpretation, despite their high accuracy. This black-box nature of DCNNs is the major problem that makes them unsuitable for clinical application [86,152,153] and has made the topic of eXplainable AI (XAI) of major importance [153]. Recently, visualization techniques such as gradient-based XAI have been widely used for evaluating networks. However, these methods produce generic heatmaps that only highlight the major contributing lesions and hence are not suitable for detecting DR with multiple lesions and severities. Thus, some studies focus on lesion-based DR detection instead. In total, we found 20 papers that segment lesions, such as MA (10 articles), Ex (9 articles) and IHE, VHE, PHE, IRMA, NV, CWS. In the following sections, we discuss the general segmentation approaches. The implementation details of each article are given in Tables 5 and 6 according to imaging type.

3.3.1. Machine Learning and Non-Machine-Learning Approaches

In general, ML methods, with their high processing speed, low computational cost, and interpretable decisions, are preferred over DCNNs. However, automatic detection of subtle lesions such as MA has not yet reached acceptable values. In this review, we collected 2 purely ML-based models and 6 non-ML methods. Ali Shah et al. [154] detected MA using color, Hessian and curvelet-based feature extraction and achieved an SE of 48.2%. Huang et al. [155] focused on localizing NV using the Extreme Learning Machine (ELM). This study applied standard-deviation, Gabor, differential-invariant, and anisotropic filters for feature extraction, with an ELM as the final classifier. The network performed as well as an SVM with lower computational time (6111 s vs. 6877 s) on a PC running Microsoft Windows 7 with a Pentium Dual-Core E5500 CPU and 4 GB of memory. For the segmentation task, the preprocessing step plays a fundamental role and has a direct effect on the outputs. The preprocessing techniques vary depending on the lesion type and the dataset properties. Orlando et al. [162] applied a combination of DCNN-extracted features and manually designed features using image illumination correction, CLAHE contrast enhancement, and color equalization. This high-dimensional feature vector was then fed into an RF classifier to detect lesions and achieved an AUC of 0.93, which is comparable with some DCNN models [81,137,141]. Some studies used non-ML methods for the detection of exudates, such as that of Kaur et al. [156], who proposed a pipeline consisting of a vessel and optic disk removal step followed by dynamic thresholding for the detection of CWS and Ex. Prior to this study, Imani et al. [157] performed the same process, focusing on Ex, on a smaller dataset. In

their study, they employed additional morphological processing and smooth-edge removal to reduce the detection of CWS as Ex. The article reported an SE of 89.1% and SP of 99.9%, an almost similar performance to Kaur's results of 94.8% and 99.8% for SE and SP, respectively. Further description of recent studies on lesion segmentation with ML approaches can be found in Tables 5 and 6.
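The dynamic-thresholding approach of Imani et al. [157] and Kaur et al. [156] can be caricatured as a per-image adaptive cutoff on the green channel; the mean-plus-k-sigma rule and the value of k below are simplifying assumptions, and a full pipeline would first remove vessels and the optic disc:

```python
import numpy as np

def bright_lesion_candidates(green: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Per-image dynamic threshold: flag pixels more than k standard deviations
    above the image mean as bright-lesion (Ex/CWS) candidates.

    Because the threshold is derived from each image's own statistics, it adapts
    to illumination differences between cameras and acquisitions.
    """
    g = green.astype(np.float64)
    thr = g.mean() + k * g.std()
    return g > thr
```

The returned boolean mask would then be refined morphologically (as in the studies above) to separate true exudates from artifacts with similar brightness.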

3.3.2. Deep Learning Approaches

Recent works show that DCNNs can produce promising results in automated DR lesion segmentation. DR lesion segmentation is mainly focused on fundus imaging; however, some studies apply a combination of fundus and OCT. Holmberg et al. [158] proposed a retinal layer extraction pipeline that measures retinal thickness with a Unet. Furthermore, Yukun Guo et al. [159] applied DCNNs for avascular zone segmentation from OCTA images and reported an accuracy of 87.0% for mild to moderate DR and 76.0% for severe DR. Other studies mainly focus on DCNNs applied to fundus images, which give a clear view of lesions on the surface of the retina. Lam et al. [160] deployed state-of-the-art DCNNs (AlexNet, ResNet, GoogleNet, VGG16, and Inception v3) to detect the existence of DR lesions in image patches, achieving 98.0% accuracy on a subset of 243 fundus images obtained from EyePACS. Wang et al. [141] also applied Inception v3 as the feature extractor in combination with FCN 32s as the segmentation head. They reported SE values of 60.7%, 49.5%, 28.3%, 36.3%, 57.3%, 8.7%, 79.8%, and 16.4% for PHE, Ex, VHE, NV, CWS, FIP, IHE, and MA, respectively. Quellec et al. [81] focused on four lesions (CWS, Ex, HE, and MA) using a predefined DCNN architecture named the o-O solution and reported SE values of 62.4%, 52.2%, 44.9%, and 31.6% for CWS, Ex, HE, and MA, respectively, which is slightly better for CWS and Ex than Wang et al. [141] and considerably better on MA; on the other hand, Wang et al. [141] performed better in hemorrhage detection. Further details of these articles and others can be found in Tables 5 and 6.
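Segmentation results such as those in Tables 5 and 6 are commonly scored with overlap metrics, e.g., the IoU values reported by Xu et al. [167] and the closely related Dice coefficient; a minimal numpy sketch:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient; related to IoU by dice = 2*IoU / (1 + IoU)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```

For tiny lesions such as MA, overlap metrics are far stricter than pixel accuracy, which is one reason MA scores in the tables lag behind those for Ex and HE.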

Table 5. Segmentation-based studies in DR detection using fundus images.

| Author, Year | Dataset | Considered Lesions | Preprocessing | Segmentation Method | Sensitivity/Specificity | AUC |
|---|---|---|---|---|---|---|
| Imani, 2016 [157] | DIARETDB1, HEI-MED, eOphta | Ex | Yes | Dynamic decision thresholding, morphological feature extraction, smooth edge removal | 89.01%/99.93% | 0.961 |
| Shah, 2016 [154] | ROC | MA | Yes | Curvelet transform and rule-based classifier | 48.2%/NA | NA |
| Quellec, 2017 [81] | EyePACS, eOphta, DIARETDB1 | CWS, Ex, HE, MA | Yes | DCNN: o-O solution | DIARETDB1: CWS: 62.4%/NA; Ex: 55.2%/NA; HE: 44.9%/NA; MA: 31.6%/NA | EyePACS: 0.955 |
| Huang, 2018 [155] | MESSIDOR 1, DIARETDB0, 1 | NV | Yes | ELM | NA/NA | ACC: 89.2% |
| Kaur, 2018 [156] | STARE, eOphta, MESSIDOR 1, DIARETDB1, private dataset | Ex, CWS | Yes | Dynamic decision thresholding | 94.8%/99.80% | ACC: 98.43% |
| Lam, 2018 [160] | EyePACS, eOphta | Ex, MA, HE, NV | NA | DCNN: AlexNet, VGG16, GoogLeNet, ResNet, and Inception-v3 | NA/NA | EyePACS: 0.99; ACC: 98.0% |
| Benzamin, 2018 [161] | IDRID | Ex | Yes | DCNN | 98.29%/41.35% | ACC: 96.6% |
| Orlando, 2018 [162] | eOphtha, DIARETDB1, MESSIDOR 1 | MA, HE | Yes | ML: RF | NA/NA | 0.93 |
| Eftekhari, 2019 [163] | ROC, eOphta | MA | Yes | DCNN: two-level CNN, thresholded probability map | NA/NA | ROC: 0.660 |
| Wu, 2019 [164] | HRF | Blood vessels, optic disc and other regions | Yes | DCNN: AlexNet, GoogleNet, Resnet50, VGG19 | NA/NA | AlexNet: 0.94; ACC: 95.45% |
| Yan, 2019 [165] | IDRID | Ex, MA, HE, CWS | Yes | DCNN: global and local Unet | NA/NA | Ex: 0.889; MA: 0.525; HE: 0.703; CWS: 0.679 |
| Qiao, 2020 [166] | IDRID | MA | Yes | DCNN | 98.4%/97.10% | ACC: 97.8% |
| Wang, 2021 [141] | EyePACS, images from Peking Union Medical College Hospital | Detect RDR with lesion-based segmentation of PHE, VHE, NV, CWS, FIP, IHE, Ex, MA | No | DCNN: Inception v3 and FCN 32s | PHE: 60.7%/90.9%; Ex: 49.5%/87.4%; VHE: 28.3%/84.6%; NV: 36.3%/83.7%; CWS: 57.3%/80.1%; FIP: 8.7%/78.0%; IHE: 79.8%/57.7%; MA: 16.4%/49.8% | NA |
| Wei, 2021 [63] | EyePACS | MA, IHE, VHE, PHE, Ex, CWS, FIP, NV | Yes | DCNN: transfer learning from Inception v3 | NA/NA | NA |
| Xu, 2021 [167] | IDRID | Ex, MA, HE, CWS | Yes | DCNN: enhanced Unet named FFUnet | Ex: 87.55%/NA; MA: 59.33%/NA; HE: 73.42%/NA; CWS: 79.33%/NA | IOU: Ex: 0.84; MA: 0.56; HE: 0.73; CWS: 0.75 |

ICDR scale: International Clinical Diabetic Retinopathy scale, RDR: Referable DR, vtDR: vision-threatening DR, PDR: Proliferative DR, NPDR: Non-Proliferative DR, MA: Microaneurysm, Ex: hard Exudate, CWS: Cotton Wool Spot, HE: Hemorrhage, FIP: Fibrous Proliferation, VHE: Vitreous Hemorrhage, PHE: Preretinal Hemorrhage, NV: Neovascularization.

Table 6. Segmentation-based studies in DR detection using OCT, OCTA images.

Segmentation Dataset Considered Pre-Processing Sensitivity AUC Author, Year Lesions Method Specificity ACC:Control: Control: 100.0% Control: 84.0% 89.0% Diabetes Diabetes without Diabetes without UW-OCTA DR: 99.0% Mild DR: 77.0% Mild without DR: 79% Guo, 2018 [159] Avascular area Yes DCNN Mild to private dataset to moderate DR: to moderate DR: moderate DR: 99.0% Severe DR: 85.0% Severe DR: 87% Severe DR: 100.0% 68.0% 76.0% OCT and OCTA 12 different ElTanboly, 2018 images of retinal layers & No SVM NA NA ACC: 97.0% [168] University of segmented Louisville OCTA plexuses Statistical analysis and SD-OCT images extraction of ElTanboly, 2018 features such as of 12 distinct retinal Yes NA NA ACC: 73.2% [168] KentuckyLions layers tortuosity, Eye Center reflectivity, and thickness for 12 retinal layers 12 layers; OCT images of quantifies the Sandhu, 2018 DCNN: 2 Stage University of Yes 92.5% 95.0% ACC: 93.8% [169] reflectivity, deep CNN Louisville, USA curvature, and thickness DCNN: On OCT: Retinal OCT from layer Helmholtz Segment retinal segmentation Holmberg, 2020 IOU: Zentrum thickness map, No with Unet NA NA on OCT: 0.94 [158] München, Grade DR based On fundus: Self Fundus from on ICDR scale supervised EyePACS learning, ResNet50

4. Conclusions

Recent studies for DR detection are mainly focused on automated methods known as CAD systems. In the scope of CAD systems for DR, there are two major approaches: first, classification and staging of DR severity and, second, segmentation of DR-associated lesions such as MA, HE, Ex, and CWS. The DR databases are categorized into public databases (36 out of 43) and private databases (7 out of 43). These databases contain fundus and OCT retinal images, and between these two imaging modalities, fundus photos are used in 86.0% of the published studies. Several large public fundus datasets are available online. The images might have been

taken with different systems, which affects image quality. Furthermore, some of the image-wise DR labels can be erroneous. The image databases that provide lesion annotations constitute only a small portion of the total, since pixel-wise annotation requires considerable resources; hence, some of them contain fewer images than image-wise labeled databases. Furthermore, lesion annotation requires intra-annotator agreement and high annotation precision. These factors make such datasets error-sensitive, and their quality evaluation can become complicated. DR classification needs a standard grading system validated by clinicians. ETDRS is the gold-standard grading system proposed for DR progression grading. Since ETDRS grading needs fine-detail evaluation and access to all 7 FOV fundus images, its use is limited; the less precise ICDR scale is applicable to 1 FOV images for detecting DR severity levels. Classification and grading of DR can be divided into two main approaches, namely ML-based and DL-based classification. ML/DL-based DR detection generally performs better than ML/DL-based DR grading on the ICDR scale, which requires extracting higher-level features associated with each level of DR [57,71]. The evaluation results show that DCNN architectures can achieve higher performance scores when large databases are used [72]. There is a trade-off between performance on one side and architecture complexity, processing time, and the lack of interpretability of the network's decisions and extracted features on the other. Thus, some recent works have proposed semi-DCNN models containing both DL-based and ML-based components acting as classifier or feature extractor [71,72]. The use of regularization techniques is another way to reduce the complexity of DCNN models [81]. The second approach for CAD-related studies in DR is pixel-wise lesion segmentation or image-wise lesion detection.
The main lesions of DR are MA, Ex, HE, and CWS. These lesions differ in detection difficulty, which directly affects the performance of the proposed pipeline. Among them, the annotation of MA is the most challenging [28,167]. Since this lesion is difficult to detect and is the main sign of early-stage DR, some studies focused on its pixel-wise segmentation with DCNNs and achieved sufficiently high scores [166]. Although some recent DCNN-based works exhibit high performance in terms of the standard metrics, their lack of interpretability may not be sufficiently valid for real-life clinical applications. This interpretability brings into the picture the concept of XAI. Explainability studies aim to show the features that most influence a model's decision. Singh et al. [87] have reviewed the currently used explainability methods. There is also a need for a large fundus database with high-precision annotation of all associated DR lesions to help in designing more robust pipelines with high performance.

Author Contributions: Conceptualization, V.L. and J.J.B.; methodology, V.L. and J.J.B.; dataset constitution, H.K. and A.S.; writing—original draft preparation, H.K., A.S. and J.J.B.; writing—review and editing, V.L.; project administration, V.L. and J.J.B. All authors have read and agreed to the published version of the manuscript. Funding: This research was partly supported by a DISCOVERY grant to VL from the Natural Sciences and Engineering Research Council of Canada. Institutional Review Board Statement: Not applicable. Study does not involve humans or animals. Informed Consent Statement: Study did not involve humans or animals and therefore informed consent is not applicable. Data Availability Statement: This is review of the published articles, and all the articles are publicly available. Acknowledgments: A.S. acknowledges MITACS, Canada, for the award of a summer internship. Conflicts of Interest: The authors have no conflict of interest.

References

1. Steinmetz, J.D.; A Bourne, R.R.; Briant, P.S.; Flaxman, S.R.; Taylor, H.R.B.; Jonas, J.B.; Abdoli, A.A.; Abrha, W.A.; Abualhasan, A.; Abu-Gharbieh, E.G.; et al. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160. [CrossRef]
2. Oh, K.; Kang, H.M.; Leem, D.; Lee, H.; Seo, K.Y.; Yoon, S. Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images. Sci. Rep. 2021, 11, 1897. [CrossRef] [PubMed]
3. Early Treatment Diabetic Retinopathy Study Research Group. Grading Diabetic Retinopathy from Stereoscopic Color Fundus Photographs—An Extension of the Modified Airlie House Classification: ETDRS Report Number 10. Ophthalmology 1991, 98, 786–806. [CrossRef]
4. Horton, M.B.; Brady, C.J.; Cavallerano, J.; Abramoff, M.; Barker, G.; Chiang, M.F.; Crockett, C.H.; Garg, S.; Karth, P.; Liu, Y.; et al. Practice Guidelines for Ocular Telehealth-Diabetic Retinopathy, Third Edition. Telemed. E-Health 2020, 26, 495–543. [CrossRef] [PubMed]
5. Solomon, S.D.; Goldberg, M.F. ETDRS Grading of Diabetic Retinopathy: Still the Gold Standard? Ophthalmic Res. 2019, 62, 190–195. [CrossRef] [PubMed]
6. Wilkinson, C.; Ferris, F.; Klein, R.; Lee, P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [CrossRef]
7. Rajalakshmi, R.; Prathiba, V.; Arulmalar, S.; Usha, M. Review of retinal cameras for global coverage of diabetic retinopathy screening. Eye 2021, 35, 162–172. [CrossRef]
8. Qureshi, I.; Ma, J.; Abbas, Q. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry 2019, 11, 749. [CrossRef]
9. Chandran, A.; Mathai, A. Diabetic Retinopathy for the Clinician; Jaypee Brothers: Chennai, India, 2009; Volume 1, p. 79.
10. Ludwig, C.A.; Perera, C.; Myung, D.; Greven, M.A.; Smith, S.J.; Chang, R.T.; Leng, T. Automatic Identification of Referral-Warranted Diabetic Retinopathy Using Deep Learning on Mobile Phone Images. Transl. Vis. Sci. Technol. 2020, 9, 60. [CrossRef]
11. Hsu, W.; Pallawala, P.M.D.S.; Lee, M.L.; Eong, K.-G.A. The role of domain knowledge in the detection of retinal hard exudates. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 2.
12. Teo, Z.L.; Tham, Y.-C.; Yu, M.C.Y.; Chee, M.L.; Rim, T.H.; Cheung, N.; Bikbov, M.M.; Wang, Y.X.; Tang, Y.; Lu, Y.; et al. Global Prevalence of Diabetic Retinopathy and Projection of Burden through 2045. Ophthalmology 2021. [CrossRef]
13. Derwin, D.J.; Selvi, S.T.; Singh, O.J.; Shan, P.B. A novel automated system of discriminating Microaneurysms in fundus images. Biomed. Signal Process. Control 2020, 58, 101839. [CrossRef]
14. Sivaprasad, S.; Raman, R.; Conroy, D.; Wittenberg, R.; Rajalakshmi, R.; Majeed, A.; Krishnakumar, S.; Prevost, T.; Parameswaran, S.; Turowski, P.; et al. The ORNATE India Project: United Kingdom–India Research Collaboration to tackle visual impairment due to diabetic retinopathy. Eye 2020, 34, 1279–1286. [CrossRef] [PubMed]
15. Majumder, S.; Elloumi, Y.; Akil, M.; Kachouri, R.; Kehtarnavaz, N. A deep learning-based smartphone app for real-time detection of five stages of diabetic retinopathy. In Real-Time Image Processing and Deep Learning; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11401, p. 1140106. [CrossRef]
16. Bilal, A.; Sun, G.; Li, Y.; Mazhar, S.; Khan, A.Q. Diabetic Retinopathy Detection and Classification Using Mixed Models for a Disease Grading Database. IEEE Access 2021, 9, 23544–23553. [CrossRef]
17. Qian, X.; Kang, H.; Li, R.; Lu, G.; Du, Z.; Shung, K.K.; Humayun, M.S.; Zhou, Q. In Vivo Visualization of Eye Vasculature Using Super-Resolution Ultrasound Microvessel Imaging. IEEE Trans. Biomed. Eng. 2020, 67, 2870–2880. [CrossRef]
18. Ouahabi, A.; Taleb-Ahmed, A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging. Pattern Recognit. Lett. 2021, 144, 27–34. [CrossRef]
19. Leopold, H.; Zelek, J.; Lakshminarayanan, V. Deep Learning Methods Applied to Retinal Image Analysis. In Signal Processing and Machine Learning for Biomedical Big Data; Sejdic, E., Falk, T., Eds.; CRC Press: Boca Raton, FL, USA, 2018; p. 329. [CrossRef]
20. Sengupta, S.; Singh, A.; Leopold, H.A.; Gulati, T.; Lakshminarayanan, V. Ophthalmic diagnosis using deep learning with fundus images—A critical review. Artif. Intell. Med. 2020, 102, 101758. [CrossRef] [PubMed]
21. Leopold, H.; Sengupta, S.; Singh, A.; Lakshminarayanan, V. Deep Learning on Optical Coherence Tomography for Ophthalmology. In State-of-the-Art in Neural Networks; Elsevier: New York, NY, USA, 2021.
22. Hormel, T.T.; Hwang, T.S.; Bailey, S.T.; Wilson, D.J.; Huang, D.; Jia, Y. Artificial intelligence in OCT angiography. Prog. Retin. Eye Res. 2021, 100965. [CrossRef] [PubMed]
23. Methley, A.M.; Campbell, S.; Chew-Graham, C.; McNally, R.; Cheraghi-Sohi, S. PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv. Res. 2014, 14, 1–10. [CrossRef]
24. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [CrossRef]
25. Khan, S.M.; Liu, X.; Nath, S.; Korot, E.; Faes, L.; Wagner, S.K.; A Keane, P.; Sebire, N.J.; Burton, M.J.; Denniston, A.K. A global review of publicly available datasets for ophthalmological imaging: Barriers to access, usability, and generalisability. Lancet Digit. Health 2021, 3, e51–e66. [CrossRef]

26. Chetoui, M.; Akhloufi, M.A. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets. J. Med. Imaging 2020, 7, 7–25. [CrossRef]
27. Somaraki, V.; Broadbent, D.; Coenen, F.; Harding, S. Finding Temporal Patterns in Noisy Longitudinal Data: A Study in Diabetic Retinopathy. In Advances in Data Mining. Applications and Theoretical Aspects; Springer: New York, NY, USA, 2010; Volume 6171, pp. 418–431.
28. Zhou, Y.; Wang, B.; Huang, L.; Cui, S.; Shao, L. A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability. IEEE Trans. Med. Imaging 2021, 40, 818–828. [CrossRef]
29. DRIVE-Grand Challenge Official Website. Available online: https://drive.grand-challenge.org/ (accessed on 23 May 2021).
30. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.; Lensu, L.; Sorri, I. DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms; Machine Vision and Pattern Recognition Research Group, Lappeenranta University of Technology: Lappeenranta, Finland, 2006; pp. 1–17. Available online: http://www.siue.edu/~sumbaug/RetinalProjectPapers/DiabeticRetinopathyImageDatabaseInformation.pdf (accessed on 25 May 2021).
31. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietila, J. The diaretdb1 diabetic retinopathy database and evaluation protocol. In Proceedings of the British Machine Vision Conference 2007, Coventry, UK, 10–13 September 2007; pp. 61–65. [CrossRef]
32. Hsieh, Y.-T.; Chuang, L.-M.; Jiang, Y.-D.; Chang, T.-J.; Yang, C.-M.; Yang, C.-H.; Chan, L.-W.; Kao, T.-Y.; Chen, T.-C.; Lin, H.-C.; et al. Application of deep learning image assessment software VeriSee™ for diabetic retinopathy screening. J. Formos. Med. Assoc. 2021, 120, 165–171. [CrossRef]
33. Giancardo, L.; Meriaudeau, F.; Karnowski, T.P.; Li, Y.; Garg, S.; Tobin, K.W.; Chaum, E. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med. Image Anal. 2012, 16, 216–226. [CrossRef] [PubMed]
34. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M.; Mehridehnavi, A.; Javanmard, S.H. Analysis of foveal avascular zone for grading of diabetic retinopathy severity based on curvelet transform. Graefe's Arch. Clin. Exp. Ophthalmol. 2012, 250, 1607–1614. [CrossRef]
35. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M. Diabetic Retinopathy Grading by Digital Curvelet Transform. Comput. Math. Methods Med. 2012, 2012, 1–11. [CrossRef]
36. Esmaeili, M.; Rabbani, H.; Dehnavi, A.; Dehghani, A. Automatic detection of exudates and optic disk in retinal images using curvelet transform. IET Image Process. 2012, 6, 1005–1013. [CrossRef]
37. Prentasic, P.; Loncaric, S.; Vatavuk, Z.; Bencic, G.; Subasic, M.; Petković, T. Diabetic retinopathy image database (DRiDB): A new database for diabetic retinopathy screening programs research. In Proceedings of the 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Trieste, Italy, 4–6 September 2013; pp. 711–716.
38. Decencière, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.-C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM 2013, 34, 196–203. [CrossRef]
39. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal vessel segmentation by improved matched filtering: Evaluation on a new high-resolution fundus image database. IET Image Process. 2013, 7, 373–383. [CrossRef]
40. Hu, Q.; Abràmoff, M.D.; Garvin, M.K. Automated Separation of Binary Overlapping Trees in Low-Contrast Color Retinal Images. In Medical Image Computing and Computer-Assisted Intervention; Lecture Notes in Computer Science; Mori, K., Sakuma, I., Sato, Y., Barillot, C., Navab, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8150. [CrossRef]
41. Pires, R.; Jelinek, H.F.; Wainer, J.; Valle, E.; Rocha, A. Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images. PLoS ONE 2014, 9, e96814. [CrossRef]
42. Sevik, U.; Köse, C.; Berber, T.; Erdöl, H. Identification of suitable fundus images using automated quality assessment methods. J. Biomed. Opt. 2014, 19, 046006. [CrossRef]
43. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M. A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone. Signal Image Video Process. 2014, 8, 205–222. [CrossRef]
44. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordóñez-Varela, J.-R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The MESSIDOR database. Image Anal. Stereol. 2014, 33, 231. [CrossRef]
45. Bala, M.P.; Vijayachitra, S. Early detection and classification of microaneurysms in retinal fundus images using sequential learning methods. Int. J. Biomed. Eng. Technol. 2014, 15, 128. [CrossRef]
46. Srinivasan, P.P.; Kim, L.; Mettu, P.S.; Cousins, S.W.; Comer, G.M.; Izatt, J.A.; Farsiu, S. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt. Express 2014, 5, 3568–3577. [CrossRef]
47. Kaggle.com. Available online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 26 May 2021).
48. People.duke.edu Website. Available online: http://people.duke.edu/~sf59/software.html (accessed on 26 May 2021).
49. Holm, S.; Russell, G.; Nourrit, V.; McLoughlin, N. DR HAGIS—A fundus image database for the automatic extraction of retinal surface vessels from diabetic patients. J. Med. Imaging 2017, 4, 014503. [CrossRef]
50. Takahashi, H.; Tampo, H.; Arai, Y.; Inoue, Y.; Kawashima, H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS ONE 2017, 12, e0179790. [CrossRef]
51. Rotterdam Ophthalmic Data Repository. re3data.org. Available online: https://www.re3data.org/repository/r3d (accessed on 22 June 2021).

52. Ting, D.S.W.; Cheung, C.Y.-L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA 2017, 318, 2211–2223. [CrossRef]
53. Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Meriaudeau, F. Indian diabetic retinopathy image dataset (IDRiD): A database for diabetic retinopathy screening research. Data 2018, 3, 25. [CrossRef]
54. Gholami, P.; Roy, P.; Parthasarathy, M.K.; Lakshminarayanan, V. OCTID: Optical coherence tomography image database. Comput. Electr. Eng. 2020, 81, 106532. [CrossRef]
55. Abdulla, W.; Chalakkal, R.J. University of Auckland Diabetic Retinopathy (UoA-DR) Database-End User Licence Agreement. Available online: https://auckland.figshare.com/articles/journal_contribution/UoA-DR_Database_Info/5985208 (accessed on 28 May 2021).
56. Kaggle.com. Available online: https://www.kaggle.com/c/aptos2019-blindness-detection (accessed on 23 May 2021).
57. Ali, A.; Qadri, S.; Mashwani, W.K.; Kumam, W.; Kumam, P.; Naeem, S.; Goktas, A.; Jamal, F.; Chesneau, C.; Anam, S.; et al. Machine Learning Based Automated Segmentation and Hybrid Feature Analysis for Diabetic Retinopathy Classification Using Fundus Image. Entropy 2020, 22, 567. [CrossRef] [PubMed]
58. Díaz, M.; Novo, J.; Cutrín, P.; Gómez-Ulla, F.; Penedo, M.G.; Ortega, M. Automatic segmentation of the foveal avascular zone in ophthalmological OCT-A images. PLoS ONE 2019, 14, e0212364. [CrossRef]
59. ODIR-2019. Available online: https://odir2019.grand-challenge.org/ (accessed on 22 June 2021).
60. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; Kang, H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Inf. Sci. 2019, 501, 511–522. [CrossRef]
61. Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [CrossRef]
62. Benítez, V.E.C.; Matto, I.C.; Román, J.C.M.; Noguera, J.L.V.; García-Torres, M.; Ayala, J.; Pinto-Roa, D.P.; Gardel-Sotomayor, P.E.; Facon, J.; Grillo, S.A. Dataset from fundus images for the study of diabetic retinopathy. Data Brief 2021, 36, 107068. [CrossRef]
63. Wei, Q.; Li, X.; Yu, W.; Zhang, X.; Zhang, Y.; Hu, B.; Mo, B.; Gong, D.; Chen, N.; Ding, D.; et al. Learn to Segment Retinal Lesions and Beyond. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 7403–7410.
64. Noor-Ul-Huda, M.; Tehsin, S.; Ahmed, S.; Niazi, F.A.; Murtaza, Z. Retinal images benchmark for the detection of diabetic retinopathy and clinically significant macular edema (CSME). Biomed. Tech. Eng. 2018, 64, 297–307. [CrossRef]
65. Ohsugi, H.; Tabuchi, H.; Enno, H.; Ishitobi, N. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci. Rep. 2017, 7, 9425. [CrossRef]
66. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [CrossRef] [PubMed]
67. Kafieh, R.; Rabbani, H.; Hajizadeh, F.; Ommani, M. An Accurate Multimodal 3-D Vessel Segmentation Method Based on Brightness Variations on OCT Layers and Curvelet Domain Fundus Image Analysis. IEEE Trans. Biomed. Eng. 2013, 60, 2815–2823. [CrossRef]
68. Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [CrossRef]
69. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [CrossRef] [PubMed]
70. Sahlsten, J.; Jaskari, J.; Kivinen, J.; Turunen, L.; Jaanio, E.; Hietala, K.; Kaski, K. Deep Learning Fundus Image Analysis for Diabetic Retinopathy and Macular Edema Grading. Sci. Rep. 2019, 9, 10750. [CrossRef]
71. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Ra, I.-H.; Alazab, M. Early Detection of Diabetic Retinopathy Using PCA-Firefly Based Deep Learning Model. Electronics 2020, 9, 274. [CrossRef]
72. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2017, 8, 41–57. [CrossRef] [PubMed]
73. Paradisa, R.H.; Sarwinda, D.; Bustamam, A.; Argyadiva, T. Classification of Diabetic Retinopathy through Deep Feature Extraction and Classic Machine Learning Approach. In Proceedings of the 2020 3rd International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, 24–25 November 2020; pp. 377–381.
74. Elswah, D.K.; Elnakib, A.A.; Moustafa, H.E.-D. Automated Diabetic Retinopathy Grading using Resnet. In Proceedings of the National Radio Science Conference (NRSC), Cairo, Egypt, 8–10 September 2020; pp. 248–254.
75. Sandhu, H.S.; Elmogy, M.; Sharafeldeen, A.; Elsharkawy, M.; El-Adawy, N.; Eltanboly, A.; Shalaby, A.; Keynton, R.; El-Baz, A. Automated Diagnosis of Diabetic Retinopathy Using Clinical Biomarkers, Optical Coherence Tomography, and Optical Coherence Tomography Angiography. Am. J. Ophthalmol. 2020, 216, 201–206. [CrossRef] [PubMed]
76. Somasundaram, S.K.; Alli, P. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J. Med. Syst. 2017, 41, 201.
77. Liu, Z.; Wang, C.; Cai, X.; Jiang, H.; Wang, J. Discrimination of Diabetic Retinopathy From Optical Coherence Tomography Angiography Images Using Machine Learning Methods. IEEE Access 2021, 9, 51689–51694. [CrossRef]

78. Levenkova, A.; Kalloniatis, M.; Ly, A.; Ho, A.; Sowmya, A. Lesion detection in ultra-wide field retinal images for diabetic retinopathy diagnosis. In Medical Imaging 2018: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10575, p. 1057531.
79. Rajalakshmi, R.; Subashini, R.; Anjana, R.M.; Mohan, V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye 2018, 32, 1138–1144. [CrossRef]
80. Riaz, H.; Park, J.; Choi, H.; Kim, H.; Kim, J. Deep and Densely Connected Networks for Classification of Diabetic Retinopathy. Diagnostics 2020, 10, 24. [CrossRef]
81. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [CrossRef]
82. Sayres, R.; Taly, A.; Rahimy, E.; Blumer, K.; Coz, D.; Hammel, N.; Krause, J.; Narayanaswamy, A.; Rastegar, Z.; Wu, D.; et al. Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy. Ophthalmology 2019, 126, 552–564. [CrossRef]
83. Hua, C.-H.; Huynh-The, T.; Kim, K.; Yu, S.-Y.; Le-Tien, T.; Park, G.H.; Bang, J.; Khan, W.A.; Bae, S.-H.; Lee, S. Bimodal learning via trilogy of skip-connection deep networks for diabetic retinopathy risk progression identification. Int. J. Med. Inform. 2019, 132, 103926. [CrossRef] [PubMed]
84. Pao, S.-I.; Lin, H.-Z.; Chien, K.-H.; Tai, M.-C.; Chen, J.-T.; Lin, G.-M. Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J. Ophthalmol. 2020, 2020, 1–7. [CrossRef] [PubMed]
85. Shankar, K.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216. [CrossRef]
86. Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J. Imaging 2020, 6, 52. [CrossRef]
87. Singh, A.; Sengupta, S.; Rasheed, M.A.; Jayakumar, V.; Lakshminarayanan, V. Uncertainty aware and explainable diagnosis of retinal disease. In Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2021; Volume 11601, p. 116010J.
88. Singh, A.; Jothi Balaji, J.; Rasheed, M.A.; Jayakumar, V.; Raman, R.; Lakshminarayanan, V. Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis. Clin. Ophthalmol. 2021, 15, 2573–2581. [CrossRef]
89. Keel, S.; Wu, J.; Lee, P.Y.; Scheetz, J.; He, M. Visualizing Deep Learning Models for the Detection of Referable Diabetic Retinopathy and Glaucoma. JAMA Ophthalmol. 2019, 137, 288–292. [CrossRef]
90. Chandrakumar, T.; Kathirvel, R. Classifying Diabetic Retinopathy using Deep Learning Architecture. Int. J. Eng. Res. 2016, 5, 19–24. [CrossRef]
91. Colas, E.; Besse, A.; Orgogozo, A.; Schmauch, B.; Meric, N. Deep learning approach for diabetic retinopathy screening. Acta Ophthalmol. 2016, 94. [CrossRef]
92. Wong, T.Y.; Bressler, N.M. Artificial Intelligence With Deep Learning Technology Looks Into Diabetic Retinopathy Screening. JAMA 2016, 316, 2366–2367. [CrossRef]
93. Wang, Z.; Yin, Y.; Shi, J.; Fang, W.; Li, H.; Wang, X. Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2017), Quebec City, QC, Canada, 11–13 September 2017; pp. 267–275.
94. Benson, J.; Maynard, J.; Zamora, G.; Carrillo, H.; Wigdahl, J.; Nemeth, S.; Barriga, S.; Estrada, T.; Soliz, P. Transfer learning for diabetic retinopathy. In Medical Imaging 2018: Image Processing; 2018; Volume 10574, p. 105741Z.
95. Chakrabarty, N. A Deep Learning Method for the detection of Diabetic Retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5.
96. Costa, P.; Galdran, A.; Smailagic, A.; Campilho, A. A Weakly-Supervised Framework for Interpretable Diabetic Retinopathy Detection on Retinal Images. IEEE Access 2018, 6, 18747–18758. [CrossRef]
97. Dai, L.; Fang, R.; Li, H.; Hou, X.; Sheng, B.; Wu, Q.; Jia, W. Clinical Report Guided Retinal Microaneurysm Detection With Multi-Sieving Deep Learning. IEEE Trans. Med. Imaging 2018, 37, 1149–1161. [CrossRef] [PubMed]
98. Dutta, S.; Manideep, B.C.; Basha, S.M.; Caytiles, R.D.; Iyengar, N.C.S.N. Classification of Diabetic Retinopathy Images by Using Deep Learning Models. Int. J. Grid Distrib. Comput. 2018, 11, 99–106. [CrossRef]
99. Kwasigroch, A.; Jarzembinski, B.; Grochowski, M. Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnoujście, Poland, 9–12 May 2018; pp. 111–116. [CrossRef]
100. Islam, M.R.; Hasan, M.A.M.; Sayeed, A. Transfer Learning based Diabetic Retinopathy Detection with a Novel Preprocessed Layer. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 888–891.
101. Zhang, S.; Wu, H.; Wang, X.; Cao, L.; Schwartz, J.; Hernandez, J.; Rodríguez, G.; Liu, B.J.; Murthy, V. The application of deep learning for diabetic retinopathy prescreening in research eye-PACS. Imaging Inform. Healthc. Res. Appl. 2018, 10579, 1057913. [CrossRef]
102. Fang, M.; Zhang, X.; Zhang, W.; Xue, J.; Wu, L. Automatic classification of diabetic retinopathy based on convolutional neural networks. In Proceedings of the 2018 International Conference on Image and Video Processing, and Artificial Intelligence, Shanghai, China, 15–17 August 2018; Volume 10836, p. 1083608.

103. Arcadu, F.; Benmansour, F.; Maunz, A.; Willis, J.; Haskova, Z.; Prunotto, M. Deep learning algorithm predicts diabetic retinopathy progression in individual patients. NPJ Digit. Med. 2019, 2, 92. [CrossRef]
104. Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.T.; Hamzah, H.; Ho, J.; Lee, X.Q.; Hsu, W.; et al. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44. [CrossRef]
105. Chowdhury, M.M.H.; Meem, N.T.A. A Machine Learning Approach to Detect Diabetic Retinopathy Using Convolutional Neural Network; Springer: Singapore, 2019; pp. 255–264.
106. Govindaraj, V.; Balaji, M.; Mohideen, T.A.; Mohideen, S.A.F.J. Eminent identification and classification of Diabetic Retinopathy in clinical fundus images using Probabilistic Neural Network. In Proceedings of the 2019 IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS), Tamilnadu, India, 11–13 April 2019; pp. 1–6.
107. Gulshan, V.; Rajan, R.; Widner, K.; Wu, D.; Wubbels, P.; Rhodes, T.; Whitehouse, K.; Coram, M.; Corrado, G.; Ramasamy, K.; et al. Performance of a Deep-Learning Algorithm vs Manual Grading for Detecting Diabetic Retinopathy in India. JAMA Ophthalmol. 2019, 137, 987–993. [CrossRef] [PubMed]
108. Hathwar, S.B.; Srinivasa, G. Automated Grading of Diabetic Retinopathy in Retinal Fundus Images using Deep Learning. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 September 2019; pp. 73–77.
109. He, J.; Shen, L.; Ai, X.; Li, X. Diabetic Retinopathy Grade and Macular Edema Risk Classification Using Convolutional Neural Networks. In Proceedings of the 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS), Shenyang, China, 12–14 July 2019; pp. 463–466.
110. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An Interpretable Ensemble Deep Learning Model for Diabetic Retinopathy Disease Classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2045–2048.
111. Li, X.; Hu, X.; Yu, L.; Zhu, L.; Fu, C.-W.; Heng, P.-A. CANet: Cross-Disease Attention Network for Joint Diabetic Retinopathy and Diabetic Macular Edema Grading. IEEE Trans. Med. Imaging 2020, 39, 1483–1493. [CrossRef]
112. Metan, A.C.; Lambert, A.; Pickering, M. Small Scale Feature Propagation Using Deep Residual Learning for Diabetic Retinopathy Classification. In Proceedings of the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, 5–7 July 2019; pp. 392–396.
113. Nagasawa, T.; Tabuchi, H.; Masumoto, H.; Enno, H.; Niki, M.; Ohara, Z.; Yoshizumi, Y.; Ohsugi, H.; Mitamura, Y. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int. Ophthalmol. 2019, 39, 2153–2159. [CrossRef] [PubMed]
114. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, I.A.; Jadoon, W. A Deep Learning Ensemble Approach for Diabetic Retinopathy Detection. IEEE Access 2019, 7, 150530–150539. [CrossRef]
115. Bora, A.; Balasubramanian, S.; Babenko, B.; Virmani, S.; Venugopalan, S.; Mitani, A.; Marinho, G.D.O.; Cuadros, J.; Ruamviboonsuk, P.; Corrado, G.S.; et al. Predicting the risk of developing diabetic retinopathy using deep learning. Lancet Digit. Health 2021, 3, e10–e19. [CrossRef]
116. Sengupta, S.; Singh, A.; Zelek, J.; Lakshminarayanan, V. Cross-domain diabetic retinopathy detection using deep learning. Appl. Mach. Learn. 2019, 11139, 111390V. [CrossRef]
117. Ting, D.S.W.; Cheung, C.Y.; Nguyen, Q.; Sabanayagam, C.; Lim, G.; Lim, Z.W.; Tan, G.S.W.; Soh, Y.Q.; Schmetterer, L.; Wang, Y.X.; et al. Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy: A multi-ethnic study. NPJ Digit. Med. 2019, 2, 24. [CrossRef]
118. Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access 2019, 7, 30744–30753. [CrossRef]
119. Araújo, T.; Aresta, G.; Mendonça, L.; Penas, S.; Maia, C.; Carneiro, Â.; Mendonça, A.M.; Campilho, A. DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images. Med. Image Anal. 2020, 63, 101715. [CrossRef] [PubMed]
120. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R. Deep neural networks to predict diabetic retinopathy. J. Ambient Intell. Humaniz. Comput. 2020. [CrossRef]
121. Gayathri, S.; Krishna, A.K.; Gopi, V.P.; Palanisamy, P. Automated Binary and Multiclass Classification of Diabetic Retinopathy Using Haralick and Multiresolution Features. IEEE Access 2020, 8, 57497–57504. [CrossRef]
122. Jiang, H.; Xu, J.; Shi, R.; Yang, K.; Zhang, D.; Gao, M.; Ma, H.; Qian, W. A Multi-Label Deep Learning Model with Interpretable Grad-CAM for Diabetic Retinopathy Classification. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1560–1563.
123. Lands, A.; Kottarathil, A.J.; Biju, A.; Jacob, E.M.; Thomas, S. Implementation of deep learning based algorithms for diabetic retinopathy classification from fundus images. In Proceedings of the 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI) (48184), Tirunelveli, India, 15–17 June 2020; pp. 1028–1032.
124. Memari, N.; Abdollahi, S.; Ganzagh, M.M.; Moghbel, M. Computer-assisted diagnosis (CAD) system for Diabetic Retinopathy screening using color fundus images using Deep learning. In Proceedings of the 2020 IEEE Student Conference on Research and Development (SCOReD), Batu Pahat, Malaysia, 27–29 September 2020; pp. 69–73.

125. Narayanan, B.N.; Hardie, R.C.; De Silva, M.S.; Kueterman, N.K. Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy. J. Med. Imaging 2020, 7, 034501. [CrossRef]
126. Patel, R.; Chaware, A. Transfer Learning with Fine-Tuned MobileNetV2 for Diabetic Retinopathy. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–4.
127. Samanta, A.; Saha, A.; Satapathy, S.C.; Fernandes, S.L.; Zhang, Y.-D. Automated detection of diabetic retinopathy using convolutional neural networks on a small dataset. Pattern Recognit. Lett. 2020, 135, 293–298. [CrossRef]
128. Serener, A.; Serte, S. Geographic variation and ethnicity in diabetic retinopathy detection via deep learning. Turkish J. Electr. Eng. Comput. Sci. 2020, 28, 664–678. [CrossRef]
129. Shaban, M.; Ogur, Z.; Mahmoud, A.; Switala, A.; Shalaby, A.; Abu Khalifeh, H.; Ghazal, M.; Fraiwan, L.; Giridharan, G.; Sandhu, H.; et al. A convolutional neural network for the screening and staging of diabetic retinopathy. PLoS ONE 2020, 15, e0233514. [CrossRef]
130. Singh, R.K.; Gorantla, R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [CrossRef]
131. Thota, N.B.; Reddy, D.U. Improving the Accuracy of Diabetic Retinopathy Severity Classification with Transfer Learning. In Proceedings of the 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August 2020; pp. 1003–1006. [CrossRef]
132. Wang, X.-N.; Dai, L.; Li, S.-T.; Kong, H.-Y.; Sheng, B.; Wu, Q. Automatic Grading System for Diabetic Retinopathy Diagnosis Using Deep Learning Artificial Intelligence Software. Curr. Eye Res. 2020, 45, 1550–1555. [CrossRef]
133. Wang, J.; Bai, Y.; Xia, B. Simultaneous Diagnosis of Severity and Features of Diabetic Retinopathy in Fundus Photography Using Deep Learning. IEEE J. Biomed. Health Inform. 2020, 24, 3397–3407. [CrossRef] [PubMed]
134. Zhang, W.; Zhao, X.J.; Chen, Y.; Zhong, J.; Yi, Z. DeepUWF: An Automated Ultra-Wide-Field Fundus Screening System via Deep Learning. IEEE J. Biomed. Health Inform. 2021, 25, 2988–2996. [CrossRef] [PubMed]
135. Abdelmaksoud, E.; El-Sappagh, S.; Barakat, S.; AbuHmed, T.; Elmogy, M. Automatic Diabetic Retinopathy Grading System Based on Detecting Multiple Retinal Lesions. IEEE Access 2021, 9, 15939–15960. [CrossRef]
136. Gangwar, A.K.; Ravi, V. Diabetic Retinopathy Detection Using Transfer Learning and Deep Learning. In Evolution in Computational Intelligence; Springer: Singapore, 2020; pp. 679–689. [CrossRef]
137. He, A.; Li, T.; Li, N.; Wang, K.; Fu, H. CABNet: Category Attention Block for Imbalanced Diabetic Retinopathy Grading. IEEE Trans. Med. Imaging 2021, 40, 143–153. [CrossRef]
138. Khan, Z.; Khan, F.G.; Khan, A.; Rehman, Z.U.; Shah, S.; Qummar, S.; Ali, F.; Pack, S. Diabetic Retinopathy Detection Using VGG-NIN a Deep Learning Architecture. IEEE Access 2021, 9, 61408–61416. [CrossRef]
139. Saeed, F.; Hussain, M.; Aboalsamh, H.A. Automatic Diabetic Retinopathy Diagnosis Using Adaptive Fine-Tuned Convolutional Neural Network. IEEE Access 2021, 9, 41344–41359. [CrossRef]
140. Wang, Y.; Yu, M.; Hu, B.; Jin, X.; Li, Y.; Zhang, X.; Zhang, Y.; Gong, D.; Wu, C.; Zhang, B.; et al. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes/Metab. Res. Rev. 2021, 37, 3445. [CrossRef]
141. Wang, S.; Wang, X.; Hu, Y.; Shen, Y.; Yang, Z.; Gen, M.; Lei, B. Diabetic Retinopathy Diagnosis Using Multichannel Generative Adversarial Network with Semisupervision. IEEE Trans. Autom. Sci. Eng. 2021, 18, 574–585. [CrossRef]
142. Datta, N.S.; Dutta, H.S.; Majumder, K. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease. J. Med. Imaging 2016, 3, 014502. [CrossRef]
143. Lin, G.-M.; Chen, M.-J.; Yeh, C.-H.; Lin, Y.-Y.; Kuo, H.-Y.; Lin, M.-H.; Chen, M.-C.; Lin, S.D.; Gao, Y.; Ran, A.; et al. Transforming Retinal Photographs to Entropy Images in Deep Learning to Improve Automated Detection for Diabetic Retinopathy. J. Ophthalmol. 2018, 2018, 1–6. [CrossRef] [PubMed]
144. Panigrahi, P.K.; Mukhopadhyay, S.; Pratiher, S.; Chhablani, J.; Mukherjee, S.; Barman, R.; Pasupuleti, G. Statistical classifiers on local binary patterns for optical diagnosis of diabetic retinopathy. Nanophotonics 2018, 10685, 106852Y. [CrossRef]
145. Pour, A.M.; Seyedarabi, H.; Jahromi, S.H.A.; Javadzadeh, A. Automatic Detection and Monitoring of Diabetic Retinopathy Using Efficient Convolutional Neural Networks and Contrast Limited Adaptive Histogram Equalization. IEEE Access 2020, 8, 136668–136673. [CrossRef]
146. Ramchandre, S.; Patil, B.; Pharande, S.; Javali, K.; Pande, H. A Deep Learning Approach for Diabetic Retinopathy detection using Transfer Learning. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bengaluru, India, 6–8 November 2020; pp. 1–5.
147. Bhardwaj, C.; Jain, S.; Sood, M. Deep Learning–Based Diabetic Retinopathy Severity Grading System Employing Quadrant Ensemble Model. J. Digit. Imaging 2021. [CrossRef]
148. Elloumi, Y.; Ben Mbarek, M.; Boukadida, R.; Akil, M.; Bedoui, M.H. Fast and accurate mobile-aided screening system of moderate diabetic retinopathy. In Proceedings of the Thirteenth International Conference on Machine Vision, Rome, Italy, 2–6 November 2020; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2021; Volume 11605, p. 116050U.
149. Eladawi, N.; Elmogy, M.; Fraiwan, L.; Pichi, F.; Ghazal, M.; Aboelfetouh, A.; Riad, A.; Keynton, R.; Schaal, S.; El-Baz, A. Early Diagnosis of Diabetic Retinopathy in OCTA Images Based on Local Analysis of Retinal Blood Vessels and Foveal Avascular Zone. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3886–3891.

150. Islam, K.T.; Wijewickrema, S.; O'Leary, S. Identifying Diabetic Retinopathy from OCT Images using Deep Transfer Learning with Artificial Neural Networks. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 281–286. [CrossRef]
151. Le, D.; Alam, M.N.; Lim, J.I.; Chan, R.P.; Yao, X. Deep learning for objective OCTA detection of diabetic retinopathy. Ophthalmic Technol. 2020, 11218, 112181P. [CrossRef]
152. Singh, A.; Sengupta, S.; Mohammed, A.R.; Faruq, I.; Jayakumar, V.; Zelek, J.; Lakshminarayanan, V. What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification? In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2020; pp. 21–31.
153. Singh, A.; Mohammed, A.R.; Zelek, J.; Lakshminarayanan, V. Interpretation of deep learning using attributions: Application to ophthalmic diagnosis. Appl. Mach. Learn. 2020, 11511, 115110A. [CrossRef]
154. Shah, S.A.A.; Laude, A.; Faye, I.; Tang, T.B. Automated microaneurysm detection in diabetic retinopathy using curvelet transform. J. Biomed. Opt. 2016, 21, 101404. [CrossRef] [PubMed]
155. Huang, H.; Ma, H.; van Triest, H.J.W.; Wei, Y.; Qian, W. Automatic detection of neovascularization in retinal images using extreme learning machine. Neurocomputing 2018, 277, 218–227. [CrossRef]
156. Kaur, J.; Mittal, D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern. Biomed. Eng. 2018, 38, 27–53. [CrossRef]
157. Imani, E.; Pourreza, H.-R. A novel method for retinal exudate segmentation using signal separation algorithm. Comput. Methods Programs Biomed. 2016, 133, 195–205. [CrossRef]
158. Holmberg, O.; Köhler, N.D.; Martins, T.; Siedlecki, J.; Herold, T.; Keidel, L.; Asani, B.; Schiefelbein, J.; Priglinger, S.; Kortuem, K.U.; et al. Self-supervised retinal thickness prediction enables deep learning from unlabelled data to boost classification of diabetic retinopathy. Nat. Mach. Intell. 2020, 2, 719–726. [CrossRef]
159. Guo, Y.; Camino, A.; Wang, J.; Huang, D.; Hwang, T.; Jia, Y. MEDnet, a neural network for automated detection of avascular area in OCT angiography. Biomed. Opt. Express 2018, 9, 5147–5158. [CrossRef]
160. Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal Lesion Detection With Deep Learning Using Image Patches. Investig. Ophthalmol. Vis. Sci. 2018, 59, 590–596. [CrossRef]
161. Benzamin, A.; Chakraborty, C. Detection of hard exudates in retinal fundus images using deep learning. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan, 25–29 June 2018; pp. 465–469.
162. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M. An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 2018, 153, 115–127. [CrossRef]
163. Eftekhari, N.; Pourreza, H.-R.; Masoudi, M.; Ghiasi-Shirazi, K.; Saeedi, E. Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomed. Eng. Online 2019, 18, 1–16. [CrossRef]
164. Wu, Q.; Cheddad, A. Segmentation-based Deep Learning Fundus Image Analysis. In Proceedings of the 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 6–9 November 2019; pp. 1–5.
165. Yan, Z.; Han, X.; Wang, C.; Qiu, Y.; Xiong, Z.; Cui, S. Learning Mutually Local-Global U-Nets For High-Resolution Retinal Lesion Segmentation In Fundus Images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 597–600. [CrossRef]
166. Qiao, L.; Zhu, Y.; Zhou, H. Diabetic Retinopathy Detection Using Prognosis of Microaneurysm and Early Diagnosis System for Non-Proliferative Diabetic Retinopathy Based on Deep Learning Algorithms. IEEE Access 2020, 8, 104292–104302. [CrossRef]
167. Xu, Y.; Zhou, Z.; Li, X.; Zhang, N.; Zhang, M.; Wei, P. FFU-Net: Feature Fusion U-Net for Lesion Segmentation of Diabetic Retinopathy. Biomed. Res. Int. 2021, 2021, 6644071. [PubMed]
168. ElTanboly, A.H.; Palacio, A.; Shalaby, A.M.; Switala, A.E.; Helmy, O.; Schaal, S.; El-Baz, A. An automated approach for early detection of diabetic retinopathy using SD-OCT images. Front. Biosci. Elit. 2018, 10, 197–207.
169. Sandhu, H.S.; Eltanboly, A.; Shalaby, A.; Keynton, R.S.; Schaal, S.; El-Baz, A. Automated diagnosis and grading of diabetic retinopathy using optical coherence tomography. Investig. Ophthalmol. Vis. Sci. 2018, 59, 3155–3160. [CrossRef] [PubMed]