
Article

Intelligent Evaluation of Strabismus in Videos Based on an Automated Cover Test

Yang Zheng 1,2,3, Hong Fu 3,*, Ruimin Li 1,2,3, Wai-Lun Lo 3, Zheru Chi 4, David Dagan Feng 5, Zongxi Song 1 and Desheng Wen 1

1 Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an 710119, China; [email protected] (Y.Z.); [email protected] (R.L.); [email protected] (Z.S.); [email protected] (D.W.)
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Computer Science, Chu Hai College of Higher Education, Hong Kong 999077, China; [email protected]
4 Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China; [email protected]
5 School of Computer Science, The University of Sydney, Sydney 2006, Australia; [email protected]
* Correspondence: [email protected]; Tel.: +852-2972-7250

 Received: 30 December 2018; Accepted: 12 February 2019; Published: 20 February 2019 

Abstract: Strabismus is a common vision disease that adversely affects vision as well as quality of life. A timely diagnosis is crucial for the proper treatment of strabismus. In contrast to manual evaluation, a well-designed automatic evaluation can significantly improve the objectivity, reliability, and efficiency of strabismus diagnosis. In this study, we propose an intelligent system for evaluating strabismus in digital videos, based on the cover test. In particular, the video is recorded with an infrared camera while the subject performs automated cover tests. The video is then fed into the proposed algorithm, which consists of six stages: (1) eye region extraction, (2) iris boundary detection, (3) key frame detection, (4) pupil localization, (5) deviation calculation, and (6) evaluation of strabismus. A database containing cover test data of both strabismic and normal subjects was established for the experiments. Experimental results demonstrate that the deviation of strabismus can be well evaluated by our proposed method. The accuracy was over 91% in the horizontal direction, within an error of 8 prism diopters, and over 86% in the vertical direction, within an error of 4 prism diopters.

Keywords: intelligent evaluation; automated cover tests; deviation of strabismus; pupil localization

1. Introduction

Strabismus is the misalignment of the eyes, that is, one or both eyes may turn inward, outward, upward, or downward. It is a common ophthalmic disease with an estimated prevalence of 4% in adulthood [1], 65% of which develops in childhood [2]. Strabismus can have serious consequences for vision, especially in children [3,4]. When the eyes are misaligned, they look in different directions, leading to the perception of two images of the same object, a condition called diplopia. If strabismus is left untreated in childhood, the brain eventually suppresses or ignores the image of the weaker eye, resulting in amblyopia or permanent vision loss. Longstanding eye misalignment might also impair the development of depth perception, or the ability to see in 3D. In addition, patients with paralytic strabismus might turn their face or head to overcome the discomfort and compensate for the paralyzed extraocular muscle, which might lead to a skeletal deformity in children, such as scoliosis. More importantly, it has been shown that people with strabismus exhibit higher levels of anxiety and depression [5,6] and report a low self-image, self-esteem, and

self-confidence [7,8], which has an adverse impact on a person’s life, including education, employment, and social communication [9–14]. Thus, a timely quantitative evaluation of strabismus is essential in order to obtain suitable treatment. More specifically, an accurate measurement of the deviation in strabismus is crucial in planning surgery and other treatments.

Currently, several tests usually need to be performed to diagnose strabismus in a clinical context [15]. For example, the corneal light reflex test is conducted by directly observing the displacement of the reflected image of light from the center of the pupil. The Maddox rod is a technique that uses filters and distorting lenses to quantify eye turns. Another way to detect and measure an eye turn is the cover test, which is the most commonly used technique. All these methods require conduction and interpretation by a clinician or ophthalmologist, which is subjective to some extent. Taking the cover test as an example, the cover procedures and assessments are conducted manually in existing clinical practice, and well-trained specialists are needed for the test. This limits the effectiveness of strabismus assessment in two aspects [16,17]. With respect to the cover procedure, the cover is applied manually, so the covering time and the speed of occluder movement depend on the experience of the examiner and can change from time to time. These variations of the cover may influence the assessment results. With respect to assessment, the response of the subject is evaluated subjectively, which leads to further uncertainties and limitations in the final assessment. First, the direction of eye movement, the decision of whether or not the eye moves, and the speed of recovery rely on the observation and judgment of the examiners; variance of assessment results among examiners cannot be avoided. Second, the strabismus angle has to be measured with the aid of a prism, in a separate step and by trial-and-error, which prolongs the diagnosis process.

Aware of these clinical disadvantages, researchers are trying to find novel ways to improve the process of strabismus assessment. With the development of computer technology, image acquisition technology, etc., researchers have made efforts to utilize new technologies and resources to aid diagnostics. Here, we give a brief review of the tools and methodologies that support the detection and diagnosis of strabismus. These methods can be summarized into two categories, namely image-based or video-based methods, and eye-tracking based methods.

The image-based or video-based methods use image processing techniques to diagnose strabismus [18–22]. Helveston [18] proposed a store-and-forward telemedicine consultation technique that uses a digital camera and a computer to obtain patient images and then transmits them by email, so that the diagnosis and treatment plan can be determined by experts according to the images. This was an early attempt to apply new resources to aid the diagnosis of strabismus. Yang [19] presented a computerized method of measuring binocular alignment, using a selective wavelength filter and an infrared camera. Automated image analysis showed excellent agreement with the traditional PCT (prism and alternate cover test).
However, subjects whose proportions fell outside the normal variation range could not be examined using this system, because of its software limitations. Then, in [20], they implemented an automatic strabismus examination system that used an infrared camera and liquid crystal shutter glasses to simulate a cover test, and a digital video camera to detect the deviation of the eyes. Almeida et al. [21] proposed a four-stage methodology for the automatic detection of strabismus in digital images through the Hirschberg test: (1) finding the region of the eyes; (2) determining the precise location of the eyes; (3) locating the limbus and the brightness; and (4) identifying strabismus. It achieved a 94% accuracy in classifying individuals with or without strabismus. However, the Hirschberg test is less precise than other methods such as the cover test. Then, in [22], Almeida presented a computational methodology to automatically diagnose strabismus through digital videos featuring a cover test, using only a workstation computer to process the videos. This method diagnosed strabismus with an accuracy of 87%. However, the effectiveness of the method was considered only for horizontal strabismus, and it could not distinguish between manifest strabismus and latent strabismus.

The eye-tracking technique has also been applied to strabismus examination [23–27]. Quick and Boothe [23] presented a photographic method, based on corneal light reflection, for the measurement of binocular misalignment, which allowed for the measurement of eye alignment errors to fixation targets presented at any distance throughout the subject’s field of gaze. Model and Eizenman [24] built a remote two-camera gaze-estimation system for the AHT (Automated Hirschberg Test) to measure binocular misalignment. However, the accuracy of the AHT procedure has to be verified with a larger sample of subjects, as it was studied on only five healthy infants. In [25], Pulido proposed a new method prototype to study eye movement, in which gaze data were collected using the Tobii eye tracker to conduct ophthalmic examinations, including strabismus, by calculating the angles of deviation. However, that thesis focused on developing a method that provides repeatability, objectivity, comprehension, relevance, and independence, and lacked an evaluation on patients. In [26], Chen et al. developed an eye-tracking-aided digital system for strabismus diagnosis. The subject’s eye alignment condition was effectively investigated by intuitively analyzing gaze deviations, but only one strabismic person and one normal person participated in the experiments. Later, in [27], Chen et al. developed a more effective eye-tracking system to acquire gaze data for strabismus recognition. In particular, they proposed a gaze deviation image to characterize eye-tracking data and then leveraged convolutional neural networks to generate features from the gaze deviation image, which finally led to effective strabismus recognition. However, the performance of the proposed method remains to be evaluated with more gaze data, especially data with different strabismus types.

In this study, we propose an intelligent evaluation system for strabismus. Intelligent evaluation of strabismus, which could also be termed automatic strabismus assessment, assesses strabismus without ophthalmologists. We developed an intelligent evaluation system for strabismus in digital videos based on the cover test, in which an automatic stimulus module, controlled by chips, generates the cover action of the occluder; a synchronous tracking module monitors and records the movement of the eyes; and an algorithm module analyzes the data and generates the measurement results of strabismus.

The rest of the paper is organized as follows. A brief introduction of the system is given in Section 2. The methodology exploited for strabismus evaluation is described in detail in Section 3. Then, in Section 4, the results achieved by our methodology are presented, and in Section 5, some conclusions are drawn and future work is discussed.

2. System Introduction

In our work, we have developed an intelligent evaluation system for strabismus, in which the subject sits on a chair with his chin on the chin rest and fixates on a target. The cover tests are performed automatically, and a diagnosis report is generated after a short while. The system, as shown in Figure 1, can be divided into three parts, i.e., the automated stimulus module for realizing the cover test, the video acquisition module for motion capture, and the algorithm module for the detection and measurement of strabismus. More details of the system have been presented in our previous work [28].

Figure 1. The proposed intelligent evaluation system of strabismus.

2.1. Hardware Design

The automated stimulus module is based on a stepper motor connected to the controller, a control circuit, which makes the clinical cover test automated. The stepper motor used in the proposed system is the iHSV57-30-10-36, produced by JUST MOTION CONTROL (Shenzhen, China). The occluder is hand-made cardboard, 65 mm wide, 300 mm high, and 5 mm thick. The subject’s sight is completely blocked when the occluder occludes the eye, so that our method can properly simulate the clinical cover test. The XC7Z020, a Field Programmable Gate Array (FPGA), is the core of the control circuit. The communication between the upper computer and the FPGA is via a Universal Serial Bus (USB). The motor rotates at a particular speed in a particular direction, clockwise or counterclockwise, to drive the left and right movement of the occluder on the slider, once the servo system receives the control signals from the FPGA.

As for the motion-capture module, the whole process of the test is acquired by the high-speed camera RER-USBFHD05MT at a resolution of 1280 × 720 and 60 fps. A near-infrared LED array with a spectrum of 850 nm and a near-infrared lens were selected to provide infrared illumination and reduce the noise from visible light. AMCap is used to control the camera, including the configuration of the frame rate and resolution, the exposure time, the start and end of a recording, and so on.

To begin the strabismus evaluation, the subject is told to sit in front of the workbench with the chin on the chin rest and fixate on the given target. The target is a cartoon character displayed on a MATLAB GUI on Monitor 2, for the purpose of attracting attention, especially for children. The experimenter sends a code “0h07” (the specific code for the automatic cover test) to the system, and the stimulus module reacts to begin the cover test. Meanwhile, the video acquisition application AMCap automatically starts recording. When the cover test ends, AMCap stops recording and saves the video in a predefined directory. Then the video is fed into the proposed intelligent algorithm performing the strabismus evaluation. Finally, a report is generated automatically which contains the presence, type, and degree of strabismus.

2.2. Data Collection

In the cover test, the examiner covers one of the subject’s eyes and uncovers it, repeatedly, to see whether the non-occluded eye moves or not. If movement of the uncovered eye can be observed, the subject is thought to have strabismus. The cover test can be divided into three subtypes: the unilateral cover test, the alternating cover test, and the prism alternate cover test. The unilateral cover test is used principally to reveal the presence of a strabismic deviation. If the occlusion time is extended, it is also called the cover-uncover test [29]. The alternating cover test is used to quantify the deviation [30]. The prism alternate cover test is known as the gold standard test to obtain the angle of ocular misalignment [31]. In our proposed system, we sequentially performed the unilateral cover test, the alternate cover test, and the cover-uncover test for each subject, to check the reliability of the assessment.

The protocol of the cover tests is as follows (a schematic sketch of the schedule is given below). Once the operator sends out the code “0h07”, the automatic stimulus module waits for 6 s to let the application AMCap react, and then the occlusion operation begins. The occluder is initially held in front of the left eye. The first test is the unilateral cover test for the left eye: the occluder moves away from the left eye, waits for 1 s, then moves back to cover the left eye for 1 s. This process is repeated three times. The unilateral cover test for the right eye is the same as that for the left eye. When this procedure ends, the occluder is at the position occluding the right eye. Then the alternate cover test begins. The occluder moves to the left to cover the left eye for 1 s and then moves to the right to cover the right eye. This is considered one round, and it is repeated for three rounds. Finally, the cover-uncover test is performed for both eyes. The only difference from the above unilateral cover test is that the occlusion time is increased to 2 s. Eventually, the occluder returns to the initial position.
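For concreteness, the timing protocol above can be written down as an event schedule. The following Python sketch is purely illustrative: the event names and the build_schedule helper are our own, and the real occluder is driven by the FPGA-controlled stepper motor rather than by this code. The uncover duration in the cover-uncover test is an assumption, as only the occlusion time is specified above.

```python
# Illustrative reconstruction of the cover-test schedule described above.
# Durations are in seconds. Event names and this helper are hypothetical;
# the actual occluder is driven by the FPGA/stepper-motor servo system.

def build_schedule():
    events = [("wait_for_AMCap", 6.0)]  # let the recording application react

    # Unilateral cover test: the occluder starts on the left eye, then each
    # eye gets three 1 s uncover / 1 s cover cycles.
    for eye in ("left", "right"):
        for _ in range(3):
            events.append(("uncover_" + eye, 1.0))
            events.append(("cover_" + eye, 1.0))

    # Alternating cover test: three rounds of 1 s left / 1 s right occlusion.
    for _ in range(3):
        events.append(("cover_left", 1.0))
        events.append(("cover_right", 1.0))

    # Cover-uncover test: as the unilateral test, but with 2 s occlusions.
    for eye in ("left", "right"):
        for _ in range(3):
            events.append(("cover_" + eye, 2.0))
            events.append(("uncover_" + eye, 1.0))

    events.append(("return_to_initial_position", 0.0))
    return events

if __name__ == "__main__":
    for name, duration in build_schedule():
        print(f"{name:28s} {duration:.1f} s")
```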
We cooperated with the Hong Kong Association of Squint and Double Vision Sufferers to collect strabismic data. In total, 15 members of the association consented to participate in our experiments. In addition to the 15 adult subjects, 4 children were invited to join our study. The adults and children, including both males and females, were within the age ranges of 25 to 63 years and 7 to 10 years, respectively. The camera was configured to capture a resolution of 1280 × 720 pixels at a frame rate of 60 fps. The distance between the target and the eyes was 33 cm. If wearing corrective lenses, the subject was requested to perform the tests twice, once with and once without them. After ethics approval and informed consent, the 19 subjects followed the data acquisition procedure introduced above to participate in the experiments. Finally, 24 samples were collected, five of which were with glasses.

3. Methodology

To assess strabismus, it is necessary to determine the extent of the unconscious movement of the eyes when applying the cover test. To meet this requirement, a method consisting of six stages is proposed, as shown in Figure 2. (1) The video data are first processed to extract the eye regions, to get ready for the following procedures. (2) The iris boundary is detected to obtain the iris size for further calculations and to segment a template region for template matching. (3) The key frames are detected to locate the positions at which the stimuli occur. (4) Pupil localization is performed to identify the coordinates of the pupil. (5) Having detected the key frames and the pupil, the deviation of the eye movements can be calculated. (6) This is followed by the strabismus detection stage, which obtains the prism diopters of misalignment and classifies the type of strabismus. Details of these stages are described below.


Figure 2. The workflow of the proposed method for the assessment of strabismus.

3.1. Eye Region Extraction

At this stage, a fixed sub-image containing the eye regions, while excluding regions of no interest (like the nose and hair), is extracted to reduce the search space for the subsequent steps. In our system, the positions of the subject and the camera remain the same, so the data captured by the system show a high degree of consistency; that is, half of the face, from the tip of the nose to the hair, occupies the middle area of the image. This a priori information, together with anthropometric relations, can be used to quickly identify the rough eye regions. The boundary of the eye regions can be defined as

px = 0.2 × W, py = 0.4 × H, w = 0.6 × W, h = 0.3 × H, (1)

where W and H are the width and height of the image, w and h are the width and height of the eye regions, and (px, py) defines the top-left position of the eye regions, as shown in Figure 3.

Thus, the eye regions are extracted, and the right and left eye can be distinguished by dividing the area into two parts and comparing the x coordinates of the upper-left corners of both eye regions: the part with the smaller x coordinate corresponds to the right eye, and vice versa.
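A minimal sketch of this stage in Python/NumPy, assuming the frame is already available as an array; the function name and half-split convention (unmirrored image, so the half with the smaller x coordinate is the subject's right eye) follow the description above.

```python
import numpy as np

def extract_eye_regions(frame: np.ndarray):
    """Crop the rough eye band of Equation (1) and split it into the two eyes.

    The fixed ratios rely on the controlled setup (chin rest, fixed camera),
    so half of the face occupies the middle of every frame.
    """
    H, W = frame.shape[:2]
    px, py = int(0.2 * W), int(0.4 * H)   # top-left corner of the eye band
    w, h = int(0.6 * W), int(0.3 * H)     # width and height of the eye band
    band = frame[py:py + h, px:px + w]

    # The image is not mirrored, so the half with the smaller x coordinate
    # is the subject's right eye.
    right_eye = band[:, : w // 2]
    left_eye = band[:, w // 2:]
    return right_eye, left_eye
```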

Figure 3. The rough localization of the eye regions in an image from our database. Half of the face occupies the middle of the image. The eye region is highlighted by the red window, which is slightly smaller than the actual eye region in order to obtain a clearer display. The right and left eye regions can be obtained by dividing the eye region into two parts.

3.2. Iris Measure and Template Detection

During this stage, the measure and template of the iris are detected. To achieve this, it is necessary to locate the iris boundary, particularly the boundary between the iris and the sclera. The flowchart of this stage is shown in Figure 4. (1) First, the eye image is converted from RGB to grayscale. (2) Then a Haar-like feature is applied to the grayscale image to detect the exact eye region, with the objective of further narrowing the area of interest. This feature extraction depends on a local feature of the eye, that is, the eye socket appears much darker in grayscale than the area of skin below it. The width of the rectangle window is set to be approximately the horizontal length of the eye, while the height of the window is variable within a certain range. The maximum response of this step corresponds to the eye and the skin area below it. (3) A Gaussian filter is applied to the result of (2), with the goal of smoothing and reducing noise. (4) Then, the Canny method is applied as an edge-highlighting technique. (5) The circular Hough transform is employed to locate the iris boundary, due to its circular character. In performing this step, only the vertical gradients of the edge map from (4) are taken for locating the iris boundary [32].
This is based on the consideration that the eyelid edge map will corrupt the circular iris boundary edge map, because the upper and lower iris regions are usually occluded by the eyelids and the eyelashes. The eyelids are usually horizontally aligned. Therefore, the technique of

excluding the horizontal edge map reduces the influence of the eyelids and makes the circle localization more accurate and efficient. (6) Subsequently, we segment a region with dimensions of 1.2× radius, horizontally, and 1.5× radius, vertically, on both sides from the iris center in the original gray image. The radius and iris center used in this step are the radius and center coordinates of the iris region detected in the last step. These values were chosen so that a complete iris region could be segmented without any damage. This region will be used as a template for the next stage.

Figure 4. The flowchart of iris boundary detection. The result of the exact eye region detection is marked by the red window on the original grayscale image. The red circle represents the iris boundary.

The above operations are applied to the right and left eye regions, respectively, in a ten-frame interval, and ten pairs of iris radius values are extracted. The interval should be chosen to meet two conditions. First, the radius should be accurately determined in the interval. Second, the interval should not influence the next stage, because the segmented region will be used for template matching. By the end of the interval, the iris radius value with the largest frequency is determined as the final iris radius. Thus, we have the right iris and left iris, with radii Rr and Rl, respectively.
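The steps above can be sketched compactly with scikit-image, which exposes a circular Hough transform that operates on a precomputed edge map. This is a hedged reading of the pipeline, not the authors' code: the eyelid suppression is emulated by keeping only edge pixels whose horizontal gradient dominates (predominantly vertical edges), and the radius range 28 to 45 pixels is the empirical range reported in Section 4.1.

```python
import numpy as np
from collections import Counter
from skimage.feature import canny
from skimage.filters import gaussian, sobel_h, sobel_v
from skimage.transform import hough_circle, hough_circle_peaks

def detect_iris(gray_eye, radii=np.arange(28, 46)):
    """Steps (3)-(5): smooth, take Canny edges, drop horizontally aligned
    (eyelid) edges, then locate the limbus with a circular Hough transform."""
    smooth = gaussian(gray_eye, sigma=2)
    edges = canny(smooth)
    # Keep only predominantly vertical edges: the x-derivative (sobel_v)
    # must dominate the y-derivative (sobel_h), suppressing eyelid edges.
    edges &= np.abs(sobel_v(smooth)) > np.abs(sobel_h(smooth))
    hspace = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(hspace, radii, total_num_peaks=1)
    return int(cx[0]), int(cy[0]), int(r[0])

def final_iris_radius(eye_frames):
    """The final radius is the most frequent value over a ten-frame interval."""
    counts = Counter(detect_iris(f)[2] for f in eye_frames)
    return counts.most_common(1)[0][0]
```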

3.3. Key Frame Detection

At this stage, key frame detection is performed with the template matching technique on the eye region. The cover test is essentially a stimulus-response test: what we are interested in is whether eye movements occur when a stimulus occurs. In the system, the entire test process is recorded in the video, which contains nearly 3000 frames over a length of about 50 s. We examined all frames between two stimuli. The stimuli we focused on are the unilateral cover test for the left and right eye, the alternating cover test for the left and right eye, and the cover-uncover test for the left and right eye. In total, 18 stimuli are obtained, with 6 types of stimuli over 3 rounds. The useful information accounts for about two-fifths of the video; therefore, it is more efficient for the algorithm to discard the redundant information. Key frame detection serves the purpose of finding the frames where the stimuli occur.

The right iris region segmented in Section 3.2 is used as a template, and template matching is applied to the eye regions. The thresholds TH1 and TH2 are set for the right eye region and the left eye region, respectively, and TH2 is smaller than TH1, as the right iris region is used as the template. An iris region is detected if the matching result is bigger than the threshold. In the nearby region of the iris, there may be many matching points which represent the same iris. The repeated points can be removed by using a distance constraint. Therefore, the number of matched templates is consistent with the actual number of irises. The frame number, the number of irises detected in the right eye region, and the number of irises detected in the left region are saved in memory. Then we search the memory to find the position of the stimulus. Taking the unilateral cover test for the left eye as an example, the numbers of irises detected are [1 1] before the left eye is covered and [1 0] after covering the left eye. Therefore, we can use the state change from [1 1] to [1 0] to determine the corresponding frame of the stimulus. The correspondence between state changes and stimuli is shown in Table 1. Thus, the frame numbers of the eighteen stimuli can be obtained.


Table 1. State changes and stimulus.

State Change      Stimulus
[1 1] → [1 0]     Covering the left eye in unilateral cover test
[1 1] → [0 1]     Covering the right eye in unilateral cover test
[0 1] → [1 0]     Uncovering the right eye in alternate cover test
[1 0] → [0 1]     Uncovering the left eye in alternate cover test
[1 0] → [1 1]     Uncovering the left eye in cover-uncover test
[0 1] → [1 1]     Uncovering the right eye in cover-uncover test
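Table 1 amounts to a small lookup table over per-frame detection states, which a few lines of Python can make explicit. The state tuples are ordered (right, left), matching the [right left] notation above; the function name and data layout are illustrative assumptions.

```python
# Key frame search as a scan over per-frame detection states. Each state is
# (n_right, n_left): how many iris templates matched in each eye region.
# The mapping reproduces Table 1; names and data layout are illustrative.

STIMULI = {
    ((1, 1), (1, 0)): "Covering the left eye in unilateral cover test",
    ((1, 1), (0, 1)): "Covering the right eye in unilateral cover test",
    ((0, 1), (1, 0)): "Uncovering the right eye in alternate cover test",
    ((1, 0), (0, 1)): "Uncovering the left eye in alternate cover test",
    ((1, 0), (1, 1)): "Uncovering the left eye in cover-uncover test",
    ((0, 1), (1, 1)): "Uncovering the right eye in cover-uncover test",
}

def find_key_frames(states):
    """states: list of (frame_number, (n_right, n_left)) from template
    matching. Returns (frame_number, stimulus) pairs for all state changes
    listed in Table 1; a full recording should yield eighteen of them."""
    key_frames = []
    for (_, prev), (frame, curr) in zip(states, states[1:]):
        stimulus = STIMULI.get((prev, curr))
        if stimulus is not None:
            key_frames.append((frame, stimulus))
    return key_frames
```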

3.4. Pupil Localization

The pupil localization process is used to locate the pupil, which is the dark region of the eye controlling the light entrance. The flowchart of this stage is shown in Figure 5. (1) First, the eye image is converted into grayscale. (2) The Haar-like rectangle feature, same as that in Section 3.2, is applied to narrow the eye region. (3) Then another Haar-like feature, the center-surround feature with a variable inner radius of r and outer radius of 3r, is applied to the exact eye region detected in step 2. This feature makes use of the pupil being darker than the surrounding area. Therefore, the region corresponding to the maximum response of the Haar feature is a superior estimate of the iris region. The center coordinates and radius of the Haar feature are obtained, and a region can be segmented with a dimension of 1.2× radius, horizontally and vertically, on both sides from the center of the detected region, to make sure the whole pupil is in the segment. Then we perform the following techniques. (4) Gaussian filtering is used to reduce noise and smooth the image while preserving the edges. (5) The morphology opening operation is applied to eliminate small objects, separate small objects at slender locations, and restore others. (6) The edges are detected through the Canny filter, and the contour points are obtained.

Figure 5. The flowchart of pupil localization. Steps 1 and 2 are omitted here, since these two steps are the same as Steps 1 and 2 of Section 3.2, and Step 2 represents the detected exact eye region of Step 2. In (3), the segmented pupil region is marked by the red window on the detected exact region. In (4), (5), and (6), the presented images are enlarged and bilinearly interpolated for a good display. In (7), an ellipse is fit to the contour of the pupil, and the red ellipse and the cross separately mark the result of the fitting and the center of the pupil.

Given a set of candidate contour points of the pupil, the next step of the algorithm is to find the best fitting ellipse. (7) We applied the Random Sample Consensus (RANSAC) paradigm for ellipse fitting [33]. RANSAC is an effective technique for model fitting in the presence of a large but unknown percentage of outliers in a measurement sample. In our application, inliers are all of those contour points that correspond to the pupil contour, and outliers are contour points that correspond to other contours, like the upper and the lower eyelid. After the necessary number of iterations, an ellipse is fit to the largest consensus set, and its center is considered to be the center of the pupil. The frame number and pupil center coordinates are stored in memory.
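A compact OpenCV-based sketch of step (7) follows. It is one plausible implementation, not the authors' code: the inlier test uses an approximate point-to-ellipse distance, and the iteration count and tolerance are placeholder values.

```python
import random
import numpy as np
import cv2

def _ellipse_error(points, ellipse):
    """Approximate pixel distance from each point to an ellipse: map points
    into the ellipse's axis-aligned frame, compare the radial position with 1,
    and rescale by the semi-minor axis."""
    (cx, cy), (w, h), angle = ellipse
    t = np.deg2rad(angle)
    x, y = points[:, 0] - cx, points[:, 1] - cy
    u = x * np.cos(t) + y * np.sin(t)
    v = -x * np.sin(t) + y * np.cos(t)
    r = np.sqrt((2 * u / w) ** 2 + (2 * v / h) ** 2)  # equals 1 on the ellipse
    return np.abs(r - 1) * (min(w, h) / 2)

def ransac_ellipse(points, iterations=200, tolerance=2.0):
    """Fit an ellipse to pupil contour points while rejecting eyelid outliers.
    points: (N, 2) array of Canny contour coordinates. Returns the OpenCV
    ellipse ((cx, cy), (width, height), angle); (cx, cy) is the pupil center."""
    points = np.asarray(points, dtype=np.float32)
    best_inliers = None
    for _ in range(iterations):
        sample = points[random.sample(range(len(points)), 5)]
        try:
            candidate = cv2.fitEllipse(sample)  # needs at least 5 points
        except cv2.error:
            continue  # degenerate sample (e.g., collinear points)
        inliers = points[_ellipse_error(points, candidate) < tolerance]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the largest consensus set, as described in the text.
    return cv2.fitEllipse(best_inliers)
```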

3.5. Deviation Calculation

In order to analyze the eye movement, the deviation of the eyes during the stimulus process needs to be calculated. During a stimulus process, the position of the eye remains motionless before the stimulus occurs; after the stimulus, the eye responds. The response can be divided into two scenarios. In the simpler case, the eyeball remains still. In the more complex case, the eyeball starts to move after a short while and then stops moving and keeps still until the next stimulus. Based on the statistics in the database, the eyes complete the movement within 17 frames, and the duration of the movement is about 3 frames.

The schematic of the deviation calculation is shown in Figure 6. The pupil position data within the region from 6 frames before the detected key frame to 60 frames after the key frame are selected as a data matrix. Each row of the matrix contains the frame number and the x, y coordinates of the pupil center. Next, an iterative process is applied to filter out the singular values of the pupil detection. The current line of the matrix is subtracted from the previous line, and the frame number of the previous line, the difference between the frame numbers Δf, and the differences between the coordinates Δx, Δy are retained. If Δx > Δf·v, where v is the statistical maximum of the offset of the pupil position between two adjacent frames, then this frame is considered to be wrong and the corresponding row in the original matrix is deleted. This process iterates until no rows are deleted or the number of loops exceeds 10. Finally, we use the average of the coordinates of the first five rows of the reserved matrix as the starting position and the average of the last five rows as the ending position, thus obtaining the deviation of each stimulus, as expressed by the equations:

Devp(x) = |xe − xs|;  Devp(y) = |ye − ys|,  (2)

where xe and ye are the ending positions of the eye in a stimulus, xs and ys are the starting positions of the eye, and Devp(x), Devp(y) are the horizontal and vertical deviations in pixels, respectively.

Figure 6. The schematic of deviation calculation. (1) The input data consist of the interval from 6 frames before the key frame to 60 frames after the key frame. The frame pointed to by the red arrow represents the detected key frame, and the symbols “×” and “√” below the images indicate the abnormality or normality of the pupil detection. (2) With the false frame filtering completed, the abnormal frames are deleted while the normal frames are reserved. The average of the pupil locations of the first five frames is calculated as the starting position, while the average of the pupil locations of the last five frames is the ending position. (3) The deviations in pixels are calculated. For an intuitive illustration, the starting position, represented by the green dot, and the ending position, represented by the red dot, are matched into one eye image, indicating the size of the image. “dx”, “dy” represent the deviations in the horizontal and vertical directions, respectively.
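The filtering and averaging just described reduce to a short NumPy routine. The sketch below assumes the per-frame records from Section 3.4 are available as rows of [frame_number, x, y]; the parameter v is as defined in the text, while the function name and data layout are illustrative.

```python
import numpy as np

def stimulus_deviation(rows, v):
    """Equation (2) with the singular-value filtering described above.
    rows: (N, 3) array of [frame_number, x, y] pupil records covering
    6 frames before to 60 frames after a key frame; v: statistical maximum
    pupil offset between two adjacent frames."""
    rows = np.asarray(rows, dtype=float)
    for _ in range(10):                       # at most 10 filtering passes
        df = np.diff(rows[:, 0])              # frame-number differences Δf
        dx = np.abs(np.diff(rows[:, 1]))      # horizontal offsets Δx
        bad = np.where(dx > df * v)[0] + 1    # rows jumping implausibly far
        if bad.size == 0:
            break
        rows = np.delete(rows, bad, axis=0)
    start = rows[:5, 1:].mean(axis=0)         # average of the first five rows
    end = rows[-5:, 1:].mean(axis=0)          # average of the last five rows
    dev_x, dev_y = np.abs(end - start)        # Devp(x), Devp(y) in pixels
    return dev_x, dev_y
```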

3.6. Strabismus Detection


Having obtained the deviation of each stimulus, the deviation value in pixels Devp can be converted into prism diopters Dev∆, which is calculated using the equation:

Dev∆ = (DEmm / DEp) · dpMM · Devp, (3)

where DEmm is the mean diameter of the iris boundary of adult patients, DEmm = 11 mm [34]; DEp is the diameter of the iris boundary detected, in pixels; and dpMM is a constant for converting millimeters into prismatic diopters (∆/mm), dpMM = 15∆ [35]. Finally, we have the deviation Dev∆ expressed in diopters.

The subject’s deviation values are calculated separately for the different cover tests. The subject’s deviation value for each test is the average of the deviations calculated for both eyes. These values are used to detect the presence or absence of strabismus.

The type of strabismus can first be classified as manifest strabismus or latent strabismus. According to the direction of deviation, it can be further divided into horizontal eso-tropia (phoria), exo-tropia (phoria), vertical hyper-tropia (phoria), or hypo-tropia (phoria). The flowchart of the judgment of strabismus types is shown in Figure 7. If the eyes move in the unilateral cover test, the subject is considered to have manifest strabismus and the corresponding type can be determined too, so it is unnecessary to consider the alternate cover test and the cover-uncover test, and the assessment ends. Nevertheless, if the eye movement does not occur in the unilateral test stage, there is still the possibility of latent strabismus, in spite of the exclusion of manifest squint. We proceed to explore the eye movement in the alternating cover test and cover-uncover test. If eye movement is observed, the subject is determined to have heterophoria, and the specific type of strabismus is then given on the basis of the direction of eye movement. If not, the subject is diagnosed as normal.
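Equation (3) is a direct unit conversion: the detected iris diameter in pixels, together with the 11 mm mean adult iris diameter, gives a pixel-to-millimeter scale, and each millimeter of deviation corresponds to 15 prism diopters [35]. A one-function sketch (the names are ours):

```python
DE_MM = 11.0  # mean adult iris diameter in millimeters [34]
DP_MM = 15.0  # prism diopters per millimeter of deviation [35]

def pixels_to_prism_diopters(dev_pixels, iris_diameter_pixels):
    """Equation (3): Dev∆ = (DEmm / DEp) · dpMM · Devp."""
    return DE_MM / iris_diameter_pixels * DP_MM * dev_pixels

# Example: a 12 px horizontal deviation with a 66 px iris diameter maps to
# 11/66 * 15 * 12 = 30 prism diopters.
```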

Figure 7. The flowchart of the judgment of strabismus types.

4. Experimental Results

In this section, the validation results of the intelligent strabismus evaluation method are presented, including the results of the iris measure, key frame detection, pupil localization, and the measurement of the deviation of strabismus. In order to verify the effectiveness of the proposed automated methods, ground truths of the deviations in prism diopters were provided by manually observing and calculating the deviations of the eyes for all samples. The measures of the automated methods have been compared with the ground truths.

4.1. Results of Iris Measure

With the eye regions extracted, an accuracy of 100% was achieved in detecting the iris measure. The range of values that defines the minimum and maximum radius size for the Hough transform was empirically identified to be between 28 and 45 pixels for our database. Due to our strategy of choosing the radius with the largest frequency in the interval, the radius of the iris could be accurately obtained even if there were individual differences or errors. An example of the iris measure in 10 consecutive frames is shown in Figure 8. As can be seen from the figure, in an interval of 10 frames, there were 8 frames detected with a radius of 33 and 2 frames detected with a radius of 34, so the radius of the iris was determined to be 33.

Figure 8. An example of the iris measure in 10 consecutive frames. The iris boundary located by the methodology is marked by the red circle. The radius of the iris was determined to be 33 pixels, according to this strategy.

4.2. Results of Key Frame Detection

In order to measure the accuracy of the key frame detection, the key frames of all samples, observed and labeled manually, were regarded as the ground truths. The distance D(f) between the detected key frame fp and the manual ground truth fg was calculated using the equation:

D(f) = |fp − fg| (4)

The accuracy of the key frame detection could be measured by calculating the percentage of the key frames for which the distance D(f) was within a threshold in frames. The accuracy of the key frame detection for each cover test is given in Table 2.

Table 2. The accuracy of the key frame detection for cover tests.

             Unilateral Cover Test   Alternating Cover Test   Cover-Uncover Test
D^(f) ≤ 2           93.1%                   62.5%                  85.4%
D^(f) ≤ 4           97.9%                   88.2%                  89.6%
D^(f) ≤ 6           97.9%                   97.2%                  91.0%

Taking the unilateral cover test as an example, the detection accuracy was 93.1%, 97.9%, and 97.9% at a distance of within 2, 4, and 6 frames, respectively. As we can see, the detection rate in the alternating cover test was lower than that in the others within the 2- and 4-frame intervals. This could be attributed to phantom effects that might occur with the rapid motion of the occluder: the residual color of the trace left by the occluder merges with the color of the eyes and might interfere with the detection in the related frames. The movement of the occluder between the two eyes brings more perturbation than movement on one side. The detection rate is good for each cover test when the interval is set within 6 frames. As the deviation calculation method (Section 3.5) relaxes the reliance on key frame detection, our method can still obtain a promising result.
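As a concrete illustration, the accuracy figures in Table 2 reduce to a direct application of Equation (4) (a minimal sketch in Python; the function name and input layout are our assumptions):

```python
import numpy as np

def key_frame_accuracy(detected, ground_truth, thresholds=(2, 4, 6)):
    """Fraction of detected key frames whose distance to the manual ground
    truth, D(f) = |f_p - f_g| per Equation (4), is within each threshold
    (in frames). Inputs are aligned arrays of frame indices."""
    d = np.abs(np.asarray(detected) - np.asarray(ground_truth))
    return {t: float(np.mean(d <= t)) for t in thresholds}

# Example: key_frame_accuracy(pred_frames, gt_frames) might return
# {2: 0.931, 4: 0.979, 6: 0.979} for the unilateral cover test.
```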


4.3. Results of Pupil Localization

The accuracy of the proposed pupil detection algorithm was tested on static eye images from the dataset we built. The dataset consists of 5795 eye images, with a resolution of 300 × 150 pixels for samples not wearing corrective lenses and 400 × 200 pixels for samples wearing lenses. All images were taken from our video database. The pupil location was manually labeled as the ground truth for analysis.

In order to assess the accuracy of the pupil detection algorithm, the Euclidean distance D_p^(E) between the detected and the manually labeled pupil coordinates, as well as the distances D_p^(x) and D_p^(y) along the two axes of the coordinate system, were calculated for the entire dataset. The detection rates measured in the individual directions are informative in their own right, as the mobility of the eyes has two degrees of freedom. The accuracy of the pupil localization could be measured by calculating the percentage of eye pupil images for which the pixel error was lower than a threshold, in pixels. We compared our pupil detection method with the classical Starburst algorithm [33] and the circular Hough transform (CHT) [36]. The performance of pupil localization with the different algorithms is illustrated in Table 3. The following statistical indicators were used: “D_p^(E) < 5” and “D_p^(E) < 10” correspond to the detection rate at 5 and 10 pixels in Euclidean distance; “D_p^(x) < 4” and “D_p^(y) < 2” represent the percentage of eye pupil images for which the pixel error was lower than 4 pixels in the horizontal direction or 2 pixels in the vertical direction.
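These indicators amount to a few vectorized comparisons (a minimal sketch in Python; array names and the (N, 2) coordinate layout are our assumptions):

```python
import numpy as np

def pupil_detection_rates(pred, gt):
    """Detection rates for pupil localization.

    pred, gt: (N, 2) arrays of (x, y) pupil coordinates in pixels,
    detected and manually labeled, respectively (assumed layout).
    """
    err = np.asarray(pred, float) - np.asarray(gt, float)
    d_e = np.linalg.norm(err, axis=1)                 # Euclidean distance D_p^(E)
    d_x, d_y = np.abs(err[:, 0]), np.abs(err[:, 1])   # per-axis distances
    return {
        "D_p^(E) < 5": float(np.mean(d_e < 5)),
        "D_p^(E) < 10": float(np.mean(d_e < 10)),
        "D_p^(x) < 4": float(np.mean(d_x < 4)),
        "D_p^(y) < 2": float(np.mean(d_y < 2)),
    }
```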

Table 3. Performance of pupil localization with different algorithms on our dataset.

Method            D_p^(E) < 5   D_p^(E) < 10   D_p^(x) < 4   D_p^(y) < 2
Starburst [33]       27.0%          44.2%          39.9%         27.8%
CHT [36]             84.6%          85.0%          84.6%         83.4%
Ours                 86.6%          94.3%          90.6%         80.7%

As we can see, the performance of the Starburst algorithm was much poorer, owing to its detection of pupil contour points using an iterative feature-based technique. In this step, candidate feature points belonging to the contour are collected along rays that extend radially away from the starting point, until a threshold Φ = 20 is exceeded. For our database, many disturbing contour points were detected, especially on the limbus. This could cause the final stage, which finds the best-fitting ellipse for a subset of candidate feature points using the RANSAC method, to mistakenly fit the limbus. The performance of the CHT was acceptable, but it depends heavily on the estimated range of the pupil radius. There might be overlaps between the radii of the pupil and the limbus for different samples, which invalidates the algorithm for some samples. Our method, by contrast, shows a good detection result overall.

The overall detection rate is an average, and poor detection in some samples lowered the overall performance. Some cases of pupil localization are shown in Figure 9. The correct detection results in Figure 9a show that our algorithm achieves a good detection result in most cases, even with interference from glasses. Some typical examples of errors are given in Figure 9b. The errors can be attributed to the following factors: (1) a large part of the pupil was covered by the eyelids, so that the exposed pupil contour, together with part of the eyelid contour, was fitted to an ellipse, as shown in set 1 of Figure 9b; (2) the pupil was extremely small, so the model fitting was prone to a result similar to factor 1, as shown in set 2 of Figure 9b; (3) the contrast within the iris region was not apparent enough for the Canny filter to extract a good edge of the pupil, leading to poor results, as shown in set 3 of Figure 9b; (4) detection failures were caused by phantom effects when the fast-moving occluder was close to the eyeball, as shown in set 4 of Figure 9b.
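For concreteness, the kind of Canny-plus-ellipse-fitting pipeline discussed above can be sketched as follows (Python with OpenCV ≥ 4 assumed; all thresholds and the darkness-scoring heuristic are illustrative assumptions, not the exact parameters of our implementation):

```python
import cv2
import numpy as np

def locate_pupil(eye_gray):
    """Rough sketch: locate the pupil center by edge detection plus ellipse
    fitting, keeping the candidate with the darkest interior."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    edges = cv2.Canny(blurred, 40, 80)  # pupil/limbus boundary edges
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    best, best_darkness = None, 255.0
    for c in contours:
        if len(c) < 20:  # fitEllipse needs >= 5 points; require a longer arc
            continue
        ellipse = cv2.fitEllipse(c)
        (cx, cy), _, _ = ellipse
        if not (0 <= cx < eye_gray.shape[1] and 0 <= cy < eye_gray.shape[0]):
            continue
        # the pupil is the darkest blob: score candidates by interior intensity
        mask = np.zeros_like(eye_gray)
        cv2.ellipse(mask, ellipse, 255, -1)
        mean_val = cv2.mean(eye_gray, mask=mask)[0]
        if mean_val < best_darkness:
            best_darkness, best = mean_val, (cx, cy)
    return best  # (x, y) in pixels, or None if no candidate survived
```

Such a sketch also makes the error factors above plausible: eyelid occlusion and a tiny pupil both corrupt the contour handed to the ellipse fit, and low iris contrast starves the Canny step of edges.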

Figure 9. The pupil detection cases: (a) some examples of correctly-detected images; (b) some typical errors.

4.4. Results of the Deviation Measurement

For analyzing the accuracy of the deviation calculated by the proposed method, the deviation of each stimulus was computed as the ground truth by manually determining the starting position and ending position of each excitation for all samples, labeling the pupil position in the corresponding frames, and then calculating the strabismus degrees in prism diopters. The deviations produced by the automated method were compared with these ground truths. The accuracy of the deviation measurement was measured by calculating the percentage of deviations for which the error between the detected deviation and the manual ground truth was lower than a threshold. The accuracy rates using different indicators are shown in Tables 4 and 5; a sketch of the prism-diopter conversion is given after the table below.

(∆) < < < Unilateral 70.1% 88.2% 94.4% Alternate 71.5% 93.8% 96.5% Uncover 68.8% 86.1% 88.2%


Figure 10. The correlation of the deviation between the ground truth and predicted results. The first row shows the horizontal axis for the different cover tests, and the second row shows the vertical direction.

5. Conclusions and Future Work

In this paper, we proposed and validated an intelligent measurement method for strabismus deviation in digital videos, based on the cover test. The algorithms were applied to video recordings made by near-infrared cameras while the subject performed the cover test for a diagnosis of strabismus. In particular, we focused on the automated algorithms for identifying the extent to which the eyes involuntarily move when a stimulus occurs. We validated the proposed method against the manual ground truth of deviations in prism diopters from our database. Experimental results suggest that our automated system can evaluate strabismus deviation with high accuracy.

Although the proposed intelligent evaluation system for strabismus achieves a satisfying accuracy, some aspects remain to be improved in our future work. First, regarding data acquisition, there are obvious changes in the video brightness due to the cover of the occluder. For example, almost half of the light was blocked when one eye was covered completely. This might perturb the algorithm, especially the pupil detection, so our system needs to be upgraded to reduce this interference. Second, the subjects were required to remain motionless while the cover test was performed. In fact, a slight movement of the head that is not detectable to humans will cause a certain deviation in the detection of the eye position, thus reducing the accuracy of the final evaluation. Developing a fine eye localization that eliminates slight movements would improve the result. Additionally, our system could also be used for the automatic diagnosis of strabismus in the future.

Author Contributions: Y.Z. and H.F. conceived and designed the experiments. Y.Z. and R.L. performed the experiments. Y.Z. and H.F. analyzed the data and wrote the manuscript. Y.Z., H.F., W.-L.L., Z.C., D.D.F., Z.S. and D.W. contributed to revising the manuscript.

Funding: The research was fully funded by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference No.: UGC/FDS13/E01/17).

Acknowledgments: We would like to thank the Hong Kong Association of Squint and Double Vision Sufferers for their cooperation in collecting the experimental data.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Beauchamp, G.R.; Black, B.C.; Coats, D.K.; Enzenauer, R.W.; Hutchinson, A.K.; Saunders, R.A. The management of strabismus in adults-I. Clinical characteristics and treatment. J. AAPOS 2003, 7, 233–240. [CrossRef]
2. Graham, P.A. Epidemiology of strabismus. Br. J. Ophthalmol. 1974, 58, 224–231. [CrossRef] [PubMed]
3. Castanes, M.S. Major review: The underutilization of vision screening (for amblyopia, optical anomalies and strabismus) among preschool age children. Binocul. Vis. Strabismus Q. 2003, 18, 217–232. [PubMed]
4. Mojon-Azzi, S.M.; Kunz, A.; Mojon, D.S. The perception of strabismus by children and adults. Graefes Arch. Clin. Exp. Ophthalmol. 2010, 249, 753–757. [CrossRef] [PubMed]
5. Jackson, S.; Harrad, R.A.; Morris, M.; Rumsey, N. The psychosocial benefits of corrective surgery for adults with strabismus. Br. J. Ophthalmol. 2006, 90, 883–888. [CrossRef] [PubMed]
6. Klauer, T.; Schneider, W.; Bacskulin, A. Psychosocial correlates of strabismus and squint surgery in adults. J. Psychosom. Res. 2000, 48, 251–253.
7. Menon, V.; Saha, J.; Tandon, R.; Mehta, M.; Sudarshan, S. Study of the psychosocial aspects of strabismus. J. Pediatr. Ophthalmol. Strabismus 2002, 39, 203–208. [CrossRef] [PubMed]
8. Merrill, K.; Satterfield, D.; O'Hara, M. Strabismus surgery on the elderly and the effects on disability. J. AAPOS 2009, 14, 196–198. [CrossRef] [PubMed]
9. Bez, Y.; Coskun, E.; Erol, K.; Cingu, A.K.; Eren, Z.; Topcuoglu, V.; Ozerturk, Y.; Erol, M.K. Adult strabismus and social phobia: A case-controlled study. J. AAPOS 2009, 13, 249–252. [CrossRef] [PubMed]
10. Nelson, B.A.; Gunton, K.B.; Lasker, J.N.; Nelson, L.B.; Drohan, L.A. The psychosocial aspects of strabismus in teenagers and adults and the impact of surgical correction. J. AAPOS 2008, 12, 72–76.e1. [CrossRef] [PubMed]
11. Egrilmez, E.D.; Pamukçu, K.; Akkin, C.; Palamar, M.; Uretmen, O.; Köse, S. Negative social bias against children with strabismus. Acta Ophthalmol. Scand. 2003, 81, 138–142.
12. Mojon-Azzi, S.M.; Mojon, D.S. Strabismus and employment: The opinion of headhunters. Acta Ophthalmol. 2009, 87, 784–788. [CrossRef] [PubMed]
13. Mojon-Azzi, S.M.; Potnik, W.; Mojon, D.S. Opinions of dating agents about strabismic subjects' ability to find a partner. Br. J. Ophthalmol. 2008, 92, 765–769. [CrossRef] [PubMed]
14. McBain, H.B.; Au, C.K.; Hancox, J.; MacKenzie, K.A.; Ezra, D.G.; Adams, G.G.; Newman, S.P. The impact of strabismus on quality of life in adults with and without diplopia: A systematic review. Surv. Ophthalmol. 2014, 59, 185–191. [CrossRef] [PubMed]
15. Douglas, G.H. The Oculomotor Functions & Neurology CD-ROM. Available online: http://www.opt.indiana.edu/v665/CD/CD_Version/CONTENTS/TOC.HTM (accessed on 1 September 2018).
16. Anderson, H.A.; Manny, R.E.; Cotter, S.A.; Mitchell, G.L.; Irani, J.A. Effect of examiner experience and technique on the alternate cover test. Optom. Vis. Sci. 2010, 87, 168–175. [CrossRef]
17. Hrynchak, P.K.; Herriot, C.; Irving, E.L. Comparison of alternate cover test reliability at near in non-strabismus between experienced and novice examiners. Ophthalmic Physiol. Opt. 2010, 30, 304–309. [CrossRef] [PubMed]
18. Helveston, E.M.; Orge, F.H.; Naranjo, R.; Hernandez, L. Telemedicine: Strabismus e-consultation. J. AAPOS 2001, 5, 291–296. [CrossRef]
19. Yang, H.K.; Seo, J.-M.; Hwang, J.-M.; Kim, K.G. Automated analysis of binocular alignment using an infrared camera and selective wavelength filter. Investig. Ophthalmol. Vis. Sci. 2013, 54, 2733–2737. [CrossRef]
20. Min, W.S.; Yang, H.K.; Hwang, J.M.; Seo, J.M. The automated diagnosis of strabismus using an infrared camera. In Proceedings of the 6th European Conference of the International Federation for Medical and Biological Engineering, Dubrovnik, Croatia, 7–11 September 2014; Volume 45, pp. 142–145.
21. De Almeida, J.D.S.; Silva, A.C.; De Paiva, A.C.; Teixeira, J.A.M.; Paiva, A. Computational methodology for automatic detection of strabismus in digital images through Hirschberg test. Comput. Biol. Med. 2012, 42, 135–146. [CrossRef]
22. Valente, T.L.A.; De Almeida, J.D.S.; Silva, A.C.; Teixeira, J.A.M.; Gattass, M. Automatic diagnosis of strabismus in digital videos through cover test. Comput. Methods Prog. Biomed. 2017, 140, 295–305. [CrossRef]
23. Quick, M.W.; Boothe, R.G. A photographic technique for measuring horizontal and vertical eye alignment throughout the field of gaze. Investig. Ophthalmol. Vis. Sci. 1992, 33, 234–246.
24. Model, D.; Eizenman, M. An automated Hirschberg test for infants. IEEE Trans. Biomed. Eng. 2011, 58, 103–109. [CrossRef] [PubMed]
25. Pulido, R.A. Ophthalmic Diagnostics Using Eye Tracking Technology. Master's Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2012.
26. Chen, Z.; Fu, H.; Lo, W.L.; Chi, Z. Eye-tracking aided digital system for strabismus diagnosis. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Budapest, Hungary, 9–12 October 2016.
27. Chen, Z.; Fu, H.; Lo, W.-L.; Chi, Z. Strabismus recognition using eye-tracking data and convolutional neural networks. J. Healthc. Eng. 2018, 2018, 1–9. [CrossRef] [PubMed]
28. Zheng, Y.; Fu, H.; Li, B.; Lo, W.L.; Wen, D. An automatic stimulus and synchronous tracking system for strabismus assessment based on cover test. In Proceedings of the International Conference on Intelligent Informatics and Biomedical Sciences, Bangkok, Thailand, 21–24 October 2018; Volume 3, pp. 123–127.
29. Barnard, N.A.S.; Thomson, W.D. A quantitative analysis of eye movements during the cover test—A preliminary report. Ophthalmic Physiol. Opt. 1995, 15, 413–419. [CrossRef]
30. Peli, E.; McCormack, G. Dynamics of cover test eye movements. Optom. Vis. Sci. 1983, 60, 712–724. [CrossRef]
31. Wright, K.; Spiegel, P. Pediatric Ophthalmology and Strabismus, 2nd ed.; Springer Science and Business Media: Berlin, Germany, 2013.
32. Wildes, R.; Asmuth, J.; Green, G.; Hsu, S.; Kolczynski, R.; Matey, J.; McBride, S. A system for automated iris recognition. In Proceedings of the Second IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; pp. 121–128.
33. Winfield, D.; Li, D.; Parkhurst, D.J. Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) Workshops, San Diego, CA, USA, 21–23 September 2005.
34. Khng, C.; Osher, R.H. Evaluation of the relationship between corneal diameter and lens diameter. J. Cataract Refract. Surg. 2008, 34, 475–479. [CrossRef] [PubMed]
35. Schwartz, G.S. The Eye Exam: A Complete Guide; Slack Incorporated: Thorofare, NJ, USA, 2006.
36. Cherabit, N.; Djeradi, A.; Chelali, F.Z. Circular Hough transform for iris localization. Sci. Technol. 2012, 2, 114–121. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).