
sensors

Letter

Assessment of Camouflage Effectiveness Based on Perceived Color Difference and Gradient Magnitude

Xueqiong Bai, Ningfang Liao * and Wenmin Wu

National Laboratory of Colour Science and Engineering, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; [email protected] (X.B.); [email protected] (W.W.)
* Correspondence: [email protected]; Tel.: +86-136-7121-0649

Received: 22 June 2020; Accepted: 16 August 2020; Published: 19 August 2020

Abstract: We propose a new model to assess the effectiveness of camouflage in terms of perceived color difference and gradient magnitude. The "image color similarity index" (ICSI) and gradient magnitude similarity deviation (GMSD) were employed to analyze color and texture differences, respectively, between background and camouflage images. Information entropy theory was used to calculate weights for each metric, yielding an overall camouflage effectiveness metric. During the analysis process, both spatial and color perceptions of the human visual system (HVS) were considered, to mimic real-world observations. Subjective tests were used to compare our proposed method with previous methods, and our results confirmed the validity of assessing camouflage effectiveness based on perceived color difference and gradient magnitude.

Keywords: image processing; color appearance; camouflage effectiveness

1. Introduction

Camouflage, which serves to blend objects into the background by using similar colors and textures, has many applications in the fields of bionics and robotics, and for military purposes. Over the past few decades, elements of computer vision, statistical analysis, image processing, nanomaterials, human visual perception, and ergonomics have been introduced to camouflage research [1–5]. A good evaluation method to test the effectiveness of camouflage is very important—one which can provide an effective theoretical basis for camouflage research, predict the performance of camouflage in advance, and help to subsequently optimize the design of camouflage patterns.

Vision-based object detection techniques have been used to discriminate between objects and backgrounds, and thus can also be used to evaluate the effectiveness of camouflage [6–8]. These methods, which include the scale invariant feature transform (SIFT) [9], histogram of oriented gradient (HOG) [10], and local binary patterns (LBP) [11], have applications in a variety of fields, such as facial recognition, depth calculation, and three-dimensional reconstruction. When applying these detection methods, it is necessary to consider the characteristics of the human visual system (HVS), to ensure that camouflage is effective for humans. Analyses of the HVS and traditional image quality assessment algorithms have been combined, and the universal image quality index (UIQI) [12] and structural similarity index (SSIM) [13] could be used to obtain results that accord with actual human perception. The UIQI and SSIM have been applied to evaluations of camouflage effectiveness based on the similarity between the camouflage and the background. Additional metrics, such as gradient orientation, luminance, and congruency, have also been used [14]. However, the computation costs for such metrics can be very high, and they yield only small performance improvements.
Sensors 2020, 20, 4672; doi:10.3390/s20174672; www.mdpi.com/journal/sensors

Gradient magnitude similarity deviation (GMSD) [14] decreases the computational costs and allows for accurate modeling of the HVS. GMSD effectively captures diverse local structures and describes texture patterns in detail. This method has already been applied to image processing and computer vision; however, to the best of our knowledge, it has not been applied to the study of camouflage. In background matching tasks, as well as pattern analysis, color similarities are commonly analyzed. Color matching is critical for survival for many animals in the wild. The methods described above are all based on the differences between the textures of the object and background, and use gray or red, green, and blue (RGB) images. However, RGB color models do not represent the colors actually observed by humans, unless a color appearance model and color management system are used [15]. CIE color space was introduced as a standard for application to displays, design, color printing, etc. CIELUV and CIELAB, based on CIE color space, aim to achieve a uniform color space with consideration of differences in the perception of colors by the human eye. Perceptual differences between colors are indicated by the distance between points corresponding to individual colors in the color space; these differences can be used as references when assessing the similarity between an object and its background. It has been shown that color perception, in terms of lightness, chroma, and hue, is influenced by surrounding textures and colors. CIE recommends using CIELAB space to characterize colored surfaces and dyes. As the demand for accurate quantification of color differences has increased, more CIELAB color difference formulas have been developed, such as CMC (l:c) [16], BFD (l:c) [17], and CIEDE2000 [18]. CMC (l:c) and CIEDE2000 have been compared, and the camouflage similarity index (CSI) can be used to measure pixel-to-pixel color differences between a camouflaged object and its background using the CIELAB color system [19]. CSI performs well when used to discriminate an object from a large uniform background [20].
To process complex images, Spatial-CIELAB (S-CIELAB) [21] was developed from CIELAB space and takes into account sensitivity to color patterns; it may be superior for analyzing the similarity between a target object and background in assessments of camouflage effectiveness. Color and texture are both important in camouflage design. Therefore, a model that comprehensively evaluates both parameters will provide better results than one that only considers a single parameter. In this paper, we present a new method for evaluating the effectiveness of camouflage, based on the analysis of color and textural differences between camouflaged objects and their backgrounds. For the color difference analysis, camouflaged objects and backgrounds are transformed into S-CIELAB space. Both spatial and color perceptions of the HVS are considered as the method is based on the way in which the observes the real world. GMSD is applied to assess texture, as it has the advantage of a simple calculation process and yields results that can be related to actual human perception. We designed an experimental procedure using subjective tests to compare performance between the new method and previous ones.

2. Methods

2.1. Overview

Figure 1 shows our proposed camouflage effectiveness assessment method. Several typical background scenes (N) and camouflage images (M) are input into the model. Both color and texture are analyzed, using the "image color similarity index" (ICSI) and GMSD, respectively. To calculate the ICSI, all the background and camouflage images are transformed from RGB to S-CIELAB space, and the color difference between each camouflage and background image is obtained based on the standard deviation of the distance in S-CIELAB space. Regarding GMSD, the gradient magnitudes of pairs of images are calculated, and the texture difference is expressed as the standard deviation of the difference in magnitude between the gradients of the camouflage and background images. During the calculation of the ICSI and GMSD, the resolutions of the camouflage and background images are equalized to facilitate comparison. As the ICSI and GMSD are calculated for image pairs, M × N color and texture difference matrices are obtained separately. The weights of each metric are determined using the information entropy method, and the final assessment of the effectiveness of each camouflage pattern is given by the weighted mean of the difference between the camouflage image and all background images. The algorithms of the ICSI, GMSD, and the information entropy method are described in detail in Sections 2.2–2.4, respectively.

Figure 1. Schematic diagram of the proposed model for assessing camouflage effectiveness: (a) input data of the camouflage object image and the objective-overlapping region of the background image to the model; (b) the algorithm of the image color similarity index (ICSI); (c) the algorithm of gradient magnitude similarity deviation (GMSD); (d) determination of the weights of each metric using the information entropy method.

2.2. Image Color Similarity Index (ICSI)

As mentioned above, S-CIELAB space considers both spatial and color perceptions of the HVS, which enables a more accurate color similarity analysis than other color spaces [21]. Before preprocessing image pairs using S-CIELAB, RGB input images should be transformed into device-independent CIE XYZ tristimulus values [22]. Then, both the camouflage and background image should be converted into the opponent color space. As the HVS can be treated as a low-pass filter from the perspective of imaging processing, low-pass filters for each color channel have been studied in detail in previous works [23]. After spatial filtering, the filtered images are transformed back into CIELAB space, which includes three color channels, where L* represents lightness, a* is the distance from green to red, and b* is the distance from blue to yellow.
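The preprocessing chain described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sRGB-to-XYZ and Poirson–Wandell opponent matrices are standard published values, a single Gaussian per opponent channel stands in for the full sum-of-Gaussians contrast-sensitivity kernels of S-CIELAB, and the channel widths (`sigmas`) are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Standard linear-sRGB -> CIE XYZ (D65) matrix; the paper instead
# characterizes its display via an ICC profile.
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

# Poirson-Wandell opponent transform (XYZ -> luminance, red-green,
# blue-yellow), as used in the S-CIELAB literature.
XYZ2OPP = np.array([[ 0.279,  0.720, -0.107],
                    [-0.449,  0.290, -0.077],
                    [ 0.086, -0.590,  0.501]])

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.089)):
    # Standard CIE XYZ -> CIELAB conversion against a D65 white point.
    t = xyz / np.asarray(white)
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    L = 116*f[..., 1] - 16
    a = 500*(f[..., 0] - f[..., 1])
    b = 200*(f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def s_cielab(rgb_linear, sigmas=(3.0, 5.0, 7.0)):
    """Spatially filtered CIELAB image from a linear-RGB image (H, W, 3)."""
    xyz = rgb_linear @ RGB2XYZ.T
    opp = xyz @ XYZ2OPP.T
    # Low-pass filter each opponent channel; chromatic channels are wider,
    # mimicking the HVS's lower spatial acuity for color.
    for c in range(3):
        opp[..., c] = gaussian_filter(opp[..., c], sigma=sigmas[c])
    xyz_filtered = opp @ np.linalg.inv(XYZ2OPP).T
    return xyz_to_lab(xyz_filtered)
```

A quick sanity check: a uniform mid-gray image has zero chromatic response, and spatial filtering leaves a uniform field unchanged, so a* and b* come out at zero.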
The ICS, which is given by the distance in S-CIELAB color space between the i-th camouflaged object and the object-overlapping region of the j-th background image, can be expressed for pixel (u, v) as follows:

ICS(u,v) = \sqrt{(L_i^* - L_j^*)^2 + (a_i^* - a_j^*)^2 + (b_i^* - b_j^*)^2} (1)

The standard deviation of the color difference was calculated, as this provides a more comprehensive evaluation than the mean:

ICSI = \sqrt{\frac{1}{U \times V} \sum_{u=1}^{U} \sum_{v=1}^{V} \left( ICS(u,v) - \frac{1}{U \times V} \sum_{u=1}^{U} \sum_{v=1}^{V} ICS(u,v) \right)^2} (2)

2.3. Gradient Magnitude Similarity Deviation (GMSD)

GMSD can be used for texture analysis [14]; we used the Sobel operator for this purpose. The GMSD of the i-th camouflage and j-th background image can be obtained in the horizontal and vertical directions, as shown in Equations (3)–(5):

S_i(u,v) = \sqrt{(s_h \otimes I_i(u,v))^2 + (s_v \otimes I_i(u,v))^2} (3)



S_j(u,v) = \sqrt{(s_h \otimes I_j(u,v))^2 + (s_v \otimes I_j(u,v))^2} (4)

s_h = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}, \quad s_v = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix} (5)

where "\otimes" denotes the convolution operation.

GMS(u,v) = \frac{2 S_i(u,v) S_j(u,v)}{S_i^2(u,v) + S_j^2(u,v)} (6)

The standard deviation of the GMS values over the image was computed as the overall GMSD, an index of the textural similarity between the camouflage and background images.

GMSD_{ij} = \sqrt{\frac{1}{U V} \sum_{u=1}^{U} \sum_{v=1}^{V} \left( GMS(u,v) - \frac{1}{U V} \sum_{u=1}^{U} \sum_{v=1}^{V} GMS(u,v) \right)^2} (7)
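Equations (1)–(7) reduce to a few lines of array code. The sketch below is ours, not the paper's code; the small constant `c` in the GMS denominator is our own guard against division by zero on perfectly flat regions and is not part of the paper's formula.

```python
import numpy as np
from scipy.ndimage import convolve

def icsi(lab_i, lab_j):
    # Eqs. (1)-(2): per-pixel Euclidean distance between two (S-)CIELAB
    # images of shape (H, W, 3), summarized by its standard deviation.
    ics = np.sqrt(((lab_i - lab_j) ** 2).sum(axis=-1))
    return ics.std()

# Sobel kernels of Eq. (5).
SOBEL_H = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
SOBEL_V = SOBEL_H.T

def gradient_magnitude(img):
    # Eqs. (3)-(4): magnitude of the horizontal/vertical Sobel responses.
    gh = convolve(img, SOBEL_H)
    gv = convolve(img, SOBEL_V)
    return np.sqrt(gh**2 + gv**2)

def gmsd(img_i, img_j, c=1e-8):
    # Eqs. (6)-(7): gradient magnitude similarity map, then its std.
    si, sj = gradient_magnitude(img_i), gradient_magnitude(img_j)
    gms = (2 * si * sj + c) / (si**2 + sj**2 + c)
    return gms.std()
```

Identical image pairs yield a GMS map that is 1 everywhere, so both indices are zero for a perfect match and grow with dissimilarity.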

2.4. Calculation of Metric Weights

Information entropy theory can be applied to quantify the amount of useful information contained within data. When the difference between the values of an indicator is high, the entropy is small. Weights are higher for parameters providing more information. As mentioned above, M × N color and texture difference matrices can be obtained using the ICSI and GMSD methods, respectively. Using the ICSI and GMSD methods, the difference between the i-th camouflage and j-th background image is given by x_{ijk}, where k = 1 and 2 for the ICSI and GMSD, respectively. Thus, the proportion of the i-th camouflage pattern and j-th background pattern calculated using the k-th method can be expressed as follows:

p_{ijk} = x_{ijk} \Big/ \sum_{i=1}^{M} \sum_{j=1}^{N} x_{ijk} (8)

According to entropy theory [24], the weight of each metric can be calculated using Equation (9), where the information entropy term is p_{ijk} \ln(p_{ijk}):

\omega_k = \frac{1 + \frac{1}{\ln(M \times N)} \sum_{i=1}^{M} \sum_{j=1}^{N} p_{ijk} \ln p_{ijk}}{\sum_{k=1}^{2} \left( 1 + \frac{1}{\ln(M \times N)} \sum_{i=1}^{M} \sum_{j=1}^{N} p_{ijk} \ln p_{ijk} \right)} (9)

Using the proposed model, the final camouflage effectiveness value is given by calculating the mean of the M values for color and texture using Equation (10).

v_j = \sum_{k=1}^{2} \sum_{i=1}^{M} \omega_k x_{ijk} (10)
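Assuming the difference scores are collected into an array of shape (2, M, N) — two metrics, M camouflage patterns, N backgrounds — the weighting and pooling steps might look like this. We pool per camouflage pattern (a weighted mean over all backgrounds), following the text's description; this is a sketch, not the authors' code.

```python
import numpy as np

def entropy_weights(x):
    # x: shape (K, M, N), K metrics (ICSI, GMSD), M camouflage patterns,
    # N backgrounds; all entries are positive difference scores.
    p = x / x.sum(axis=(1, 2), keepdims=True)                 # Eq. (8)
    m_times_n = x.shape[1] * x.shape[2]
    # 1 + (1/ln(MN)) * sum p ln p, per metric, then normalize.  Eq. (9)
    e = 1 + (p * np.log(p)).sum(axis=(1, 2)) / np.log(m_times_n)
    return e / e.sum()

def effectiveness(x):
    # Weighted mean over metrics and backgrounds for each camouflage i.
    w = entropy_weights(x)
    return np.einsum('k,kin->i', w, x) / x.shape[2]
```

Because each per-metric entropy term lies between 0 and 1, the normalized weights are nonnegative and sum to one.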

3. Experimental Setup

To validate our proposed method and compare it to previous ones, we carried out an experiment using an ocean background and several common camouflage patterns. An effectiveness metric for each camouflage pattern was calculated using CSI, UIQI, and our method. To compare the performance of the methods, subjective tests were also carried out comparing the results to HVS assessments.

Figure 2 shows a schematic diagram of the experimental procedure used to validate the developed method. Ten subjects with normal color vision (according to the Ishihara 24-plate test), aged from 22 to 35 years, participated in the experiment. During the experimental process, the participants sat in front of a liquid crystal display (LCD); the set-up was adjusted to ensure a horizontal viewing angle. The distance between the user's eyes and the screen was set to 55 cm. The background and camouflage images were displayed on the screen. The resolution of the images was 1920 × 1080 (the same as the screen resolution), and clipped camouflage images with a resolution of 300 × 100 were placed within the background image, as shown in Figure 2a.

The background images were three images of the sea (under calm, "sparkling", and rough conditions). Six common camouflage patterns were used, including US woodland, MultiCam, and MARPAT, which have also been used in previous studies. The background and camouflage images that we used are shown in the appendix in detail. The background images of the sea were used due to their complex patterns, although other camouflage patterns and background scenes could also be used in conjunction with our method.
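As an illustration of the stimulus generation (our sketch, not the authors' code), randomly placing a clipped 300 × 100 camouflage patch inside a 1920 × 1080 background can be done as follows; the returned corner supports hit-testing of mouse clicks.

```python
import numpy as np

rng = np.random.default_rng()

def place_patch(background, patch):
    """Composite a camouflage patch at a random position in the background.

    background: (H, W, 3) array, e.g. 1080 x 1920.
    patch:      (h, w, 3) array, e.g. 100 x 300 (rows x columns).
    Returns the composite image and the patch's top-left (row, col) corner.
    """
    bh, bw = background.shape[:2]
    ph, pw = patch.shape[:2]
    y = int(rng.integers(0, bh - ph + 1))
    x = int(rng.integers(0, bw - pw + 1))
    out = background.copy()
    out[y:y + ph, x:x + pw] = patch
    return out, (y, x)
```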

Figure 2. Schematic of the experimental procedure used to validate the developed method: (a) the camouflaged image shown in the background on the screen; (b) the workflow of each trial.

During the evaluation process, a clipped camouflage image was placed randomly within a background scene. Each participant was required to locate the camouflage pattern. The "hit rate" and detection time were the performance indicators; task difficulty ratings were also obtained. The hit rate was defined as the percentage of trials on which the camouflaged target was correctly identified (indicated by clicking on it with the computer mouse), and the detection time was the time between the participant clicking the start button to begin the trial and subsequently clicking on the target (or quitting the trial). The task difficulty was rated on a seven-point rating scale at the end of each trial, ranging from 1 (easiest) to 7 (most difficult) [20,25]. Before the experiment, the experimental procedure was explained to all participants. The order of the presentation of stimuli was randomized to avoid order effects. When the participant hit the target or pressed the space bar to quit the trial, the task difficulty rating question appeared on the screen. The experiment had no time limit. There was a 5 min rest period in the middle of the trial to reduce the possibility of visual fatigue.

4. Results and Discussion

The Pearson correlation coefficient (PCC) was used to assess the correlations between the results of the objective methods (ICSI, GMSD, and the proposed model), and those of the CSI, UIQI, and the subjective HVS evaluation. Table 1 shows the PCCs of the different methods for hit rate, detection time, and task difficulty. To facilitate the data analysis, we used the absolute PCC values (ranging from 0 to 1), where higher values indicate stronger correlations and more accurate evaluation of camouflage effectiveness.

Table 1. Pearson correlation coefficients of the camouflage effectiveness evaluation methods for various performance parameters.

Index         Hit Rate (%)   Detection Time (Second)   Task Difficulty
UIQI          0.6240         0.6578                    0.6882
CSI           0.7586         0.8028                    0.8029
GMSD          0.6991         0.7498                    0.7952
ICSI          0.6807         0.6887                    0.7803
ICSI + GMSD   0.7821         0.8087                    0.8637

A p-value < 0.05 was taken to indicate statistical significance.

The PCCs for all three performance parameters (hit rate, detection time, and task difficulty) were higher for our proposed method than for the other methods. Hence, our method performed best in terms of evaluating camouflage effectiveness. This is confirmed by Figures 3–5, wherein "x" represents the experimental data for hit rate, detection time, and task difficulty, respectively. In the figures, the proposed method (ICSI + GMSD) shows more linear trends than the other methods. The experimental data for our method are close to the predicted data (blue lines in the figures), and mostly fall within the confidence intervals (the area between the upper and lower curves). Thus, the PCC results showed that the camouflage effectiveness assessment method proposed in this paper was the best-performing method, with results that were consistent with actual human visual perception.
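The correlation analysis itself is a one-liner with SciPy. The numbers below are made-up illustrative scores for six hypothetical patterns, not the paper's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-pattern values: a metric score for each of six
# camouflage patterns and the corresponding mean hit rates from a
# subjective test (higher score -> harder to find -> lower hit rate).
metric = np.array([0.21, 0.35, 0.42, 0.55, 0.63, 0.78])
hit_rate = np.array([92.0, 85.0, 80.0, 71.0, 66.0, 55.0])

r, p = pearsonr(metric, hit_rate)
abs_r = abs(r)              # absolute PCC, as reported in Table 1
significant = p < 0.05      # the significance criterion used in the paper
```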

140 140 140

120 120 120

100 100 100

80 80 80 Hit rate (%) Hit rate (%) Hit rate (%)

60 Predicted 60 Predicted 60 Predicted Lower Lower Lower Upper Upper Upper Hit rate Hit rate Hit rate 40 40 40 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 UIQI CSI ICSI+GMSD (a) (b) (c)

Figure 3. The correlations of the (a) camouflage similarity index (CSI), (b) universal image quality index (UIQI) and (c) image color similarity index and gradient magnitude similarity deviation (ICSI + GMSD) for hit rate.



Figure 4. The correlations of the (a) CSI, (b) UIQI and (c) ICSI + GMSD for detection time.


Figure 5. The correlations of the (a) CSI, (b) UIQI and (c) ICSI + GMSD for task difficulty.

5. Conclusions

In this paper, we presented a method for assessing camouflage effectiveness based on perceived color difference and gradient magnitude metrics. Color and texture differences were analyzed while taking both spatial and color perceptions of the HVS into consideration. Color and texture differences were discussed in detail, along with weighting calculations. The PCC data confirmed the superiority of our method over existing ones. In future research, we will apply the proposed method to real-world environments with natural lighting and extend it to encompass the infrared spectrum.

Author Contributions: X.B. conceived and designed the experimental setup and algorithms; X.B. and W.W. performed the experiments; X.B. and N.L. analyzed the experimental data; X.B. wrote the original draft; X.B. and N.L. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Acknowledgments: This work was supported by the National Key Research and Development Program of China (No. 2017YFB1002504) and the National Natural Science Foundation of China (No. 61975012).

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

This appendix provides the details of the experiment.

Appendix A.1. Background Collection

Three background images of the sea under different environments were used in the experiment, as shown in Figure A1. The sea has a different colour and wave texture in each photo. Figure A1a,b (background 1 and background 2) were taken at the Yellow Sea, China, using a Canon EOS 400D Mark II. In addition, an X-Rite ColorChecker Color Rendition Chart was used to create an ICC profile to be used in the experiment, making sure that the images were processed under the same color space and allowing the images to be displayed uniformly across all color devices. Figure A1c (background 3) was taken from the website of National Geographic [26]. The backgrounds were resized to 1920 × 1080 pixels, the same size as the display used in the experiment.





Figure A1. Backgrounds with different conditions: (a) calm; (b) sparkling; (c) rough.

Appendix A.2. Camouflage Collection

Six commonly used camouflage patterns were obtained from a Web search. Because the design concept of navy uniform camouflage is the same as that of military weapons and equipment, and seeks to maximize concealment, the six camouflage patterns were shaped into the form of a ship using Adobe Photoshop CS6, as shown and described in Table A1.
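The shaping step described above, cutting each camouflage texture into a ship silhouette, can be sketched in code. The authors did this manually in Adobe Photoshop CS6; the following is only an illustrative NumPy sketch using synthetic stand-in data (the texture and silhouette arrays are hypothetical), not the authors' procedure.

```python
import numpy as np

def apply_silhouette(camouflage: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Cut a camouflage texture into the shape given by a binary mask.

    camouflage: H x W x 3 uint8 texture.
    mask:       H x W boolean array (True = inside the ship silhouette).
    Returns an H x W x 4 RGBA image; pixels outside the mask are transparent.
    """
    h, w, _ = camouflage.shape
    out = np.zeros((h, w, 4), dtype=np.uint8)
    out[..., :3] = camouflage
    out[..., 3] = np.where(mask, 255, 0)  # alpha channel taken from the mask
    return out

# Synthetic stand-ins for the real assets:
rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(64, 128, 3), dtype=np.uint8)
ship = np.zeros((64, 128), dtype=bool)
ship[40:56, 10:118] = True   # hull
ship[20:40, 50:70] = True    # superstructure

rgba = apply_silhouette(texture, ship)
```

The resulting RGBA image can then be alpha-composited onto any of the three sea backgrounds at an arbitrary position.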

Table A1. Six camouflage types.

Type: Design Description [camouflage pattern images not reproduced]

MTP: Multi-Terrain Pattern (MTP) is used by British forces. The main variant of MTP is a four-color woodland pattern for use on webbing in all terrains. MTP was designed for use across the wide range of environments encountered.

Flecktarn: Flecktarn was designed in the mid-1970s by the West German Army. The leopard-like pattern took Europe by storm in the same way that Woodland did in North America.

MARPAT: MARPAT (Marine Pattern) was the United States Marine Corps' first digital camouflage and was implemented throughout the entire Marine forces in 2001. The color scheme seeks to update the US Woodland pattern into a pixelated micropattern.

Woodland: US Woodland was the Battle Dress Uniform pattern for almost all of the American armed forces from 1981 to 2006 and is still in use by almost a quarter of all militaries around the world. It is one of the most popular camouflages.

Multi-cam: Multi-cam was designed to blend into any type of terrain, weather, or lighting condition. It is the all-season tire of the camouflage world. Crye Precision developed this camouflage in 2003 for American troops in Afghanistan. This cutting-edge design is a favorite among more technical outfitters.

Type 07: Type 07 is a group of military uniforms introduced in 2007 and used by all branches of the People's Liberation Army (PLA) of the People's Republic of China (PRC).

Appendix A.3. Apparatus

Experimental stimuli were shown on a 27-inch EIZO FlexScan EV2450 with 1920 × 1080 pixels. Figure A2 illustrates the set-up of the experiment. The participants sat in front of the monitor, and the chair was adjusted to ensure a horizontal viewing angle. The camouflaged ships inserted in the backgrounds were displayed on the monitor. The distance between the user's eye and the screen was set to 55 cm, as shown in Figure A2. Each camouflaged ship randomly appeared three times in different locations in each background, resulting in a total of 54 stimuli. We designed and carried out the visual experiment using Visual Studio.
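The trial count and viewing geometry described above can be cross-checked in a short script. The original experiment was implemented in Visual Studio; this Python sketch (all names are hypothetical) only enumerates the 6 camouflages × 3 backgrounds × 3 repetitions = 54 stimuli and derives the approximate pixels per degree at 55 cm, the quantity that HVS-based spatial filtering such as S-CIELAB requires.

```python
import itertools
import math
import random

CAMOUFLAGES = ["MTP", "Flecktarn", "MARPAT", "Woodland", "Multi-cam", "Type 07"]
BACKGROUNDS = ["calm", "sparkling", "rough"]
REPEATS = 3                      # each ship appears three times per background
SCREEN_W, SCREEN_H = 1920, 1080  # EIZO EV2450 resolution

def build_stimuli(seed=0):
    """Enumerate all 6 x 3 x 3 = 54 trials, each at a random screen position."""
    rng = random.Random(seed)
    trials = []
    for camo, bg in itertools.product(CAMOUFLAGES, BACKGROUNDS):
        for _ in range(REPEATS):
            pos = (rng.randrange(SCREEN_W), rng.randrange(SCREEN_H))
            trials.append({"camouflage": camo, "background": bg, "pos": pos})
    rng.shuffle(trials)  # randomize presentation order
    return trials

# Viewing geometry: a 27-inch 16:9 panel is about 59.8 cm wide, so at a
# 55 cm viewing distance one degree of visual angle spans roughly 31 pixels.
SCREEN_WIDTH_CM = 59.8
VIEW_DIST_CM = 55.0
px_per_deg = (SCREEN_W / SCREEN_WIDTH_CM) * (
    2 * VIEW_DIST_CM * math.tan(math.radians(0.5)))

stimuli = build_stimuli()
```

Shuffling the full factorial list reproduces the randomized presentation order; the fixed seed is only for reproducibility of the sketch.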


Figure A2. The experimental set-up of the visual experiment.

References

1. Morin, S.A.; Shepherd, R.F.; Kwok, S.W.; Stokes, A.A.; Nemiroski, A.; Whitesides, G.M. Camouflage and display for soft machines. Science 2012, 337, 828–832. [CrossRef]
2. Song, W.T.; Zhu, Q.D.; Huang, T.; Liu, Y.; Wang, Y.T. Volumetric display based on multiple mini-projectors and a rotating screen. Opt. Eng. 2015, 54, 013103. [CrossRef]
3. Volonakis, T.N.; Matthews, O.E.; Liggins, E.; Baddeley, R.J.; Scott-Samuel, N.E.; Cuthill, I.C. Camouflage assessment: Machine and human. Comput. Ind. 2018, 99, 173–182. [CrossRef]
4. Copeland, A.C.; Trivedi, M.M. Computational models for search and discrimination. Opt. Eng. 2001, 40, 1885–1896. [CrossRef]
5. Troscianko, T.; Benton, C.P.; Lovell, P.G.; Tolhurst, D.J.; Pizlo, Z. Animal camouflage and visual perception. Philos. Trans. Roy. Soc. B 2009, 364, 449–461. [CrossRef]


6. Nyberg, S.; Bohman, L. Assessing camouflage methods using textural features. Opt. Eng. 2001, 40, 60–71. [CrossRef]
7. Song, J.; Liu, L.; Huang, W.; Li, Y.; Chen, X.; Zhang, Z. Target detection via HSV color model and edge gradient information in infrared and visible image sequences under complicated background. Opt. Quant. Electron. 2018, 50, 175. [CrossRef]
8. González, A.; Vázquez, D.; López, A.M.; Amores, J. On-board object detection: Multicue, multimodal, and multiview random forest of local experts. IEEE Trans. Cybern. 2016, 47, 1–11. [CrossRef]
9. Bosch, A.; Zisserman, A.; Muñoz, X. Scene classification using a hybrid generative/discriminative approach. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 712–727. [CrossRef]
10. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630. [CrossRef]
11. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, 11–14 May 2004; Pajdla, T., Matas, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 469–481. [CrossRef]
12. Lin, C.J.; Chang, C.C.; Liu, B.S. Developing and evaluating a target background similarity metric for camouflage detection. PLoS ONE 2014, 9, e87310. [CrossRef] [PubMed]
13. Patil, K.V.; Pawar, K.N. Method for improving camouflage image quality using texture analysis. Int. J. Comput. Appl. 2017, 180, 6–8. [CrossRef]
14. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2014, 23, 684–695. [CrossRef] [PubMed]
15. Gentile, R.S. Device-independent color in PostScript. In Proceedings of the Human Vision, Visual Processing, and Digital Display IV, IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, San Jose, CA, USA, 31 January–5 February 1993; Jan, P.A., Bernice, E.R., Eds.; Proc. SPIE 1913; pp. 419–432. [CrossRef]
16. McDonald, R. Acceptability and perceptibility decisions using the CMC color difference formula. Text. Chem. Color 1988, 20, 31–37. [CrossRef]
17. Luo, M.R.; Rigg, B. BFD (l:c) colour-difference formula. Part 1—Development of the formula. J. Soc. Dyers Colour. 1987, 103, 86–94. [CrossRef]
18. Luo, M.R.; Cui, G.H.; Rigg, B. The development of the CIE 2000 color difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350. [CrossRef]
19. Lin, C.J.; Chang, C.C.; Lee, Y.H. Developing a similarity index for static camouflaged target detection. Imaging Sci. J. 2013, 62, 337–341. [CrossRef]
20. Lin, C.J.; Prasetyo, Y.T.; Siswanto, N.D.; Jiang, B.C. Optimization of color design for military camouflage in CIELAB color space. Color Res. Appl. 2019, 44, 367–380. [CrossRef]
21. Johnson, G.M.; Fairchild, M.D. A top down description of S-CIELAB and CIEDE2000. Color Res. Appl. 2003, 28, 425–435. [CrossRef]
22. Kwak, Y.; MacDonald, L. Characterization of a desktop LCD projector. Displays 2000, 21, 179–194. [CrossRef]
23. Poirson, A.B.; Wandell, B.A. Appearance of colored patterns: Pattern-color separability. J. Opt. Soc. Am. 1993, 10, 2458–2470. [CrossRef] [PubMed]
24. Zou, Z.-H.; Yun, Y.; Sun, J.-N. Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. J. Environ. Sci. 2006, 18, 1020–1023. [CrossRef]
25. Pomplun, M.; Reingold, E.M.; Shen, J. Investigating the visual span in comparative search: The effects of task difficulty and divided attention. Cognition 2001, 81, B57–B67. [CrossRef]
26. Available online: https://www.nationalgeographic.org/encyclopedia/ocean/ (accessed on 27 February 2020).

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).