
Article

Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map

Birgitta Dresp-Langley 1,* and John M. Wandeto 2

1 ICube Lab UMR 7357 CNRS, Strasbourg University, 67085 Strasbourg, France
2 Department of Information Technology, Dedan Kimathi University of Technology, Box 12405-10109 Nyeri, Kenya; [email protected]
* Correspondence: [email protected]

Abstract: Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit a neural network metric in the output of a biologically inspired Self-Organizing Map, the Quantization Error (SOM-QE). Shape pairs with perfect geometric mirror symmetry but a non-homogenous appearance, caused by local variations in hue, saturation, or lightness within and/or across the shapes in a given pair, produce, as shown here, a longer choice response time (RT) for "yes" responses relative to symmetry. These data are consistently mirrored by the variations in the SOM-QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. Such capacity is tightly linked to the metric's proven selectivity to local contrast and color variations in large and highly complex image data.

Keywords: symmetry; shape; local color; self-organized visual map; quantization error; SOM-QE; choice response time; human decision; uncertainty

Citation: Dresp-Langley, B.; Wandeto, J.M. Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map. Symmetry 2021, 13, 299. https://doi.org/10.3390/sym13020299

Academic Editor: Janghoon Yang
Received: 17 January 2021; Accepted: 8 February 2021; Published: 10 February 2021
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Symmetry in biological and physical systems is a product of self-organization [1] driven by evolutionary processes and/or mechanical systems under constraints. It conveys a basic feature to living objects, from molecules to animal bodies, or to physical forces acting in synergy to create symmetrical structures [1–6]. In pattern formation, perfect symmetry is a regularity within a pattern, the two halves of which are mirror images of each other. In information processing, and in particular human information processing [7–11], symmetry is considered an important carrier of information, detected universally by humans from an early age on [12,13]. Human symmetry detection [14,15] in patterns or shapes involves visual and cognitive processes from lower to higher levels of functional organization [16–24]. Vertical mirror symmetry is a particularly salient form of visual symmetry [23–25], processed at early stages in human vision and producing greater or lesser detection reliability [23] depending on local features of the stimulus display, with greater or lesser stimulus certainty. Shape symmetry is a visual property that attracts attention [18] and determines perceived volume [19–22] and perceptual salience [26] of objects represented in the two-dimensional image plane. Aesthetic judgment and choice preference [27,28] are influenced by symmetry, justifying biologically inspired models of symmetry in humans [29] in light of the fact that symmetry is detected not only by primates but also by other species, such as insects, for example [30].


Symmetry may be exploited in pattern detection and classification by neural networks, which may have to learn multiple copies of the same object representation displayed in different orientations. Encoding symmetry as a prior shape in the network can, therefore, help avoid redundant computation where the network has to learn to detect the same pattern or shape in multiple orientations [31]. Symmetry-based feature extraction and/or representation [32] by neural networks using deep learning can, for example, help discover the most informative representations in large image databases with only a minor amount of preprocessing [33]. However, despite significant achievements of artificial intelligence in recognition and classification of well-reproducible patterns, the problem of uncertainty still requires additional attention, especially in ambiguous data. An artificial neural network that detects uncertainty states, where a human observer is in doubt about an image interpretation, has been described previously for the case of MagnetoEncephaloGraphic (MEG) image data with significant ambiguity [34]. In this study, we present an artificial neural network that detects uncertainty states in human observers with regard to shape symmetry. To this end, we exploit a neural network metric in the output of a biologically inspired Self-Organizing Map (SOM). The Quantization Error of the SOM (SOM-QE) [35–43] is a measure of output variance and quantifies the difference between neural network states at different stages of unsupervised image learning. In our previous studies, we demonstrated functional properties of the SOM-QE such as a sensitivity to the spatial extent, intensity, and sign or color of local image contrasts in unsupervised image classification by the neural network. The metric reliably detects the finest, clinically or functionally relevant variations in local image contrast contents [36–43], often invisible to the human eye [38,39,42,43].
Here it will be shown that the SOM-QE, as a neural network state metric, reliably captures, or correlates with, varying levels of human uncertainty in the detection of symmetry of shape pairs with varying local color information contents. Previous work using human two-alternative forced choice decision has shown that local color variations in two-dimensional pattern displays may significantly influence perceived relative distances [44–47]. In the present study, we use shapes that display perfect geometrical vertical mirror symmetry, as described in Section 2.1 below, where visual uncertainty about symmetry is introduced by systematic variations in color (hue) and/or saturation of local shape elements, leading to lesser or greater amounts of visual information content. The psychophysically determined choice response time, previously shown to directly reflect stimulus uncertainty in terms of biological or physiological system states [48,49], is exploited as a measure of symmetry uncertainty. Response time (RT) measurement is the psychophysical approach to what is referred to as "mental chronometry" [50–52], which uses measurements of the time between a stimulus onset and an immediate behavioral response to the stimulus. Hick's Law [53,54], in particular, established on the basis of experiments in traditional sensory psychophysics, was repeatedly confirmed in studies by others [48–52]. The law directly relates response time to the amount of information in the stimulus, and the amount of information in the stimulus to uncertainty, as a biologically grounded and potentially universal mechanism underlying sensation, perception, attention and, ultimately, decision [48–53]. In its most general form, Hick's Law postulates that, provided the error rate in the given psychophysical task is low, RT increases linearly with the amount of transmitted information (I) in a set of signals or, as here in this study, a visual image configuration.
The law extends to a direct relationship between choice RT and stimulus uncertainty (SU), as plotted graphically in Figure 1 for illustration, and predicts that, provided the response error rate is low, RT increases linearly with stimulus uncertainty (SU):

RT = a + b (SU)    (1)
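The linear relation in Equation (1) can be sketched in a few lines of code. The intercept a (baseline RT) and slope b below are arbitrary illustrative values; the study does not fit these coefficients here.

```python
# Minimal sketch of Equation (1): RT = a + b * SU.
# a and b are hypothetical coefficients chosen for illustration only.

def predicted_rt(su, a=0.3, b=0.15):
    """Predicted choice response time (in seconds) for stimulus uncertainty su."""
    return a + b * su

# RT grows linearly with stimulus uncertainty, as Hick's Law predicts.
rts = [predicted_rt(su) for su in (0, 1, 2, 3)]
assert all(later > earlier for earlier, later in zip(rts, rts[1:]))
```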


Figure 1. Hick's Law [53,54] postulates that, provided the error rate in the given psychophysical task is low, sensory system uncertainty (SU) increases linearly with the amount of transmitted information (I) in a set (top graph). The law presumes a direct relationship between choice RT and sensory system uncertainty (SU), where RT increases linearly with amount of transmitted information/stimulus uncertainty (bottom graph).

There is no evidence that RT in low-level visual decision tasks would be influenced by individual variables such as gender. Inter-individual differences have been found in RT tasks involving higher levels of semantic processing, as in lexical decision and similar higher-level cognitive decision tasks [55]. Biological and cognitive ageing is found to correlate with longer RT due to changes in neurotransmitter dynamics in the ageing brain [56]. This age effect on RT is, however, subject to a certain amount of functional plasticity and may be counteracted by training [57]. To avoid potential effects of age in the experiments here, we chose a study population of healthy young male individuals.

2. Materials and Methods

Visual system uncertainty associated with the symmetry of shape pairs was varied experimentally in a series of two-dimensional images showing shape pairs with perfect geometrical (vertical mirror) symmetry but varying amounts of local color information. To quantify human uncertainty in response to the variable amounts of local color information, images were presented in random order on a computer screen to observers who had to decide as quickly as possible whether the two shapes in a given image were symmetrical or not (yes/no procedure). The psychophysically measured choice response time was computed as a measure of uncertainty, following the rationale of Hick's Law explained in detail in the introduction above. The original images were submitted to SOM-QE analysis to test whether the algorithm reliably detects the different levels of human uncertainty reflected by the psychophysical response time variations.

2.1. Original Images

The original images, fed into the SOM one by one as explained further below in Section 2.4, were generated in Photoshop 12. They are made available in the Supplementary Materials Section under "S1"; copies thereof are shown, for illustration only, in Figure 2. All images are identically scaled. The image size is constant (2720 × 1446 pixels). All paired shape configurations in the images have perfect geometric mirror symmetry, with the same number of local shape elements on the left and on the right, i.e., a total number of 12 shape elements on each side. All local shape elements are of identical size, the perimeter of each single element, which is the sum of the lengths of the six sides, being constant at 1260 pixels, ensuring constant area size of the local elements. The local color parameters of shape elements in the images were selectively manipulated in Adobe RGB color space to introduce varying levels of visual uncertainty about the mirror symmetry of shape pairs in the different images. The corresponding physical variations in color, hue, saturation, lightness and R-G-B are given below in Table 1. The medium grey (R = 130, G = 130, B = 130) background is identical in all the images.

Figure 2. Copies of the test images, for illustration. Mirror symmetric shape pairs are displayed on a medium grey background. Visual symmetry uncertainty in the shape pairs was varied by giving shape elements variable amounts of color information, resulting in variations in appearance. The condition with the highest amount of locally different color information is MULTICOL2.


Table 1. Physical color parameters producing local variations in pattern appearance.

          Color     Hue   Saturation   Lightness   R-G-B
"Strong"  BLUE      240   100          50          0-0-255
          RED       0     100          50          255-0-0
          GREEN     120   100          50          0-255-0
          MAGENTA   300   100          50          255-0-255
          YELLOW    60    100          50
"Pale"    BLUE      180   95           50          10-250-250
          RED       0     100          87          255-190-190
          GREEN     120   100          87          190-255-190
          MAGENTA   300   25           87          255-190-255
          YELLOW    600   65           67          255-255-190
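As a consistency check on Table 1, the "strong" rows can be reproduced by converting the listed hue, saturation and lightness values to RGB with the standard HSL model (Python's colorsys, which takes components in [0, 1] and uses HLS argument order). The "pale" rows do not all follow the plain HSL formula exactly, presumably reflecting manual adjustment in Photoshop, so only the strong rows are checked in this sketch.

```python
# Cross-check of the "strong" rows in Table 1: standard HSL-to-RGB
# conversion reproduces the listed R-G-B triplets.
import colorsys

def hsl_to_rgb255(hue_deg, sat_pct, light_pct):
    """Convert hue (degrees), saturation and lightness (percent) to 0-255 RGB."""
    r, g, b = colorsys.hls_to_rgb(hue_deg / 360, light_pct / 100, sat_pct / 100)
    return round(r * 255), round(g * 255), round(b * 255)

assert hsl_to_rgb255(240, 100, 50) == (0, 0, 255)    # strong BLUE
assert hsl_to_rgb255(0, 100, 50) == (255, 0, 0)      # strong RED
assert hsl_to_rgb255(120, 100, 50) == (0, 255, 0)    # strong GREEN
assert hsl_to_rgb255(300, 100, 50) == (255, 0, 255)  # strong MAGENTA
```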

2.2. Experimental Display

The images were displayed for visual presentation on a high-resolution computer screen (EIZO COLOR EDGE CG 275W, 2560 × 1440-pixel resolution) connected to a DELL computer equipped with a high-performance graphics card (NVIDIA). Color and luminance calibration of the RGB channels of the monitor was performed using the appropriate Color Navigator self-calibration software, which is delivered with the screen and runs under Windows. As stated above, visual symmetry uncertainty in the shape pairs was varied by giving the local shape elements variable color appearance in terms of hue, lightness and saturation. We have four variations for each of the single-color shape pair configurations, labeled "RED" and "BLUE", and two variations for the multicolor shape pair configurations, labeled "MULTICOL". As explained in the introduction, we expect that multiple local variations in appearance across the shapes of a pair would be likely to produce higher levels of visual symmetry uncertainty in a human observer compared with single local or global variations.

2.3. Choice Response Time Test

In the test phase measuring human decision times, the 20 images were displayed in random order in two successively repeated test sessions per human observer. Tests were run on a workstation consisting of a computer screen (EIZO COLOR EDGE CG 275W, 2560 × 1440-pixel resolution) connected to a DELL computer equipped with a high-performance graphics card (NVIDIA). Fifteen healthy young individuals, chosen mainly from a population of undergraduate students, participated in the test phase. All participants had normal or corrected-to-normal visual acuity. In addition, the Ishihara plates [58] were used prior to individual testing to ensure that all of them also had normal color vision. The choice response tests were run in October and November 2019, in conformity with the Helsinki Declaration for scientific experiments on human individuals. Participants provided informed consent prior to testing; individuals' identities are not revealed. The test protocol adheres to standards of good procedure stated in the ethics regulations of the Centre National de la Recherche Scientifique (CNRS) relative to response data collection from healthy humans in non-invasive standard tasks, for which examination of the experimental protocol by a specific ethics committee is not mandatory. Each individual participant was placed in an adjustable chair at a viewing distance of about 80 cm from the computer screen. Individual positions were adjusted in order to ensure that the screen was at eye level. Tests were run in individual sessions and under mesopic viewing conditions. Each participant was adapted to the ambient lighting conditions for about five minutes. The instructions stated that images with two abstract patterns, one on the left and one on the right, would be shown on the screen in two separate sequences.
The task communicated to each participant was to: "decide as quickly as possible, and as soon as an image appears on the screen, whether or not the two patterns in the given image appear to be symmetrical". They had to press "1" for "yes", or "2" for "no" on the computer keyboard, the index and middle fingers of their dominant hand ready above the number keys to be able

to press a given key without any motor response delay. Each coded response choice was recorded and stored next to the choice response time in a labeled data column of an MS Excel file. The choice response time corresponds to the time between an image onset and the moment a response key is pressed. As soon as a response key was pressed, the current image disappeared from the screen; 900 ms later, the next image was displayed. Image presentations in random order, trial sequencing, and data coding and recording were computer controlled. The program is written in Python for Windows, and is freely available for download [59,60].
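The trial loop described above can be sketched as follows. This is an illustrative stand-in, not the authors' program (which is available at refs [59,60]); the display and keyboard calls are abstracted behind hypothetical callables passed in as arguments.

```python
# Sketch of one test session: present images in random order, record the
# key pressed and the time between image onset and keypress, blank the
# screen, and wait 900 ms before the next image.
import random
import time

def run_session(images, show_image, wait_keypress, inter_trial_s=0.9):
    """Present images in random order; return (image_id, key, rt_s) records."""
    records = []
    order = random.sample(images, len(images))  # random order, no repeats
    for image_id in order:
        show_image(image_id)                    # image onset
        t0 = time.perf_counter()
        key = wait_keypress()                   # blocks until "1" (yes) or "2" (no)
        rt = time.perf_counter() - t0           # choice response time
        show_image(None)                        # current image disappears
        records.append((image_id, key, rt))
        time.sleep(inter_trial_s)               # 900 ms blank interval
    return records
```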

2.4. Neural Network (SOM) Analysis

The conceptual background and method of neural network analysis follow the same principle and protocol already described in our latest previous work on biological cell imaging data analysis by SOM [37,43]. It is described here again in full detail, for the benefit of the reader. The Self-Organizing Map is an artificial neural network (ANN) with a functional architecture that formally corresponds to the nonlinear, ordered, smooth mapping of high-dimensional input data onto the elements of a regular, low-dimensional array [61]. It is assumed that the set of input variables can be defined as a real vector x of dimension n. A parametric real vector mi of dimension n is associated with each element in the SOM. Vector mi is a model, and the SOM is therefore an array of models. Assuming a general distance measure between x and mi given by d(x, mi), the map of an input vector x on the SOM array is defined as the array element mc that best matches x, yielding the smallest d(x, mi). During the learning process, an input vector x is compared with all the mi to identify mc. The Euclidean distances ||x − mi|| define mc. Models topographically close in the map, up to a certain geometric distance indicated by hci, will activate each other to learn from their joint input x. This results in a local relaxation or smoothing effect on the models in this neighborhood, which in continuous learning leads to global ordering. Learning is represented by

mi(t + 1) = mi(t) + α(t) hci(t) [x(t) − mi(t)]    (2)

where t = 1, 2, 3, etc. is an integer, the discrete-time coordinate; hci(t) denotes the neighborhood function, a smoothing kernel defined across the map points which converges towards zero with time; and α(t) is the learning rate, which also converges towards zero with time and affects the amount of learning in each model. At the end of a winner-take-all learning process in the SOM, each image input vector x is matched to its best-matching model within the map, mc. The difference between x and mc, ||x − mc||, is a measure indicating how close a final SOM value is to the original input value; it is reflected by the quantization error, QE. The average QE of all x (X) within a given image is determined by

QE = (1/N) ∑_{i=1}^{N} ||Xi − mci||    (3)

where N is the number of input vectors x in the image. The final weights of the SOM are defined in terms of a three-dimensional output vector space representing the R, G, and B channels. The magnitude, as well as the direction, of change in any of these from one image to another is reliably reflected by changes in the QE. SOM training consisted of 1000 iterations. The SOM was a two-dimensional rectangular map of 4 by 4 nodes, hence capable of creating 16 models, or domains, of observation. The spatial locations, or coordinates, of each model at different locations on the map exhibit specific characteristics, each one different from all the others. When a new input signal is fed into the map, all the models compete; the winner will be the model whose features most closely resemble the features of the input signal. The input signal will be grouped accordingly into one of the models. Each model, or domain, is an independent decoder of the same input, independently interpreting the information contained in the input, which is represented as a mathematical vector of the same form as that of the model. Therefore, the presence or absence of an active response

at a specific map location, rather than the exact input–output signal transformation or magnitude of the response, generates an interpretation of the input. To define initial values for map size, a trial-and-error procedure was implemented. Map sizes larger than 4 by 4 produced observations where some models ended up empty, meaning that these models did not match any input by the end of the training. As a consequence, 16 models were sufficient to represent all local data in the image. Neighborhood distance and learning rate were assigned initial values of 1.2 and 0.2, respectively. These values were obtained through the trial-and-error procedure, after testing for the quality of the "first guess", directly determined by the value of the resulting quantization error. The smaller the latter, the better the "first guess". It is worthwhile pointing out that the models were initialized by randomly picking training image vectors. This allows the SOM to work on the original data with no prior assumptions about levels of organization within the data. This, however, requires starting with a wider neighborhood function and a bigger learning-rate factor than in procedures where initial values for model vectors may be pre-selected [62]. Our approach is economical in terms of computation time, which constitutes one of its major advantages. The 20 images here were fed one by one into a single SOM. The training image for the SOM prior to further input can be any of these. Since each of the 20 input images is fed into the SOM one at a time, it gets subjected to all the 16 models in the SOM. These 16 models compete, and the winner ends up as the best-matching model for the input vector. After unsupervised winner-takes-all SOM learning, the SOM-QE output is written into a data file. Further steps generate output plots of SOM-QE, where each output value is associated with the corresponding input image.
The output data are then plotted in increasing/decreasing order of SOM-QE magnitude as a function of the corresponding image variations (automatic image classification). The computation time of the SOM analysis was about two seconds per image. The code used for implementing the SOM-QE is available online at: https://www.researchgate.net/publication/330500541_Self-organizing_map-based_quantization_error_from_images (accessed on 8 January 2021).
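The training rule in Equation (2) and the SOM-QE measure in Equation (3) can be sketched end to end on toy data. The sketch below uses a 4-by-4 map, model initialization by randomly picked training vectors, and initial neighborhood and learning-rate values of 1.2 and 0.2 as described above; the Gaussian neighborhood kernel, the linear decay schedules, and the toy pixel data are illustrative assumptions, not the authors' exact implementation (which is linked above).

```python
# Sketch of SOM training (Equation (2)) and SOM-QE scoring (Equation (3))
# on toy RGB "pixel" data standing in for the study images.
import math
import random

def train_som(data, rows=4, cols=4, iters=1000, alpha0=0.2, sigma0=1.2, seed=0):
    rng = random.Random(seed)
    coords = [(r, c) for r in range(rows) for c in range(cols)]
    models = [list(rng.choice(data)) for _ in coords]  # init from the data itself
    for t in range(iters):
        frac = t / iters
        alpha = alpha0 * (1 - frac)            # learning rate decays to zero
        sigma = sigma0 * (1 - frac) + 0.01     # neighborhood radius shrinks
        x = rng.choice(data)
        # Best-matching unit: the model with smallest distance to x.
        bmu = min(range(len(models)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(x, models[i])))
        for i, m in enumerate(models):
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[bmu]))
            h = math.exp(-d2 / (2 * sigma ** 2))   # neighborhood kernel h_ci
            for j in range(len(m)):                # Equation (2) update
                m[j] += alpha * h * (x[j] - m[j])
    return models

def quantization_error(pixels, models):
    """Equation (3): mean distance between each input and its best model."""
    return sum(min(math.sqrt(sum((a - b) ** 2 for a, b in zip(x, m)))
                   for m in models) for x in pixels) / len(pixels)

# Toy "images": one uniform grey image and one with extra local colors.
uniform = [(0.5, 0.5, 0.5)] * 40
varied = [(0.5, 0.5, 0.5)] * 20 + [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)] * 10
som = train_som(uniform + varied)
```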

3. Results

With 20 images per individual session, two successive sessions per participant, and 15 participants, a total of 600 choice response times were recorded. A shape pair corresponding to a single factor level relative to shape appearance and color was presented twice in a session of 20 images, to allow for left- and right-hand side presentation of a given appearance factor level in the shape pairs. The labels of the individual factor level associated with each shape pair are given in Figure 2. With two repeated sessions per participant, we have four individual response times for each single factor level. All data analyses relative to choice response times were run on the 15 average response times for each factor level from the 15 participants. These data are made available here in S2 of the Supplementary Materials Section. Since all shape pairs in all the images were mirror-symmetric, "no" responses occurred only very rarely in the experiment (17 of the 600 recorded choice responses signaled "no", which corresponds to less than three percent of the total number of observations), as would be expected. In these rare cases, only the choice response times corresponding to a "yes" among the four responses recorded for a given factor level were used for computing the average. In terms of operational factor levels in the Cartesian experimental design plan, we have four levels (1,2,3,4) of a factor termed "Appearance" (A4) associated with the colors BLUE and RED, and two levels (1,2) of "Appearance" (A2) associated with the multiple-color case termed MULTICOL here. The three color conditions, blue, red, and multicolor, describe three operational levels of a factor termed "Color" (C3) herein. In a first step, two separate two-way analyses of variance (ANOVA) were run to test for significant effects of the factors "Appearance" and "Color". The first ANOVA compares between four levels of "Appearance" (1,2,3,4) in two levels (BLUE, RED) of the "Color" factor.
The second ANOVA compares between two levels of "Appearance" in three levels (BLUE, RED, MULTICOL) of the "Color" factor.
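The averaging rule described above, discarding the rare "no" responses and averaging the remaining "yes" response times for a given factor level, can be sketched as follows; the trial data shown are hypothetical.

```python
# Sketch of per-factor-level averaging: keep only the response times of
# "yes" responses among the four recorded per participant, then average.
def mean_yes_rt(trials):
    """trials: list of (key, rt_s) pairs; key "1" = yes, "2" = no."""
    yes_rts = [rt for key, rt in trials if key == "1"]
    return sum(yes_rts) / len(yes_rts)

# Hypothetical four responses for one factor level; the single "no" is excluded.
trials = [("1", 0.52), ("1", 0.49), ("2", 0.90), ("1", 0.55)]
assert abs(mean_yes_rt(trials) - (0.52 + 0.49 + 0.55) / 3) < 1e-9
```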

3.1. Two-Way ANOVA on Choice Response Times

3.1.1. A4 × C2 × 15

This analysis corresponds to a Cartesian analysis plan A4 × C2 × 15, with four levels (1,2,3,4) of the "Appearance" factor and two levels (BLUE, RED) of the "Color" factor on the 15 individual average response times (RT), yielding a total number (N − 1) of 119 degrees of freedom (DF). The results from this analysis are shown below in the top part of Table 2.

Table 2. Results from the two-way analyses of variance with factor-specific degrees of freedom (DF), the corresponding F statistics, and their associated probability limits (p).

                 Factor        DF   F        p
1st 2-way ANOVA  APPEARANCE    3    68.42    <0.001
                 COLOR         1    0.012    <0.914 NS
                 INTERACTION   3    5.37     <0.01
2nd 2-way ANOVA  APPEARANCE    1    8.20     <0.01
                 COLOR         2    123.56   <0.001
                 INTERACTION   2    0.564    <0.57 NS
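The DF column of Table 2 follows from the design plans alone. A small bookkeeping sketch, assuming the standard two-way factorial partition of degrees of freedom:

```python
# With a levels of "Appearance", c levels of "Color", and n mean RTs per
# cell, total DF = a*c*n - 1, split into factor, interaction, and residual
# terms. This reproduces the DF column of Table 2.
def two_way_df(a, c, n):
    total = a * c * n - 1
    df = {"A": a - 1, "C": c - 1, "AxC": (a - 1) * (c - 1)}
    df["residual"] = total - sum(df.values())
    df["total"] = total
    return df

df1 = two_way_df(4, 2, 15)  # first ANOVA: A4 x C2 x 15
assert df1["total"] == 119 and df1["A"] == 3 and df1["C"] == 1 and df1["AxC"] == 3

df2 = two_way_df(2, 3, 15)  # second ANOVA: A2 x C3 x 15
assert df2["total"] == 89 and df2["A"] == 1 and df2["C"] == 2 and df2["AxC"] == 2
```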

The results of this analysis signal a statistically significant effect of the "Appearance" factor on the average RT and a statistically significant interaction between the "Appearance" and the "Color" factor for the cases BLUE and RED. A statistically significant effect of "Color" independent of "Appearance" is not observed, leading to the conclusion that either of these two colors produced similar effects on RT relative to shape symmetry when their appearance was modified. This holds with the exception of the statistical comparison between BLUE3 and BLUE4, which is the only one that is not significant here, as revealed by the post-hoc comparison (Holm–Sidak) between these two factor levels (t(1,1) = 0.32, p < 0.75 NS). The effects can be appreciated further by looking at the effect sizes for the different conditions, which are visualized in Figures 3 and 4.

Figure 3. Statistically significant differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance levels 1 and 2. The corresponding Self-Organizing Map Quantization Error (SOM-QE) values (bottom) from the neural network analysis are plotted in the graph below.


Figure 4. Statistically significant differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance levels 1, 3 and 4. The corresponding SOM-QE values (bottom) from the neural network analysis are plotted in the graph below. The difference in average RT between BLUE3 and BLUE4 is the only one here that is not statistically significant (see “Section 3.1.1.”).

3.1.2. A2 × C3 × 15

This analysis corresponds to a Cartesian analysis plan A2 × C3 × 15, with two levels (1,2) of the "Appearance" factor and three levels (BLUE, RED, MULTICOL) of the "Color" factor on the 15 individual average response times (RT), yielding a total number (N − 1) of 89 degrees of freedom. The results from this analysis are shown above in the bottom part of Table 2. They signal a statistically significant effect of the "Appearance" factor on the average RT and a statistically significant effect of the "Color" factor. A statistically significant interaction is not observed here. Statistical post-hoc comparisons (Holm–Sidak) reveal statistically significant differences between the factor levels MULTICOL and RED (t(1,1) = 14.44, p < 0.001) and between the factor levels MULTICOL and BLUE (t(1,1) = 12.60, p < 0.001), but not, as could be expected from the previous ANOVA, between the factor levels RED and BLUE (t(1,1) = 1.84, p < 0.09 NS). The "Color" effect here is reflected by the observation that shape pairs with multiple color elements yield significantly longer symmetry-related RT compared with shape pairs composed of either of the two single colors here. This effect can be appreciated further by looking at the effect sizes for the different comparisons, which are visualized in Figure 5 below.
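The Holm–Sidak step-down procedure used for these post-hoc comparisons can be sketched as follows; the p-values in the example are placeholders, not the study's values.

```python
# Holm-Sidak step-down test: sort p-values ascending and compare the i-th
# smallest against 1 - (1 - alpha)^(1/(m - i)); stop at the first
# non-rejection.
def holm_sidak(pvalues, alpha=0.05):
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    reject = [False] * m
    for rank, i in enumerate(order):
        threshold = 1 - (1 - alpha) ** (1 / (m - rank))
        if pvalues[i] > threshold:
            break  # this and all larger p-values are not rejected
        reject[i] = True
    return reject

# Three hypothetical pairwise comparisons: two clearly significant, one not.
assert holm_sidak([0.0004, 0.0006, 0.09]) == [True, True, False]
```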


Figure 5. Differences in average RT (top) for the comparison between BLUE and RED shape pairs with appearance level 1 and the multicolored MULTICOL shape pairs with appearance levels 1 and 2. The differences between BLUE and RED shape pairs of any appearance level are not statistically significant (see “Section 3.1.1”). The differences between image conditions BLUE1 or RED1 and MULTICOL1 and between BLUE2 or RED2 and MULTICOL2 are highly significant, as is the difference between MULTICOL1 and MULTICOL2 (see “Section 3.1.2”). The corresponding SOM-QE values (bottom) from the neural network analysis are plotted in the graph below.

3.2. RT Effect Sizes

The effect sizes, in terms of differences between means, that correspond to significant statistical differences signaled by two-way ANOVA were plotted graphically, and are shown in the top graph in Figure 2, and in the top graphs in Figures 3 and 4, along with the corresponding shape pairs that produced the results. The graphs show clearly that shape pairs with non-homogenous appearance, i.e., local variations in hue, saturation, or lightness within and/or across shapes in a given pair, produce longer choice RT for “yes” responses relative to shape symmetry.

3.3. SOM-QE Effect Sizes

The SOM-QE metrics from the unsupervised neural network analysis of the test images were also plotted graphically and are displayed in the bottom graphs of Figures 2–4. The graphs show clearly that the magnitudes of the SOM-QE from the neural network analysis consistently mirror the observed magnitudes of average choice RT for “yes” responses relative to shape symmetry produced by shape pairs with varying appearance in terms of local variations in hue, saturation, or lightness within and/or across shapes in a given pair.
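For readers unfamiliar with the metric, a minimal sketch of how a quantization error is read out of a self-organizing map: a small 1-D SOM is trained on the pixel vectors of a reference image, and the SOM-QE of any test image is then the mean distance between its pixels and their best-matching map units. The map size, learning schedule, and data below are illustrative assumptions, not the exact SOM-QE configuration used in this study.

```python
import numpy as np

def train_som(data, n_units=16, epochs=20, lr0=0.5, sigma0=4.0, seed=0):
    """Train a minimal 1-D SOM on row vectors (e.g. RGB pixel values in [0, 1])."""
    rng = np.random.default_rng(seed)
    weights = data[rng.integers(0, len(data), n_units)].astype(float)
    grid = np.arange(n_units)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = t / n_steps
            lr, sigma = lr0 * (1.0 - frac), max(sigma0 * (1.0 - frac), 0.5)
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)  # pull units toward x
            t += 1
    return weights

def som_qe(weights, data):
    """SOM quantization error: mean distance of inputs to their best-matching unit."""
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```

Trained on a homogeneous reference image, the map yields a near-zero QE for that image; introducing local variations in hue, saturation, or lightness raises the QE in proportion to the number of deviating pixels, which is the sensitivity exploited here.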

3.4. Linear Regression Analyses

The results from the previous analyses show that the average choice RT for “yes” responses relative to shape symmetry, produced by shape pairs with varying appearance in terms of local variations in hue, saturation, or lightness within and/or across shapes in a given pair, produces significant variations consistent with variations in decisional uncertainty about the mirror symmetry of the shapes in a pair. The higher the variability in hue, saturation, or lightness of single shape elements, the longer the RT for “yes”, hence the higher the stimulus uncertainty for “symmetry”. Indeed, the longest choice RT for “yes” responses relative to shape symmetry is produced by the shape pairs MULTICOL1 and MULTICOL2. To bring to the fore the tight link between variations in RT reflecting different levels of human uncertainty and the variations in the SOM-QE metric from the neural network analyses, we performed a linear regression analysis on the RT data for shape pairs with varying levels of appearance in BLUE, RED and MULTICOL shapes, and a linear regression analysis on the SOM-QE data for exactly the same shape pairs. The results from these analyses are plotted below in Figure 6. The linear regression coefficients (R²) are provided in the graph for each analysis. It is shown that RT for “yes” responses relative to shape symmetry and the SOM-QE as a function of the same shape variations follow highly similar and significant linear trends.
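Such a linear-trend analysis can be reproduced in outline with an ordinary least-squares fit and its R²; the data points in the example are hypothetical placeholders for the per-condition means, not the values plotted in Figure 6:

```python
import numpy as np

def linear_fit(x, y):
    """Least-squares line y = a + b*x and its coefficient of determination R^2."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    residuals = y - (a + b * x)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    return a, b, r2

# Hypothetical condition means: appearance level vs. average "yes" RT (seconds)
levels = [1, 2, 3, 4]
mean_rt = [0.62, 0.68, 0.75, 0.80]
a, b, r2 = linear_fit(levels, mean_rt)  # b > 0: RT grows with appearance level
```

The same function applied to the SOM-QE values of the same image conditions yields the second regression line; parallel positive slopes with high R² express the mirroring described above.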

Figure 6. The tight link between variations in RT reflecting different levels of human uncertainty and the variations in the SOM-QE metric from the neural network analyses is brought to the fore here under the light of linear regression analysis on the RT data for shape pairs with varying levels of appearance in BLUE, RED and MULTICOL shapes, and linear regression analysis on the SOM-QE data for exactly the same shape pairs.

4. Discussion

It is shown that mirror symmetric shape pairs with variable appearance caused by local variations in color information within and/or across shapes of a pair produce longer choice RT for “yes” responses relative to the shape symmetry. The variations in average choice RT for “yes” are consistent with variations in visual system uncertainty for symmetry, made fully operational here by the variations in amount or diversity of local color information (“Appearance”) in the display. Consistently, the display with the highest number of local variations, MULTICOL2, produces the longest choice RT.
The higher the amount of variable information, the longer the RT for “yes” engendered by the stimulus uncertainty. Although color per se may be deemed irrelevant to the given task, color singletons, like the ones manipulated in this experiment to introduce variations in uncertainty, can act as powerful distractors in a variety of psychophysical choice tasks [63–65]. This is accounted for by models where the attentional weight of any object may be conceived as the product of bottom-up (visual) and top-down (expectational) components [63]. The higher the number of different color singletons in a given paired configuration, the longer the choice response time for “symmetry”, in consistency with the most general variant of Hick’s Law [53]. As also shown, the variations in RT are consistently mirrored by the variations in the SOM-QE from the unsupervised neural network analysis of the same stimulus images. This provides further data showing that artificial neural networks are capable of detecting human uncertainty in perceptual judgment tasks [34].

The capability of the SOM-QE to capture such uncertainty in human choice responses to the symmetry of shapes with local variations in color parameters is tightly linked to the proven selectivity of this neural network metric to local contrast and color variations in a large variety of complex image data [36–43]. Here, the metric is revealed as a measure of both variance in the image input data and uncertainty in specific human decisions in response to such data. The neural network metric captures the effects of local color contrast on symmetry saliency in cases where pure shape geometry signals perfect mirror symmetry. This unambiguously shows that visual parameters beyond stimulus geometry [66–69] influence what has previously been termed the “symmetry of things in a thing”.
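Hick’s Law, in its most general variant, predicts choice RT as a logarithmic function of the number of equiprobable alternatives; a sketch with hypothetical intercept and slope constants, not values fitted to the RT data reported here:

```python
import math

def hick_rt(n_alternatives, a=0.3, b=0.15):
    """Hick-Hyman law: RT = a + b * log2(n + 1), in seconds.

    The +1 term accounts for response uncertainty (the option of not
    responding); a (base time) and b (rate per bit of stimulus
    information) are hypothetical illustration constants.
    """
    return a + b * math.log2(n_alternatives + 1)

# RT grows with the number of distinct color singletons in a display:
predicted = [hick_rt(n) for n in (1, 3, 7)]  # monotonically increasing
```

Under this reading, each additional distinct color singleton adds stimulus information, so displays such as MULTICOL2 sit further along the logarithmic curve than single-color pairs.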
Such local, non-geometrically determined effects on perceived shape symmetry have potentially important implications for image-guided human precision tasks [70,71], now more and more often assisted by neural network-driven image analysis. From a general functional viewpoint, local-color-based visual system uncertainty for symmetry delaying conscious choice response times in humans is consistent with current theory invoking interactions between low-level visual and higher-level cognitive mechanisms in perceptual decisions relative to symmetry [72,73]. The fully conscious detection of symmetry in choice response tasks most likely involves information processing through brain networks with neurons that display large receptive field areas and a massive amount of lateral connectivity [74]. Other machine learning approaches have exploited low-level and global features of natural images or scenes to train artificial symmetry detectors on the basis of symmetry-axis ground-truth datasets [75,76], demonstrating the importance of local features such as color in boosting symmetry detection learning by machines compared with gray-scale image data. In human perception, however, the attentional weight of visual information enabling symmetry detection depends not only on the contrast of local features to their local surroundings (saliency), but also on the importance of the features with respect to the given task (relevance). This study deals with human perceptual system uncertainty when geometrically defined local (feature) and global (shape) symmetry rules are satisfied. Other types of symmetry uncertainty detection, not addressed in this study, are relevant in image processing. 
For example, feature selection methods such as uncertainty class–feature association map-based selection [77] have proven to be highly useful preprocessing approaches for eliminating irrelevant and redundant features from complex image data such as DNA microarray data [78,79], where the number of dimensions increases steadily and rapidly. In such feature selection approaches, which are generally applied to image clusters, the Symmetric Uncertainty Principle [77] is exploited to achieve dimensionality reduction for optimal classification performance. The SOM used in this study operates on the basis of the assumption that all data in the image (or a cluster thereof) are relevant; the dimensionality of the input to be mapped is predefined and already minimized.
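The Symmetric Uncertainty measure exploited in such feature selection methods is the mutual information of two discrete variables normalized by the sum of their entropies; a minimal sketch for discrete features, with toy feature/class vectors that are purely illustrative:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), ranging from 0 to 1."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0.0:
        return 0.0  # both variables are constant
    mutual_info = hx + hy - entropy(list(zip(x, y)))
    return 2.0 * mutual_info / (hx + hy)

# A feature identical to the class labels is maximally relevant (SU = 1);
# a feature statistically independent of them is uninformative (SU = 0).
su_relevant = symmetric_uncertainty([0, 0, 1, 1], [0, 0, 1, 1])
su_irrelevant = symmetric_uncertainty([0, 0, 1, 1], [0, 1, 0, 1])
```

Features are then ranked by their SU with the class variable, and low-SU features are dropped to achieve the dimensionality reduction described above.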

5. Conclusions

While symmetry detection by machines and the importance of colored features for such purposes is an increasingly popular field of investigation, questions relating to image properties that may increase system uncertainty for detecting or responding to symmetry, especially when the latter is geometrically perfect, are less often investigated. This work shows a clear study case where an increasing amount of local color information delays the human visual system response to symmetry, in consistency with predictions of Hick’s Law [53,54]. This system uncertainty is reliably captured by the output metric of a biologically inspired artificial neural network, the SOM, which possesses self-organizing functional properties akin to those of the human sensory system, within the limitations of the study scope set here.

Supplementary Materials: The following are available online at https://www.mdpi.com/2073-8994/13/2/299/s1: Folder S1: OriginalImagesFedIntoSOM; Table S1: AverageResponseTimeData15Subjects.xls.

Author Contributions: Conceptualization, B.D.-L.; methodology, B.D.-L., J.M.W.; software, J.M.W.; validation, B.D.-L., J.M.W.; formal analysis, B.D.-L., J.M.W.; investigation, B.D.-L., J.M.W.; resources, B.D.-L.; data curation, B.D.-L., J.M.W.; writing—original draft preparation, B.D.-L., J.M.W.; writing—review and editing, B.D.-L., J.M.W.; visualization, B.D.-L. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki. All the individuals provided informed consent to participate. Their identity is not revealed. The test procedure fully adheres to rules and regulations set by the ethics board of the corresponding author’s host institution (CNRS) for response data collection from healthy human individuals in non-invasive psychophysical choice response tasks, for which examination of the experimental protocol by a specific ethics committee is not mandatory. Ethical review and approval were, as a consequence, waived for this study.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: All data exploited here are made available in this publication.

Acknowledgments: Material support from the CNRS is gratefully acknowledged.

Conflicts of Interest: The authors declare no conflict of interest.

References 1. Schweisguth, F.; Corson, F. Self-Organization in Pattern Formation. Dev. Cell 2019, 49, 659–677. [CrossRef][PubMed] 2. Carroll, S.B. Chance and necessity: The evolution of morphological complexity and diversity. Nature 2001, 409, 1102–1109. [CrossRef][PubMed] 3. García-Bellido, A. Symmetries throughout organic evolution. Proc. Natl. Acad. Sci. USA 1996, 93, 14229–14232. [CrossRef] 4. Groves, J.T. The physical chemistry of membrane curvature. Nat. Chem. Biol. 2009, 5, 783–784. [CrossRef] 5. Hatzakis, N.S.; Bhatia, V.K.; Larsen, J.; Madsen, K.L.; Bolinger, P.Y.; Kunding, A.H.; Castillo, J.; Gether, U.; Hedegård, P.; Stamou, D. How curved membranes recruit amphipathic helices and protein anchoring motifs. Nat. Chem. Biol. 2009, 5, 835–841. [CrossRef] [PubMed] 6. Holló, G. Demystification of animal symmetry: Symmetry is a response to mechanical forces. Biol. Direct 2017, 12, 11. [CrossRef] 7. Mach, E. On Symmetry. In Popular Scientific Lectures; Open Court Publishing: Lasalle, IL, USA, 1893. 8. Arnheim, R. Visual Thinking, 1969; University of California Press: Oakland, CA, USA, 2004. 9. Deregowski, J.B. Symmetry, Gestalt and information theory. Q. J. Exp. Psychol. 1971, 23, 381–385. [CrossRef] 10. Eisenman, R. Complexity–simplicity: I. Preference for symmetry and rejection of complexity. Psychon. Sci. 1967, 8, 169–170. [CrossRef] 11. Eisenman, R.; Rappaport, J. Complexity preference and semantic differential ratings of complexity-simplicity and symmetry- asymmetry. Psychon. Sci. 1967, 7, 147–148. [CrossRef] 12. Deregowski, J.B. The role of symmetry in pattern reproduction by Zambian children. J. Cross Cult. Psychol. 1972, 3, 303–307. [CrossRef] 13. Amir, O.; Biederman, I.; Hayworth, K.J. Sensitivity to non-accidental properties across various shape dimensions. Vis. Res. 2012, 62, 35–43. [CrossRef] 14. Bahnsen, P. Eine Untersuchung über Symmetrie und Asymmetrie bei visuellen Wahrnehmungen. Z. Für Psychol. 1928, 108, 129–154. 15. Wagemans, J. 
Characteristics and models of human symmetry detection. Trends Cogn. Sci. 1997, 9, 346–352. [CrossRef]

16. Sweeny, T.D.; Grabowecky, M.; Kim, Y.J.; Suzuki, S. Internal curvature signal and noise in low- and high-level vision. J. Neurophysiol. 2011, 105, 1236–1257. [CrossRef][PubMed] 17. Wilson, H.R.; Wilkinson, F. Symmetry perception: A novel approach for biological shapes. Vis. Res. 2002, 42, 589–597. [CrossRef] 18. Baylis, G.C.; Driver, J. Perception of symmetry and repetition within and across visual shapes: Part-descriptions and object-based attention. Vis. Cognit. 2001, 8, 163–196. [CrossRef] 19. Michaux, A.; Kumar, V.; Jayadevan, V.; Delp, E.; Pizlo, Z. Binocular 3D Object Recovery Using a Symmetry Prior. Symmetry 2017, 9, 64. [CrossRef] 20. Jayadevan, V.; Sawada, T.; Delp, E.; Pizlo, Z. Perception of 3D Symmetrical and Nearly Symmetrical Shapes. Symmetry 2018, 10, 344. [CrossRef] 21. Li, Y.; Sawada, T.; Shi, Y.; Steinman, R.M.; Pizlo, Z. Symmetry is the sine qua non of shape. In Shape Perception in Human and Computer Vision; Dickinson, S., Pizlo, Z., Eds.; Springer: London, UK, 2013; pp. 21–40. 22. Pizlo, Z.; Sawada, T.; Li, Y.; Kropatsch, W.G.; Steinman, R.M. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume: A mini-review. Vis. Res. 2010, 50, 1–11. [CrossRef] 23. Barlow, H.B.; Reeves, B.C. The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vis. Res. 1979, 19, 783–793. [CrossRef] 24. Barrett, B.T.; Whitaker, D.; McGraw, P.V.; Herbert, A.M. Discriminating mirror symmetry in foveal and extra-foveal vision. Vis. Res. 1999, 39, 3737–3744. [CrossRef] 25. Machilsen, B.; Pauwels, M.; Wagemans, J. The role of vertical mirror symmetry in visual shape perception. J. Vis. 2009, 9, 11. [CrossRef][PubMed] 26. Dresp-Langley, B. Bilateral Symmetry Strengthens the Perceptual Salience of Figure against Ground. Symmetry 2019, 11, 225. [CrossRef] 27. Dresp-Langley, B. Affine Geometry, Visual Sensation, and Preference for Symmetry of Things in a Thing. Symmetry 2016, 8, 127.
[CrossRef] 28. Sabatelli, H.; Lawandow, A.; Kopra, A.R. Asymmetry, symmetry and beauty. Symmetry 2010, 2, 1591–1624. [CrossRef] 29. Poirier, F.J.A.M.; Wilson, H.R. A biologically plausible model of human shape symmetry perception. J. Vis. 2010, 10, 1–16. [CrossRef][PubMed] 30. Giurfa, M.; Eichmann, B.; Menzel, R. Symmetry perception in an insect. Nature 1996, 382, 458–461. [CrossRef] 31. Krippendorf, S.; Syvaeri, M. Detecting symmetries with neural networks. Mach. Learn. Sci. Technol. 2021, 2, 015010. [CrossRef] 32. Toureau, V.; Bibiloni, P.; Talavera-Martínez, L.; González-Hidalgo, M. Automatic Detection of Symmetry in Dermoscopic Images Based on Shape and Texture. Inf. Process. Manag. Uncertain. Knowl. Based Syst. 2020, 1237, 625–636. 33. Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [CrossRef] 34. Hramov, A.E.; Frolov, N.S.; Maksimenko, V.A.; Makarov, V.V.; Koronovskii, A.A.; Garcia-Prieto, J.; Antón-Toro, L.F.; Maestú, F.; Pisarchik, A.N. Artificial neural network detects human uncertainty. Chaos 2018, 28, 033607. [CrossRef] 35. Dresp-Langley, B. Seven Properties of Self-Organization in the Human Brain. Big Data Cogn. Comput. 2020, 4, 10. [CrossRef] 36. Wandeto, J.M.; Dresp-Langley, B. Ultrafast automatic classification of SEM image sets showing CD4+ cells with varying extent of HIV virion infection. In Proceedings of the 7ièmes Journées de la Fédération de Médecine Translationnelle de l’Université de Strasbourg, Strasbourg, France, 25–26 May 2019. 37. Dresp-Langley, B.; Wandeto, J.M. Unsupervised classification of cell imaging data using the quantization error in a Self-Organizing Map. In Transactions on Computational Science and Computational Intelligence; Arabnia, H.R., Ferens, K., de la Fuente, D., Kozerenko, E.B., Olivas Varela, J.A., Tinetti, F.G., Eds.; Advances in Artificial Intelligence and Applied Computing; Springer-Nature: Berlin/Heidelberg, Germany, in press. 38.
Wandeto, J.M.; Nyongesa, H.K.O.; Remond, Y.; Dresp-Langley, B. Detection of small changes in medical and random-dot images comparing self-organizing map performance to human detection. Inf. Med. Unlocked 2017, 7, 39–45. [CrossRef] 39. Wandeto, J.M.; Nyongesa, H.K.O.; Dresp-Langley, B. Detection of smallest changes in complex images comparing self-organizing map and expert performance. In Proceedings of the 40th European Conference on Visual Perception, Berlin, Germany, 27–31 August 2017. 40. Wandeto, J.M.; Dresp-Langley, B.; Nyongesa, H.K.O. Vision-Inspired Automatic Detection of Water-Level Changes in Satellite Images: The Example of Lake Mead. In Proceedings of the 41st European Conference on Visual Perception, Trieste, Italy, 26–30 August 2018. 41. Dresp-Langley, B.; Wandeto, J.M.; Nyongesa, H.K.O. Using the quantization error from Self-Organizing Map output for fast detection of critical variations in image time series. In ISTE OpenScience, collection “From data to decisions”; Wiley & Sons: London, UK, 2018. 42. Wandeto, J.M.; Dresp-Langley, B. The quantization error in a Self-Organizing Map as a contrast and colour specific indicator of single-pixel change in large random patterns. Neural Netw. 2019, 119, 273–285; Special Issue, Neural Netw. 2019, 120, 116–128. [CrossRef][PubMed] 43. Dresp-Langley, B.; Wandeto, J.M. Pixel precise unsupervised detection of viral particle proliferation in cellular imaging data. Inf. Med. Unlocked 2020, 20, 100433. [CrossRef][PubMed] 44. Dresp-Langley, B.; Reeves, A. Simultaneous brightness and apparent depth from true colors on grey: Chevreul revisited. Seeing Perceiving 2012, 25, 597–618. [CrossRef][PubMed]

45. Dresp-Langley, B.; Reeves, A. Effects of saturation and contrast polarity on the figure-ground organization of color on gray. Front. Psychol. 2014, 5, 1136. [CrossRef] 46. Dresp-Langley, B.; Reeves, A. Color and figure-ground: From signals to qualia. In Perception Beyond Gestalt: Progress in Vision Research; Geremek, A., Greenlee, M., Magnussen, S., Eds.; Psychology Press: Hove, UK, 2016; pp. 159–171. 47. Dresp-Langley, B.; Reeves, A. Color for the perceptual organization of the pictorial plane: Victor Vasarely’s legacy to Gestalt psychology. Heliyon 2020, 6, 04375. [CrossRef] 48. Bonnet, C.; Fauquet, A.J.; Estaún Ferrer, S. Reaction times as a measure of uncertainty. Psicothema 2008, 20, 43–48. 49. Brown, S.D.; Marley, A.A.; Donkin, C.; Heathcote, A. An integrated model of choices and response times in absolute identification. Psychol. Rev. 2008, 115, 396–425. [CrossRef][PubMed] 50. Luce, R.D. Response Times: Their Role in Inferring Elementary Mental Organization; Oxford University Press: New York, NY, USA, 1986. 51. Posner, M.I. Timing the brain: Mental chronometry as a tool in neuroscience. PLoS Biol. 2005, 3, e51. [CrossRef] 52. Posner, M.I. Chronometric Explorations of Mind; Erlbaum: Hillsdale, NJ, USA, 1978. 53. Hick, W.E. On the rate of gain of information. Q. J. Exp. Psychol. 1952, 4, 11–26. 54. Bartz, A.E. Reaction time as a function of stimulus uncertainty on a single trial. Percept. Psychophys. 1971, 9, 94–96. [CrossRef] 55. Jensen, A.R. Clocking the Mind: Mental Chronometry and Individual Differences; Elsevier: Amsterdam, The Netherlands, 2006. 56. Salthouse, T.A. Aging and measures of processing speed. Biol. Psychol. 2000, 54, 35–54. [CrossRef] 57. Kuang, S. Is reaction time an index of white matter connectivity during training? Cogn. Neurosci. 2017, 8, 126–128. [CrossRef][PubMed] 58. Ishihara, S. Tests for Color-Blindness; Hongo Harukicho: Tokyo, Japan, 1917. 59. Monfouga, M. Python Code for 2AFC Forced-Choice Experiments Using Contrast Patterns. 2019.
Available online: https://pumpkinmarie.github.io/ExperimentalPictureSoftware/ (accessed on 8 January 2021). 60. Dresp-Langley, B.; Monfouga, M. Combining Visual Contrast Information with Sound Can Produce Faster Decisions. Information 2019, 10, 346. [CrossRef] 61. Kohonen, T. Self-Organizing Maps. 2001. Available online: http://link.springer.com/10.1007/978-3-642-56927-2 (accessed on 8 January 2021). 62. Kohonen, T. MATLAB Implementations and Applications of the Self-Organizing Map; Unigrafia Oy: Helsinki, Finland, 2014; p. 177. 63. Nordfang, M.; Dyrholm, M.; Bundesen, C. Identifying bottom-up and top-down components of attentional weight by experimental analysis and computational modeling. J. Exp. Psychol. Gen. 2013, 142, 510–535. [CrossRef][PubMed] 64. Liesefeld, H.R.; Müller, H.J. Modulations of saliency signals at two hierarchical levels of priority computation revealed by spatial statistical distractor learning. J. Exp. Psychol. Gen. 2020, in press. [CrossRef][PubMed] 65. Dresp, B.; Fischer, S. Asymmetrical contrast effects induced by luminance and color configurations. Percept. Psychophys. 2001, 63, 1262–1270. [CrossRef] 66. Dresp-Langley, B. Why the brain knows more than we do: Non-conscious representations and their role in the construction of conscious experience. Brain Sci. 2012, 2, 1–21. [CrossRef] 67. Dresp-Langley, B. Generic properties of curvature sensing by vision and touch. Comput. Math. Methods Med. 2013, 634168. [CrossRef][PubMed] 68. Dresp-Langley, B. 2D geometry predicts perceived visual curvature in context-free viewing. Comput. Intell. Neurosci. 2015, 9. [CrossRef] 69. Gerbino, W.; Zhang, L. Visual orientation and symmetry detection under affine transformations. Bull. Psychon. Soc. 1991, 29, 480. 70. Batmaz, A.U.; de Mathelin, M.; Dresp-Langley, B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS ONE 2017, 12, e0183789. [CrossRef][PubMed] 71. Dresp-Langley, B.
Principles of perceptual grouping: Implications for image-guided surgery. Front. Psychol. 2015, 6, 1565. [CrossRef] 72. Martinovic, J.; Jennings, B.J.; Makin, A.D.J.; Bertamini, M.; Angelescu, I. Symmetry perception for patterns defined by color and luminance. J. Vis. 2018, 18, 4. [CrossRef][PubMed] 73. Treder, M.S. Behind the Looking-Glass: A Review on Human Symmetry Perception. Symmetry 2010, 2, 1510–1543. [CrossRef] 74. Spillmann, L.; Dresp-Langley, B.; Tseng, C.H. Beyond the classic receptive field: The effect of contextual stimuli. J. Vis. 2015, 15, 7. [CrossRef] 75. Tsogkas, S.; Kokkinos, I. Learning-Based Symmetry Detection in Natural Images. In Lecture Notes in Computer Science; Computer Vision—ECCV 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7578. 76. Liu, Y. Computational Symmetry in Computer Vision and Computer Graphics; Now Publishers Inc.: Norwell, MA, USA, 2009. 77. Bakhshandeh, S.; Azmi, R.; Teshnehlab, M. Symmetric uncertainty class-feature association map for feature selection in microarray dataset. Int. J. Mach. Learn. Cyber. 2020, 11, 15–32. [CrossRef] 78. Radovic, M.; Ghalwash, M.; Filipovic, N.; Obradovic, Z. Minimum redundancy maximum relevance feature selection approach for temporal gene expression data. BMC Bioinform. 2017, 18, 9. [CrossRef] 79. Strippoli, P.; Canaider, S.; Noferini, F.; D’Addabbo, P.; Vitale, L.; Facchin, F.; Lenzi, L.; Casadei, R.; Carinci, P.; Zannotti, M.; et al. Uncertainty principle of genetic information in a living cell. Theor. Biol. Med. Model. 2005, 30, 40. [CrossRef][PubMed]