Facial color is an efficient mechanism to visually transmit emotion

Carlos F. Benitez-Quiroz a,b, Ramprakash Srinivasan a,b, and Aleix M. Martinez a,b,1

aDepartment of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210; and bCenter for Cognitive and Brain Sciences, The Ohio State University, Columbus, OH 43210

Edited by Rebecca Saxe, Massachusetts Institute of Technology, Cambridge, MA, and accepted by Editorial Board Member Marlene Behrmann February 16, 2018 (received for review September 12, 2017)

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.

face perception | emotion | categorization | computer vision

Significance

Emotions correspond to the execution of a number of computations by the central nervous system. Previous research has studied the hypothesis that some of these computations yield visually identifiable facial muscle movements. Here, we study the supplemental hypothesis that some of these computations yield facial blood flow changes unique to each emotion category and valence. These blood flow changes are visible as specific facial color patterns to observers, who can then successfully decode the emotion. We present converging computational and behavioral evidence in favor of this hypothesis. Our studies demonstrate that people identify the correct emotion category and valence from these facial colors, even in the absence of any facial muscle movement. These results provide evidence for an efficient and robust communication of emotion through modulations of facial color.

Author contributions: A.M.M., with the help of R.S. and C.F.B.-Q., designed research; C.F.B.-Q. and R.S. performed research; C.F.B.-Q., R.S., and A.M.M. analyzed data; C.F.B.-Q., R.S., and A.M.M. wrote the paper; and A.M.M. conceived the main idea.

Conflict of interest statement: The Ohio State University has submitted a patent application that includes the recognition of emotion from facial color.

This article is a PNAS Direct Submission. R.S. is a guest editor invited by the Editorial Board.

This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

1To whom correspondence should be addressed. Email: [email protected].

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1716084115/-/DCSupplemental.

For over 2,300 y (1), philosophers and scientists have argued that people convey emotions through facial expressions by contracting and relaxing their facial muscles (2). The resulting observable muscle articulations are typically called action units (AUs) (3). It has been shown that different combinations of AUs yield facial expressions associated with distinct emotion categories (2, 4, 5), although cultural context (6, 7) and individual differences (8, 9) may influence their production and perception. These facial expressions are easily interpreted by our visual system, even in noisy environments (10–12).

Here, we test the supplemental hypothesis that facial color successfully transmits emotion independently of facial movements. Our hypothesis is that a face can send emotion information to observers by changing the blood flow or blood composition on the network of blood vessels closest to the surface of the skin (Fig. 1). Consider, for instance, the redness or the paleness of the skin sometimes associated with the experience of an emotion. These facial color variations can be caused by changes in blood flow, blood pressure, glucose levels, and other changes that occur during emotive experiences (13–15).

Can facial color accurately and robustly communicate emotion categories and valence to observers? That is, when experiencing an emotion, are the skin color changes of the same emotion category/valence consistent within and differential between categories/valences (even across individuals, gender, ethnicity, and skin color)? And are these changes successfully interpreted visually by observers? The present paper provides supporting evidence for this hypothesis.

First, we demonstrate that specific color features in distinct facial areas are indeed consistent within an emotion category (or valence) but differential between them. Second, we use a machine-learning algorithm to identify the most descriptive color features associated with each emotion category. This allows us to change the color of a neutral face (i.e., an image without any facial muscle movement) to match that of a specific emotion. Showing these images to human subjects demonstrates that people do perceive the correct emotion category (and valence) in the face even in the absence of muscle movements. Finally, we demonstrate that the emotion information transmitted by color is additive to that of AUs. That is, the contributions of color to emotion perception are (at least partially) independent of those of AUs. These results call for a revised model of the visual perception of facial expressions of emotion where facial color is successfully used to transmit and visually identify emotion.

Results

Our first goal is to determine whether the production of facial expressions of distinct emotions (plus the neutral ones) can be discriminated based on color features alone, having eliminated all shading information. By discriminant, we mean that there are color features that are consistent within an emotion category (regardless of identity, gender, ethnicity, and skin color) and differential between them. We test this in experiment 1. Experiment 2 identifies the color changes coding each emotion category. Experiment 3 incorporates these color patterns to neutral faces to show that people visually recognize emotion based on color features alone. And experiment 4 shows that this emotive color information is independent from that transmitted by AUs.

Fig. 1. Emotions are the execution of a number of computations by the nervous system. Previous research has studied the hypothesis that some of these computations yield muscle articulations unique to each emotion category. However, the human face is also innervated with a large number of blood vessels, as illustrated (veins in blue, arteries in red). These blood vessels are very close to the surface of the skin. This makes variations in blood flow visible as color patterns on the surface of the skin. We hypothesize that the computations executed by the nervous system during an emotion yield color patterns unique to each emotion category. (Copyright The Ohio State University.)

Experiment 1. We use images of 184 individuals producing 18 facial expressions of emotions (5) (Materials and Methods, Images). These include both genders and many ethnicities and races (SI Appendix, Fig. S1A and Extended Methods, Databases). Previous work has demonstrated that the pattern of AU activation in these facial expressions is consistent within an emotion category and differential between them (5, 7, 16). But are facial color patterns also consistent within each emotion and differential between emotions?

To answer this question, we built a linear machine-learning classifier using linear discriminant analysis (LDA) (17, 18) (Materials and Methods, Classification). First, each face is divided into 126 local regions defined by 87 anatomical landmark points (SI Appendix, Fig. S2). These facial landmark points define the contours of the face (jaw and crest line) and internal facial components (brows, eyes, nose, and mouth). A Delaunay triangulation of these points yields the local areas of the face shown in SI Appendix, Fig. S2. These local regions correspond to local areas of the network of blood vessels illustrated in Fig. 1.

Let the feature vector x̂_ij define the isoluminant opponent color (19) features of the jth emotion category as produced by the ith individual (Materials and Methods, Color Space).

If indeed local facial color is consistent within each emotion category and differential between them, then a linear classifier should discriminate these feature vectors as a function of their emotion category. We use a simple linear model and disjoint training and testing sets to avoid overfitting.

Testing was done using 10-fold cross-validation (Materials and Methods, Experiment 1). This means that we randomly split the dataset of all our images into 10 disjoint subsets of equal size. Nine of these subsets are used to train our linear classifier, while the 10th is used for testing, i.e., to determine how accurate this classifier is on unobserved data. This process is repeated 10 times, each time leaving a different subset out for testing. The average classification accuracy and SD on the left-out subsets are computed, yielding the confusion table shown in Fig. 2A.

Columns in this confusion table specify the true emotion category, and rows correspond to the classification given by our classifier. Thus, diagonal entries correspond to correct classifications and off-diagonal elements to confusions.

The correct classification accuracy, averaged over all emotion categories, is 50.15% (chance = 5.5%), and all emotion categories yield results that are statistically significant above chance (P < 0.001). A power analysis yielded equally supportive results (SI Appendix, Table S1). The high effect size and low P value, combined with the small number of confusions made by this linear classifier, provide our first evidence for the existence of consistent color features within each emotion category that are also differential across emotions.

These results are not dependent on the skin color, as demonstrated in SI Appendix, Fig. S3A and Results Are Not Dependent on Skin Color.

Classification of positive vs. negative valence by this machine-learning algorithm was equally supportive of our hypothesis. Classification was 92.93% (chance = 50%), P < 10^−90.

Experiment 2. To identify the most discriminant (diagnostic) color features, we repeated the above computational analysis but using a one-vs.-all approach (Materials and Methods, Experiment 2). This means that the images of the target emotion are used as the samples of one class, while the images of all other (nontarget) emotions are assigned to the second class. Specifically, LDA is used to yield the color models x̃_j of each of the target emotions and the color model of the nontarget emotions x̃_−j, j = 1, ..., 18. That is, x̃_j and x̃_−j are defined by the most discriminant color features given by LDA.

Using this LDA classifier and a 10-fold cross-validation procedure yields the accuracies and standard errors shown in Fig. 2B. As we can see in Fig. 2B, the classification accuracies of all emotion categories are statistically better than chance (P < 10^−50, chance = 50%; SI Appendix, Table S2). These results are not dependent on skin color, as demonstrated in SI Appendix, Fig. S3B.

Importantly, the relevance of the color channels in each local face area varies across emotion categories, as expected (SI Appendix, Fig. S4A). This result thus supports our hypothesis that experiencing distinct emotions yields differential color patterns on the face. That is, when producing facial expressions of distinct emotions, the observable facial color differs. But when distinct people produce facial expressions of the same emotion, these discriminant color features remain constant. Furthermore, the most discriminant color areas are mostly different from the most discriminant shape changes caused by AUs (SI Appendix, Fig. S4B). Thus, color features that transmit emotion are generally independent of AUs.

The identified color features can now be added to images of facial expressions, as shown in Fig. 3A. In addition, the dimensionality of the color space is 18, with each dimension defining an emotion category (SI Appendix, Fig. S5 and Dimensionality of the Color Space), further supporting our hypothesis.

Since the production of spontaneous expressions generally differs from that of posed expressions (16, 20), we also assessed the ability of isoluminant color to discriminate spontaneous expressions. Specifically, the DISFA (Denver Intensity of Spontaneous Facial Action) database (21) includes a large number of samples of four emotion categories plus neutral. Using these images and the above machine-learning approach yields an average classification accuracy of 95.53% (chance = 20%; SI Appendix, Table S3) (Fig. 2C). We note that spontaneous expressions of distinct emotions are more clearly discriminated than posed expressions, as expected.
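To make the classification protocol of experiments 1 and 2 concrete, the following minimal sketch runs a 10-fold cross-validated LDA classification on precomputed color feature vectors. This is an illustrative Python example, not the authors' code: the arrays `features` and `labels` are hypothetical placeholders, and scikit-learn's LinearDiscriminantAnalysis and StratifiedKFold stand in for the regularized LDA and balanced folds described in Materials and Methods.

```python
# Minimal sketch of the 10-fold cross-validated LDA classification
# described in experiment 1 (and, with relabeled classes, experiment 2).
# Assumes `features` is an (n_images x n_dims) array of per-face color
# feature vectors and `labels` holds the emotion category of each image.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

def cross_validated_lda(features, labels, n_folds=10, seed=0):
    """Return per-fold accuracies and the summed confusion table."""
    classes = np.unique(labels)
    confusion = np.zeros((len(classes), len(classes)), dtype=int)
    accuracies = []
    # Stratified folds keep the proportion of each emotion category
    # comparable across the 10 disjoint subsets.
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(features, labels):
        lda = LinearDiscriminantAnalysis()
        lda.fit(features[train_idx], labels[train_idx])
        predicted = lda.predict(features[test_idx])
        accuracies.append(np.mean(predicted == labels[test_idx]))
        # Rows = predicted category, columns = true category (as in Fig. 2A).
        for true, pred in zip(labels[test_idx], predicted):
            confusion[np.searchsorted(classes, pred),
                      np.searchsorted(classes, true)] += 1
    return np.array(accuracies), confusion

# Hypothetical usage:
# acc, table = cross_validated_lda(features, labels)
# print(acc.mean(), acc.std())
```

For the one-vs.-all analysis of experiment 2, the same routine can be called with the labels collapsed to "target" vs. "nontarget" for each emotion category in turn.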

Fig. 2. (A) Confusion table of the multiclass classification with LDA (experiment 1). Rows indicate the predicted category and columns the true category. The size of each circle specifies the classification accuracy (i.e., larger circles indicate higher accuracy), and the color of the circle specifies the SD. (B) Results of the two-way classifications (target vs. nontarget emotions) using LDA (experiment 2). Blue bars indicate the classification accuracy (chance = 50%); error bars specify SE. All results are statistically significant, ****P < 10^−50. (C) Confusion table of the classification of the four emotion categories in DISFA. A, angry; AD, angrily disgusted; AS, angrily surprised; D, disgusted; DS, disgustedly surprised; F, fearful; FA, fearfully angry; FD, fearfully disgusted; FS, fearfully surprised; H, happy; HD, happily disgusted; HS, happily surprised; N, neutral; S, sad; SA, sadly angry; SD, sadly disgusted; SF, sadly fearful; SS, sadly surprised; Su, surprised.

Fig. 3. (A) Sample pairs of images of facial expressions of emotion; for each pair, the Left image includes the diagnostic facial colors identified in experiment 2 (SI Appendix, Fig. S7). The first pair of images corresponds to the expression of happiness and the second pair to the expression of anger. Consistent with our hypothesis, specific color features remain unchanged for images of the same emotion category but are differential between distinct categories. (B) Sample images Ĩ_ij of neutral faces modified to include the diagnostic colors of the following six emotion categories: angry, disgusted, happy, happily disgusted, sad, fearfully surprised. (C and D) Sample trials of the two- and six-alternative forced-choice methods of experiment 3.

Experiment 3. The color models of experiment 2 are now used to modify the neutral faces of the 184 individuals in our database. That is, we change the color of neutral faces using the discriminant color models of the target emotion, x̃_j, and of the nontarget emotions, x̃_−j, identified above. Specifically, the feature vector x_in associated with image I_in (of a neutral face) is modified using the equation y_ij = x_in + α(x̃_j − x̃_−j), with α = 1 (Materials and Methods, Experiment 3). Let us call Ĩ_ij the image associated with y_ij. We used this approach to add the color features associated with the following six emotion categories to neutral faces: angry, sad, disgusted, happy, happily disgusted, and fearfully surprised (Fig. 3B). This yields a new set of images Ĩ_ij, j = 1, ..., 6.

These images were then shown in pairs to participants (Fig. 3C and SI Appendix, Fig. S6). Each pair had an image Ĩ_ij corresponding to the target emotion and another one, Ĩ_ik, k ≠ j, corresponding to one of the nontarget emotions. Half of the time Ĩ_ij was the left image, and half of the time it was the right one. Participants responded by keypress. This order was randomized across participants. Participants were asked to identify which of the two images in the screen appears to express emotion j (Materials and Methods, Experiment 3). All target, nontarget emotion pairs were tested.

The percentage of times participants selected Ĩ_ij over Ĩ_ik is given in Fig. 4A. As can be seen in Fig. 4A, participants consistently preferred the neutral face with the target emotion color rather than the image with the colors of a nontarget emotion, Ĩ_ik. These results were statistically significant, P < 0.01 (SI Appendix, Table S4). The recognition of positive vs. negative valence was also significant; accuracy = 82.65% (chance = 50%), P < 10^−4.

Next, we tested the ability of subjects to classify each image Ĩ_ij into one of six possible choices, i.e., a six-alternative forced-choice experiment (Materials and Methods, Six-alternative forced-choice experiment) (Fig. 3D). The results are shown in Fig. 4B; average classification accuracy is statistically significant (chance = 16.67%), except for happily disgusted, which is confused with happy. Nonetheless, this confusion yields the correct perception of valence. In fact, the average accuracy of the recognition of valence in this challenging experiment is 72.9% (chance = 50%, P < 10^−5) (SI Appendix, Table S5).
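As a concrete illustration of the stimulus construction of experiment 3, the sketch below applies the equation y_ij = x_in + α(x̃_j − x̃_−j) to the feature vector of a neutral face. It is a minimal Python sketch under stated assumptions (precomputed feature vectors with hypothetical names), not the authors' implementation; rendering the modified vector as an image additionally requires reprojecting the per-region statistics onto the face pixels.

```python
# Illustrative sketch of the experiment 3 stimulus construction:
# y_ij = x_in + alpha * (x_tilde_j - x_tilde_minus_j).
# Assumes the three inputs are feature vectors of equal length
# (e.g., per-region color statistics of a face image).
import numpy as np

def add_emotion_color(x_neutral, color_model_target, color_model_nontarget,
                      alpha=1.0):
    """Shift a neutral face's color features toward a target emotion."""
    return (np.asarray(x_neutral, dtype=float)
            + alpha * (np.asarray(color_model_target, dtype=float)
                       - np.asarray(color_model_nontarget, dtype=float)))

# Hypothetical usage for the six categories shown in Fig. 3B:
# modified = {emo: add_emotion_color(x_neutral, models[emo], models_rest[emo])
#             for emo in ["angry", "sad", "disgusted", "happy",
#                         "happily disgusted", "fearfully surprised"]}
```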


Fig. 4. Results of experiments 3 and 4. (A) The y axis shows the proportion of times participants indicated that the neutral image with the diagnostic color features of the target emotion specified in the x axis (Ĩ_ij) expresses that emotion better than the neutral face with the colors of a nontarget emotion (Ĩ_ik). Images are in isoluminant color space; i.e., images do not have any shading information. (B) Confusion table of a six-alternative forced-choice experiment. Subjects are shown a single image at a time and asked to classify it into a set of six emotion categories. The size of the circles specifies classification accuracies, and colors are SD. Accuracies are better than chance except for happily disgusted, which is confused for happy. (C) The y axis shows the proportion of times participants indicated that a face with the AUs and diagnostic color features of the target emotion specified in the x axis (I⁺_ij) expresses that emotion better than a face with the AUs of the target emotion and the colors of a nontarget emotion (I^k_ij). ***P < 0.0001, **P < 0.001, *P < 0.01.

Experiment 4. One may wonder whether the effect demonstrated in the previous experiment disappears when the facial expression also includes AUs. That is, are the emotion information visually transmitted by facial muscle movements and diagnostic color features independent from one another? If the color features do indeed transmit (at least some) information independent from that of AUs, then having the color of the target emotion j should yield a clearer perception of the target emotion than having the colors of a nontarget emotion k; i.e., congruent colors make categorization easier, and incongruent colors make it more difficult.

To test this prediction, we created two additional image sets. In the first image set, we added the color model of emotion category j to the images of the 184 individuals expressing the same emotion with AUs. Formally, image I⁺_ij corresponds to the modified feature vector z_ij = x_ij + α(x̃_j − x̃_−j). Here x_ij is the feature vector of the facial expression of emotion j as expressed by individual i (i.e., the feature vector of image I_ij), and α = 1 (Materials and Methods, Experiment 4; Fig. 3A; and SI Appendix, Fig. S7). In the second set, we added the color features of a nontarget emotion k to an image of a facial expression of emotion j, k ≠ j. The resulting images, I^k_ij, are given by the feature vectors z^k_ij = x_ij + α(x̃_k − x̃_−k), α = 1 (SI Appendix, Fig. S7). Thus, I⁺_ij is an image with the facial color hypothesized to transmit emotion j, while I^k_ij is an image with the diagnostic colors of emotion k instead, k ≠ j.

Participants completed a two-alternative forced-choice experiment where they had to indicate which of two images of facial expressions conveys emotion j more clearly, I⁺_ij or I^k_ij, k ≠ j (Materials and Methods, Experiment 4 and SI Appendix, Fig. S7). The location of I⁺_ij and I^k_ij (left or right) was randomized. The proportion of times subjects selected I⁺_ij as more clearly expressive of emotion j than I^k_ij is shown in Fig. 4C. As predicted, the selection of I⁺_ij was significantly above chance (chance = 50%). These results are statistically significant, P < 0.01 (SI Appendix, Table S6).

The recognition of valence was even stronger; average classification = 85.01% (chance = 50%, P < 10^−5).

These results support our hypothesis of an (at least partially) independent transmission of the emotion signal by color features.
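The forced-choice results of experiments 3 and 4 reduce to per-participant proportions of trials in which the congruently colored image was chosen, tested against the 50% chance level with a one-tailed t test (Materials and Methods). Below is a minimal sketch of that analysis in Python; the input array name is hypothetical and scipy's one-sample t test is used as a stand-in consistent with the Methods description.

```python
# Minimal sketch: test whether the mean per-participant proportion of
# congruent-color choices exceeds chance (0.5) with a one-tailed t test.
import numpy as np
from scipy import stats

def above_chance_p(selection_proportions, chance=0.5):
    """One-tailed p value for mean(selection_proportions) > chance."""
    props = np.asarray(selection_proportions, dtype=float)
    t_stat, p_two_tailed = stats.ttest_1samp(props, popmean=chance)
    # Halve the two-tailed p value when the effect is in the predicted
    # direction; otherwise report the complementary one-tailed value.
    return p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

# Hypothetical example: proportions from 20 participants for one target emotion.
# p = above_chance_p([0.70, 0.65, 0.80, 0.75, 0.60])
```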

Discussion

Emotions are the execution of a number of computations by the nervous system. For over two millennia, studies have evaluated the hypothesis that some of these computations yield visible facial muscle changes (called AUs) specific to each emotion (1, 2, 4, 5, 7, 22).

The present paper studied the supplemental hypothesis that these computations result in visible facial color changes caused by emotion-dependent variations in facial blood flow. The studies reported in the present paper provide evidence favoring this hypothesis. Crucially, this visual signal is correctly interpreted even in the absence of facial muscle movement. Additionally, the emotion signal transmitted by color is additive to that encoded in the facial muscle movements. Thus, the emotion information transmitted by color is at least partially independent from that by facial movements.

Supporting our results, the human retina (23) and later visual areas (24) have cells specialized for different types of stimuli. Some cells are tuned to detect motion, such as that caused by facial movements. Other cells are specialized for color perception. The visual analysis of emotion using these two separate systems (motion and color) adds robustness to the emotion signal, allowing people to more readily interpret it in noisy environments. Moreover, the existence of two separate neural mechanisms for the analysis of motion and color provides a plausible explanation for the independence of emotion information transmitted by facial movements and facial color.

Also in support of our results is the observation that human faces are mostly bare, whereas those of other primates are covered by facial hair (25). This bareness facilitates the transmission of emotion through the modulation of facial color in humans. This could mean that the transmission of an emotion signal through facial color is a mechanism not available to all primates and may be a result of recent evolutionary forces.

These findings call for a revisit to emotion models. For instance, the present paper has demonstrated that color can successfully communicate 18 distinct emotion categories to an observer as well as positive and negative valence, but more categories or degrees of valence are likely (7, 26). For example, it is possible that new combinations of AUs and colors, not tested in the present study, can yield previously unidentified facial expressions of emotion. This is a fundamental question in emotion research likely to influence the definition of emotion and the role of emotions in high-level cognitive tasks (26, 27).

The results of the present study also call for the study of the brain pathways associated with emotion perception using color alone and in conjunction with facial movements. Previous studies have suggested the existence of a variety of pathways in the interpretation of facial expressions of emotion, but the specificity of some of these pathways is poorly understood (28–30). Understanding independent domain-specific mechanisms such as the ones supported by our results is a topic of high value in neuroscience (31, 32).

Current computational models of the perception of facial expressions of emotion (7) will also need to be revised to include the contribution of facial color perception. Similarly, algorithms in artificial intelligence and computer vision (e.g., human–robot interaction) will need to be extended to interpret emotion through these two independent mechanisms if they are to be indistinguishable from humans, i.e., to pass a visual Turing test (33). Also consistent with our hypothesis, studies in computer graphics demonstrate the importance that color plays in the expression of emotion (34–37).

Our findings are also significant in the study of psychopathologies. A recurring characteristic in psychopathologies is an atypical perception of facial expressions of emotion (38). But is this caused by limitations in interpreting facial movements or color? Or is this dependent on the disorder?

Finally, the results of the present study support the view that painters can exploit the perception of color to yield an emotional interpretation, as has been noted for years (39). Of note are the paintings of Mark Rothko, consisting of combined blocks of blurred colors. The studies reported in the present paper could help understand the perception of emotion so eloquently conveyed by this and other painters.

Materials and Methods

Approval and Consent. The experiment design was approved by the Office of Responsible Research Practices at The Ohio State University. Subjects provided written consent.

Images. We used images of 184 individuals expressing 18 emotion categories, plus neutral. The images are from refs. 5 and 21 and have been extensively validated for consistency of production and recognition (7). These emotion categories are happy, sad, fearful, angry, disgusted, surprised, happily surprised, happily disgusted, sadly fearful, sadly angry, sadly surprised, sadly disgusted, fearfully angry, fearfully surprised, fearfully disgusted, angrily surprised, angrily disgusted, and disgustedly surprised. The dataset also includes the neutral face (SI Appendix, Extended Methods, Databases).

Areas of the Face. Denote each face color image of p × q pixels as I_ij ∈ R^(p×q×3), where i specifies the subject and j the emotion category, and let d_ij be the 2D coordinates of the landmark points of each of the facial components of the face (SI Appendix, Fig. S2). A Delaunay triangulation (40) of these points is used to create the triangular local areas shown in SI Appendix, Fig. S2. This triangulation yields a total of 148 local triangular areas on the image. Six of these triangles are in the interior of the mouth and 16 inside the eyes (sclera and iris). Since blood changes in these areas are generally not visible, they are removed from further consideration, leaving a total of 126 local triangular areas. Let s_ijk be the function that returns the pixels of the kth of these local regions; i.e., s_ijk(I_ij) returns the pixels within the kth Delaunay triangle of image I_ij.

Every image I_ij is then represented by the following feature vector of color statistics: x_ij = (μ_ij1^T, σ_ij1, ..., μ_ij126^T, σ_ij126)^T ∈ R^504, with the first and second moments (i.e., mean and variance) of the color data of each local region given by

μ_ijk = l^(−1) Σ_{s=1}^{l} h_ijks,  σ_ijk = (l − 1)^(−1) Σ_{s=1}^{l} (h_ijks − μ_ijk)^T (h_ijks − μ_ijk),

where h_ijks = (h_ijks1, h_ijks2, h_ijks3)^T ∈ R^3 defines the values of the LMS channels of the sth pixel within the kth triangle and l is the number of pixels within this triangle. Using the same modeling, we define the color feature vector x_in of every neutral face I_in.

Color Space. The above-defined feature vectors are mapped onto the isoluminant color space given by the opponent color channels yellow–blue and red–green. We also use the normalization proposed by Bratkova et al. (41), which maintains the properties of saturation and hue of the vectors in color space. The first and second moments of the color data are computed in this isoluminant color space; i.e., the feature vectors used in all experiments carry color information but no shading information.

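To make the Areas of the Face and Color Space computations concrete, the sketch below derives a per-region color feature vector from landmark points: it triangulates the landmarks, pools the pixels of each triangle, and stores the mean and variance of two opponent color channels per region. This is a minimal Python illustration under stated assumptions, not the authors' implementation: the opponent transform is a generic red–green/yellow–blue approximation rather than the exact normalization of ref. 41, it uses two channels instead of the LMS-based 504-dimensional vector of the paper, and the variable names are hypothetical.

```python
# Minimal sketch of per-region color statistics (Areas of the Face / Color
# Space). Assumptions: `image` is an RGB array of shape (rows, cols, 3) in
# [0, 1]; `landmarks` is an (n_points, 2) array of (x, y) coordinates.
import numpy as np
from scipy.spatial import Delaunay

def opponent_channels(image):
    """Return generic opponent channels (red-green, yellow-blue)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    red_green = r - g
    yellow_blue = 0.5 * (r + g) - b
    return np.stack([red_green, yellow_blue], axis=-1)

def region_color_features(image, landmarks):
    """Concatenate per-triangle mean and variance of the opponent channels."""
    opp = opponent_channels(image).reshape(-1, 2)
    rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    pixel_xy = np.column_stack([cols.ravel(), rows.ravel()])
    tri = Delaunay(landmarks)                 # triangulate the landmark points
    membership = tri.find_simplex(pixel_xy)   # triangle index of each pixel
    features = []
    for k in range(len(tri.simplices)):
        pixels = opp[membership == k]
        if len(pixels) < 2:                   # skip empty/degenerate regions
            continue
        features.append(pixels.mean(axis=0))           # first moment
        features.append([pixels.var(axis=0).sum()])    # pooled second moment
    return np.concatenate([np.ravel(f) for f in features])
```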
Classification. LDA is computed on the above-defined color space. Formally, the LDA subspace is defined by the eigenvectors associated with the nonzero eigenvalues of the matrix Σ_X^(−1) S_B, where Σ_X = Σ_x + δ I is the (regularized) sample covariance matrix of the feature vectors, δ = 0.01, I is the identity matrix, S_B = Σ_{c=1}^{C} (μ_c − μ)(μ_c − μ)^T is the between-class scatter matrix, μ_c is the mean feature vector of class c, μ is the mean feature vector of all samples, and C is the number of classes. Test feature vectors are projected onto this subspace, and each test sample is assigned to the emotion category of the nearest class mean, given by the Euclidean distance.

Experiment 1. For 10-fold cross-validation, we divide the set of samples into 10 disjoint subsets {S_1, ..., S_10}, with every subset having the same number of samples of each emotion category (plus neutral). In the tth fold, the samples not in S_t are used to compute the LDA subspace and the class means, and the samples in S_t are used for testing only; i.e., they were not used to compute the LDA subspace. The classification accuracy of the tth fold is the fraction of test samples in S_t assigned to their true emotion category, where the true category is returned by an oracle function. We repeat this procedure for the 10 folds and report the mean classification accuracy and its SD. The results reported in experiment 1 follow this procedure when using the datasets of refs. 5 and 21, respectively.

Experiment 2. We identify the most discriminant color features of each emotion category by repeating the approach described above (Materials and Methods, Experiment 1) 18 times, each time assigning the samples of one emotion category to class 1 (i.e., the target emotion) and the samples of all other emotion categories to class 2 (i.e., the nontarget emotions); thus, C = 2. The color models x̃_j and x̃_−j are the most discriminant color features of the target and nontarget classes given by LDA, and their reprojection into the original feature space is called reconstruction. Since class 2 contains many more samples than class 1, we apply downsampling to class 2 to match the number of samples in class 1 and avoid biases due to this sample imbalance; we repeated this procedure 1,000 times, each time drawing a random sample of class 2, and computed the average. We use the same 10-fold cross-validation procedure and nearest-mean classifier described in experiment 1.

Experiment 3.
Subjects. Twenty human subjects (7 women) with normal or corrected-to-normal vision participated in this experiment. Subjects were tested for color blindness.
Stimuli. We modify the original neutral faces, I_in, i = 1, ..., 184, to include the diagnostic color features of emotion j identified in experiment 2; this yields the images Ĩ_ij of Fig. 3B. Specifically, the images Ĩ_ij are given by the color feature vectors y_ij = x̄_in + α(x̃_j − x̃_−j), α = 1, where x̄_in is the feature vector with the average color and luminance of the neutral image. This equation adds the diagnostic color of emotion j to a neutral expression. The resulting images have the same luminance in every pixel (Fig. 3B). We selected the six emotion categories shown in Fig. 3B to keep the behavioral experiment shorter than 35 min.
Design and procedure. Subjects were comfortably seated 50 cm from a 21-inch cathode ray tube (CRT) monitor. We used a block design. In each of the six blocks, we tested one of the six emotion categories. At the beginning of each block, the target emotion was indicated with a sample image of a facial expression of that emotion, i.e., an image with the prototypical AUs present. Each of the six blocks consists of 20 trials. Each trial is a two-alternative forced-choice behavioral experiment. Fig. 3C illustrates a typical timeline of a trial. Participants first viewed a blank screen for 500 ms, followed by a white fixation cross for 500 ms, followed by the image pair Ĩ_ij and Ĩ_ik, k ≠ j (Fig. 3C). Responses were recorded by keypress. Statistical significance is given by a one-tailed t test.
Six-alternative forced-choice experiment. In the six-alternative forced-choice experiment (6AFC), participants saw a single image in each trial, and there was a single block with all images randomized. Participants chose one of six possible emotion category labels provided to the right of the image (Fig. 3D). Selection was done with the computer mouse.

Experiment 4.
Subjects. Twenty human subjects (8 women; mean age, 24.45 y) participated. All subjects passed a color blindness test.
Stimuli. We modified the images of six facial expressions of emotion with the prototypical AUs active, I_ij, i = 1, ..., 184, j = 1, ..., 6 (Fig. 3A). We modified these images to have the colors of a target or nontarget emotion. The images with the target emotion colors added are I⁺_ij and are given by the modified feature vector z_ij = x_ij + α(x̃_j − x̃_−j), i = 1, ..., 184, α = 1. Hence, I⁺_ij are images of facial expressions with the AUs of emotion category j plus the colors of emotion j (Fig. 3 and SI Appendix, Fig. S7). The images with the modified color of nontarget emotions, I^k_ij, are given by the feature vector z^k_ij = x_ij + α(x̃_k − x̃_−k) with k ≠ j, α = 1 (SI Appendix, Fig. S7).
Design and procedure. Participants were comfortably seated 50 cm from a 21-inch CRT monitor. We used the same block design of experiment 3. Each trial is a two-alternative forced-choice behavioral experiment. SI Appendix, Fig. S8 illustrates a typical timeline of a trial. In each trial, participants viewed a blank screen for 500 ms, followed by a white fixation cross for 500 ms, followed by the image pair I⁺_ij, I^k_ij. Participants are instructed to indicate whether the left or right image expresses emotion j more clearly. Responses are recorded by keypress. The positions (left, right) of I⁺_ij and I^k_ij are randomized. Their presentation order (trials) is also randomized across participants. Statistical significance is given by a one-tailed t test, which yielded the values 0.0000081 (H), 0.0005 (S), 0.0051 (A), 0.00015 (D), 0.0016 (HD), and 0.0006 (FS) (abbreviations H, S, A, D, HD, and FS are defined in Fig. 2).

ACKNOWLEDGMENTS. We thank the reviewers and editor for constructive feedback. We also thank Ralph Adolphs and Lisa Feldman Barrett for comments and Delwin Lindsey and Angela Brown for pointing us to ref. 25. This research was supported by the National Institutes of Health, Grant R01-DC-014498, and the Human Frontier Science Program, Grant RGP0036/2016.

1. Aristotle (1939) Aristotle: Minor Works (Harvard Univ Press, Cambridge, MA).
2. Duchenne GB (1990) The Mechanism of Human Facial Expression (Cambridge Univ Press, Paris).
3. Ekman P, Rosenberg EL (1997) What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS) (Oxford Univ Press, New York).
4. Ekman P, Sorenson ER, Friesen WV (1969) Pan-cultural elements in facial displays of emotion. Science 164:86–88.
5. Du S, Tao Y, Martinez AM (2014) Compound facial expressions of emotion. Proc Natl Acad Sci USA 111:E1454–E1462.
6. Jack RE, Garrod OG, Yu H, Caldara R, Schyns PG (2012) Facial expressions of emotion are not culturally universal. Proc Natl Acad Sci USA 109:7241–7244.
7. Martinez AM (2017) Visual perception of facial expressions of emotion. Curr Opin Psychol 17:27–33.
8. Barrett LF, Kensinger EA (2010) Context is routinely encoded during emotion perception. Psychol Sci 21:595–599.
9. Etkin A, Büchel C, Gross JJ (2015) The neural bases of emotion regulation. Nat Rev Neurosci 16:693–700.
10. Smith FW, Schyns PG (2009) Smile through your fear and sadness: Transmitting and identifying facial expression signals over a range of viewing distances. Psychol Sci 20:1202–1208.
11. Du S, Martinez AM (2011) The resolution of facial expressions of emotion. J Vis 11:24.
12. Srinivasan R, Golomb JD, Martinez AM (2016) A neural basis of facial action recognition in humans. J Neurosci 36:4434–4442.
13. Blake TM, Varnhagen CK, Parent MB (2001) Emotionally arousing pictures increase blood glucose levels and enhance recall. Neurobiol Learn Mem 75:262–273.
14. Cahill L, Prins B, Weber M, McGaugh JL (1994) β-adrenergic activation and memory for emotional events. Nature 371:702–704.
15. Garfinkel SN, Critchley HD (2016) Threat and the body: How the heart supports fear processing. Trends Cognit Sci 20:34–46.
16. Fabian Benitez-Quiroz C, Srinivasan R, Martinez AM (2016) EmotioNet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. Proc IEEE Conf Comput Vis Pattern Recognit, 10.1109/CVPR.2016.600.
17. Martinez AM, Zhu M (2005) Where are linear feature extraction methods applicable? IEEE Trans Pattern Anal Mach Intell 27:1934–1944.
18. Martínez AM, Kak AC (2001) PCA versus LDA. IEEE Trans Pattern Anal Mach Intell 23:228–233.
19. Gegenfurtner KR (2003) Cortical mechanisms of colour vision. Nat Rev Neurosci 4:563–572.
20. Fernández-Dols JM, Crivelli C (2013) Emotion and expression: Naturalistic studies. Emot Rev 5:24–29.
21. Mavadati SM, Mahoor MH, Bartlett K, Trinh P, Cohn JF (2013) DISFA: A spontaneous facial action intensity database. IEEE Trans Affect Comput 4:151–160.
22. Hjortsjö CH (1969) Man's Face and Mimic Language (Studentlitteratur, Lund, Sweden).
23. Euler T, Haverkamp S, Schubert T, Baden T (2014) Retinal bipolar cells: Elementary building blocks of vision. Nat Rev Neurosci 15:507–519.
24. Bach M, Hoffmann MB (2000) Visual motion detection in man is governed by non-retinal mechanisms. Vis Res 40:2379–2385.
25. Changizi MA, Zhang Q, Shimojo S (2006) Bare skin, blood and the evolution of primate colour vision. Biol Lett 2:217–221.
26. Spunt RP, Adolphs R (2017) The neuroscience of understanding the emotions of others. Neurosci Lett, in press.
27. Barrett LF (2017) Categories and their role in the science of emotion. Psychol Inq 28:20–26.
28. Méndez-Bértolo C, et al. (2016) A fast pathway for fear in human amygdala. Nat Neurosci 19:1041–1049.
29. Wang S, et al. (2014) Neurons in the human amygdala selective for perceived emotion. Proc Natl Acad Sci USA 111:E3110–E3119.
30. Lindquist KA, Wager TD, Kober H, Bliss-Moreau E, Barrett LF (2012) The brain basis of emotion: A meta-analytic review. Behav Brain Sci 35:121–143.
31. Spunt RP, Adolphs R (2017) A new look at domain specificity: Insights from social neuroscience. Nat Rev Neurosci 18:559–567.
32. Kanwisher N (2010) Functional specificity in the human brain: A window into the functional architecture of the mind. Proc Natl Acad Sci USA 107:11163–11170.
33. Geman D, Geman S, Hallonquist N, Younes L (2015) Visual Turing test for computer vision systems. Proc Natl Acad Sci USA 112:3618–3623.
34. Mathur MB, Reichling DB (2016) Navigating a social world with robot partners: A quantitative cartography of the uncanny valley. Cognition 146:22–32.
35. Jimenez J, et al. (2010) A practical appearance model for dynamic facial color. ACM Trans Graphics 29:141.
36. Alkawaz MH, Basori AH (2012) The effect of emotional colour on creating realistic expression of avatar. Proceedings of the 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (ACM, New York), pp 143–152.
37. de Melo CM, Gratch J (2009) Expression of emotions using wrinkles, blushing, sweating and tears. International Workshop on Intelligent Virtual Agents, eds Ruttkay Z, Kipp M, Nijholt A, Vilhjálmsson HH (Springer, Amsterdam), pp 188–200.
38. Bruyer R (2014) The Neuropsychology of Face Perception and Facial Expression (Psychology Press, New York).
39. Kandel ER (2016) Reductionism in Art and Brain Science: Bridging the Two Cultures (Columbia Univ Press, New York).
40. Delaunay B (1934) Sur la sphère vide. Bull Acad Sciences USSR VII: Class Sci Math, pp 793–800.
41. Bratkova M, Boulos S, Shirley P (2009) oRGB: A practical opponent color space for computer graphics. IEEE Comput Graphics Appl 29:42–55.
