
Facial expressions dynamically decouple the transmission of emotion categories and intensity over time

Chaona Chen1, Daniel S. Messinger2, Cheng Chen3, Hongmei Yan4, Yaocong Duan1, Robin A. A. Ince5, Oliver G. B. Garrod5, Philippe G. Schyns1,5, & Rachael E. Jack1,5

1School of Psychology, University of Glasgow, UK
2Department of Psychology, University of Miami, Florida, USA
3Foreign Language Department, Teaching Center for General Courses, Chengdu Medical College, Chengdu, China
4The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
5Institute of Neuroscience and Psychology, University of Glasgow, Scotland, UK

Abstract. Facial expressions dynamically transmit information-rich social messages. How they achieve this complex signalling task remains unknown. Here we identified, in two cultures – East Asian and Western European – the specific face movements that transmit two key signalling elements – emotion categories (e.g., ‘happy’) and intensity (e.g., ‘very intense’) – in basic and complex emotions. Using a data-driven approach and information-theoretic analyses, we identified, in the six basic emotions (e.g., happy, fear, sad), the specific face movements that transmit the emotion category (classifiers), intensity (intensifiers), or both (classifier+intensifier) to each of 60 participants in each culture. We validated these results in a broader set of complex emotions (e.g., excited, shame). Cross-cultural comparisons revealed cultural similarities (e.g., eye whites as intensifiers) and differences (e.g., mouth gaping). Further, in both cultures, classifier and intensifier face movements are temporally distinct. Our results reveal that facial expressions transmit complex emotion messages by cascading information over time.

One Sentence Summary. Facial expressions of emotion universally transmit multiplexed emotion information using specific face movements that signal emotion categories and intensity in a temporally structured manner.


Social communication is essential for the survival of most species because it provides important information about the internal states1 and behavioural intentions2 of others. Across the animal kingdom, social communication is often achieved using non-verbal signalling such as facial expressions3-6. For example, when smiling retracts the corners of the lips, this facial movement is often readily perceived as a sign of happiness or appeasement in humans, apes, and dogs7-9. Facial expressions can also convey additional important information such as emotional intensity – for example, contentment to cheerful to delighted and ecstatic – each of which can also signal affiliation and social bonding or reward and joy10-12. Across human cultures, the intensity of expressed emotion can also lead to different social inferences – for example, in Western European cultures broad smiling is often associated with positive traits such as competence and leadership. In contrast, in Eastern cultures such as Russia and China where milder expressions are favoured, broad smiles are often associated with negative traits such as low intelligence13 or high dominance14. Therefore, facial expressions are a powerful tool for social communication because they can transmit information-rich social messages, such as emotion categories and their intensities, that inform and shape subsequent social perceptions and interactions15-20. However, how facial expressions achieve this complex signalling task remains unknown – that is, which specific components of facial expression signals transmit the key elements of a social message: its category and intensity.

Here, we address this question by studying the communicative functions and adaptive significance of human facial expressions of emotion from the perspective of theories of communication (see Fig. 1). These theories posit that signals are designed to serve several main purposes, two of which are particularly important for social communication. The first main purpose is ‘classifying,’ which enables the receiver to recognize a particular emotion category. For example, smiles are typically associated with states of happiness and positive affect. The second main purpose is ‘intensification,’ whereby specific modulations of a signal – such as variations in amplitude, size, duration, or repetition rate – enhance its salience, quickly draw the receiver’s attention, and communicate the magnitude of the social message. For example, larger, higher-amplitude signals are detectable from longer distances21, and signals with long durations or high repetition rates can easily draw the attention of otherwise distracted receivers22,23, thereby enabling them to focus on analysing the signal in more detail24-26, which may be particularly important in cases of threat. Although certain signals might serve to communicate either the emotion category or its intensity, some might play a dual role, particularly for emotions that require efficient signalling to elicit rapid responses from others, such as surprise, fear, disgust, or anger. We study these communicative functions in two distinct cultures – East Asian and Western European – each with known differences in perceiving facial expressions27,28, to derive a culturally informed understanding of facial expression communication29,30. Fig. 1 illustrates the logic of our hypothesis as a Venn diagram, where each colour represents a different communicative function.


Fig. 1 | Sending and receiving signals for social communication. To communicate a message to others, the sender encodes a message (e.g., “I am very happy,” coloured in blue) in a signal. Here, the signal is a facial expression composed of different face movements, called Action Units (AUs)31. The sender transmits this signal to the receiver across a communication channel. On receiving the signal, the receiver decodes a message from it (“he is very happy”) according to existing associations. A complex signal such as a facial expression could contain certain components – e.g., smiling, crinkled eyes, or a wide-open mouth – that transmit specific elements of the message, such as the emotion category ‘happy’ or its intensity (‘very’). We represent these different communicative functions using a Venn diagram. Green represents the set of AUs that communicate the emotion category (‘Classify,’ e.g., ‘happy’), red represents those that communicate intensity (‘Intensify,’ e.g., ‘very’), and orange represents those that serve a dual role of classification and intensification (‘Classify & Intensify’).

The green set represents the facial signals that receivers use to classify the emotion category (e.g., ‘happy’), red represents those that receivers use to perceive emotional intensity (e.g., ‘very’), and the orange intersection represents the facial signals that serve both functions of classification and intensification (e.g., ‘very happy’). The empirical question we address is to identify, in each culture, the facial signals – here, individual face movements called Action Units31 (AUs) and their dynamic characteristics, such as amplitude and temporal signalling order – that serve each type of communicative function (see Fig. 2 for the methodological approach).

We find that, in each culture, individual face movements such as smiling, eye widening, or scowling each serve a specific communicative function of transmitting emotion category and/or intensity information. Cross-cultural comparisons showed that certain face movements serve a similar communicative function across cultures – for example, Upper Lid Raiser (AU5) serves primarily as an emotion classifier with occasional use as an intensifier – while others serve different functions across cultures – for example, Mouth Stretch (AU27) primarily serves as an emotion classifier for East Asian participants and an intensifier for Western participants (see Fig. 3). An analysis of the temporal ordering of classifier and intensifier face movements shows that, in each culture, they are temporally distinct, with intensifier face movements peaking earlier or later than classifiers. Together, our results reveal for the first time how facial expressions, as a complex dynamical signalling system, transmit multi-layered emotion messages. Our results therefore provide new insights into the longstanding goal of deciphering the language of human facial expressions3,4,32-35.

Results

Identifying face movements that communicate emotion categories and intensity. To identify the specific face movements that serve each communicative function – emotion classifier, intensifier, or dual classifier and intensifier – we used a data-driven approach that agnostically generates face movements and tests them against subjective human cultural perception36. We then measured the statistical relationship between the dynamic face movements – i.e., Action Units (AUs) – presented on each trial and the participants’ response using an information-theoretic analysis37. Fig. 2 operationalizes our hypothesis and illustrates our methodological approach with the six classic emotions – happy, surprise, fear, disgust, anger and sad.


Fig. 2 | Data-driven modelling of the dynamic face movements that transmit emotion category and intensity information in the six classic emotions. a, Transmitting. On each experimental trial, a dynamic facial movement generator36 randomly selected a sub-set of individual face movements called Action Units31 (AUs) from a core set of 42 AUs (here, Cheek Raiser – AU6, Lip Corner Puller – AU12, Lips Part – AU25; see labels to the left). A random movement was then assigned to each AU individually using random values for each of six temporal parameters (onset latency, acceleration, peak amplitude, peak latency, deceleration, and offset latency; see labels illustrating the solid black curve). The randomly activated AUs were then combined to produce a photo-realistic facial animation, shown here as four snapshots across time. The face movement vector at the bottom shows the three AUs randomly selected on this example trial. b, Decoding. The receiver viewed the facial animation and classified it according to one of six classic emotions – happy, surprise, fear, disgust, anger or sad – and rated its intensity on a 5-point scale from very weak to very strong (the response here is ‘happy,’ ‘strong,’ shown in blue). Otherwise, the receiver selected ‘other’. Sixty Western and 60 East Asian participants each completed 2,400 such trials, with all facial animations displayed on same-ethnicity male and female face identities.


On each experimental trial, we generated a random facial animation using a dynamic face movement generator36 that randomly selected a sub-set of individual face movements, called Action Units31 (AUs; minimum of 1 AU, maximum of 5 AUs), and assigned a random movement to each AU individually using six temporal parameters (onset, acceleration, peak amplitude, peak latency, deceleration, and offset; see the labels illustrating the black solid curve in Fig. 2a; full details are provided in the Methods). For example, in the trial shown in Fig. 2a, three AUs are randomly selected – Cheek Raiser (AU6), Lip Corner Puller (AU12), and Lips Part (AU25) – and each is activated by a random movement (Fig. 2a, see solid, dotted or dashed curves representing each AU). The dynamic AUs are then combined to produce a photo-realistic facial animation, shown as four snapshots across time (see Fig. 2a). The receiver viewed the random facial animation and classified it according to one of the six classic emotions – happy, surprise, fear, disgust, anger, or sad – and rated its intensity on a 5-point scale from very weak to very strong. For example, in Fig. 2b, on this trial the receiver perceived this combination of AUs – Cheek Raiser (AU6), Lip Corner Puller (AU12), and Lips Part (AU25), each with a dynamic pattern – as ‘happy’ at ‘strong’ intensity. If the receiver did not perceive any of the six emotions from the facial animation, they selected ‘other’. Sixty Western receivers (white European, 31 females, mean age = 22 years, SD = 1.71 years) and 60 East Asian receivers (Chinese, 24 females, mean age = 23 years, SD = 1.55 years; see full details in Methods, under ‘Participants’) each completed 2,400 such trials with all facial animations displayed on same-ethnicity male and female faces (full details are provided in Methods, under ‘Stimuli and procedure’).

Using this procedure, we therefore captured on each experimental trial the dynamic face movement patterns that elicited a given emotion category and intensity perception in the receiver – e.g., ‘happy’, ‘strong’ intensity; ‘sad’, ‘low’ intensity, and so forth (see Fig. 2b, highlighted in blue). After many such trials, we were then able to build the statistical relationship between the face movements presented on each trial and the receiver’s perceptions, to produce a statistically robust model of the face movement patterns that transmit emotion category and emotion intensity information to each receiver. As illustrated above, the strength of this data-driven approach is that it can objectively and precisely characterize the face movements that receivers use to classify emotions and to judge their intensity. This agnostic approach to generating face movements and testing them against subjective perception is therefore less constrained than theory-focused methods, and can instead extract the communicative functions of face movements directly from the receiver’s implicit knowledge29,38,39.

Following the experimental trials, we identified – for each receiver – the individual face movements (i.e., AUs) that corresponded to each of the three communicative functions depicted in the Venn diagram in Fig. 1: emotion classifiers (colour-coded in green), intensifiers (red), and classifiers and intensifiers (orange). To do so, we measured the strength of the statistical relationship between each individual face movement and the receiver’s emotion classification and intensity responses using Mutual Information37,40 (MI). MI is the most general measure of the statistical relationship between two variables that makes no assumption about the distribution of the variables or the nature of their relationship37 (e.g., linear, nonlinear). For each culture separately, we computed MI as follows (an illustrative code sketch of this analysis is shown after the list below).

1. Classifier face movements (green set). To identify classifier AUs, we computed the MI between each AU (present vs. absent on each trial) and each receiver’s emotion classification responses (e.g., individual trials categorized as ‘happy’ vs. those that were not). To establish statistical significance, we used a non-parametric permutation test, which derived a chance level distribution for each receiver by randomly shuffling their responses. We then identified the AUs with statistically significant MI values, controlling the family-wise error rate (FWER) over AUs with the method of maximum statistics (FWER P < 0.05 within receiver test). Significant MI values indicate a strong relationship between a specific facial movement (e.g., Lip Corner Puller, AU12) and the classification of an emotion (e.g., ‘happy’; full details in Methods, under ‘Characterizing the communicative function of face movements’).

2. Intensifier face movements (red set). To identify intensifier AUs, we used all trials associated with a given emotion classification response as described in (1) – for example, all ‘anger’ trials – and identified within those trials the specific AUs that intensify that emotion. Specifically, we computed across trials the MI between each AU (present vs. absent on each trial) and the receiver’s corresponding intensity ratings (i.e., low vs. high). We established statistical significance using the same permutation test and family-wise error rate method as described in (1) above and identified the AUs with statistically significant MI values (FWER P < 0.05). Significant MI values indicate a strong relationship between the face movement (e.g., Upper Lid Raiser, AU5) and the perceived intensity of the emotion (e.g., ‘high intensity’ anger; full details in Methods, under ‘Characterizing the communicative function of face movements’).

3. Classifier & Intensifier face movements (orange intersection). These AUs, which serve a dual role, have significant MI values for both emotion classification (green set) and intensification (red set).
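To make this analysis concrete, the following sketch (in Python) shows one way to estimate the discrete MI between a binary AU-presence variable and a binary classification response, and to derive the maximum-statistics permutation threshold over AUs. It is a minimal illustration under our own assumptions: the function and variable names (discrete_mi, classifier_aus, au_present, is_happy) are ours, and the published analyses used the information-theoretic framework of ref. 37 applied per receiver to the full trial structure described in the Methods.

```python
import numpy as np

def discrete_mi(x, y):
    """Mutual information (bits) between two discrete 1-D arrays."""
    xi = np.unique(x, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    for i, j in zip(xi, yi):
        joint[i, j] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

def classifier_aus(au_present, is_happy, n_perm=1000, fwer_alpha=0.05, rng=None):
    """Flag AUs whose MI with the 'happy vs. not' response exceeds a
    maximum-statistics permutation threshold (FWER controlled over AUs).

    au_present : (n_trials, n_aus) binary array, AU on/off per trial
    is_happy   : (n_trials,) binary array, trial classified as 'happy' or not
    """
    rng = np.random.default_rng() if rng is None else rng
    n_aus = au_present.shape[1]
    observed = np.array([discrete_mi(au_present[:, a], is_happy) for a in range(n_aus)])
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        shuffled = rng.permutation(is_happy)        # break the AU-response relationship
        max_null[p] = max(discrete_mi(au_present[:, a], shuffled) for a in range(n_aus))
    threshold = np.percentile(max_null, 100 * (1 - fwer_alpha))  # 95th percentile of max MI
    return observed > threshold                     # boolean mask of significant AUs
```

The same routine, run with the re-binned low/high intensity ratings in place of the classification responses, would correspond to the intensifier analysis described in (2).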

We applied the above analysis to the data of each individual receiver. Finally, we computed the population prevalence of each of the above statistical effects. This indicates the proportion of the population from which the sample of experimental participants was drawn that would be expected to show the same effect, if subjected to the same experimental procedure. Inferring a non-zero population prevalence41,42 at P = 0.05 (Bonferroni corrected over emotions) corresponds to a significance threshold of N > 10 receivers showing a significant result in each culture (full details in Methods, under ‘Characterizing the communicative function of face movements, Population prevalence’). That is, 10 out of our 60 receivers showing an effect provides enough evidence to reject a null hypothesis that the population prevalence proportion is 0 (i.e., no receiver in the population would show the effect). Fig. 3a and b show these results.



Fig. 3 | Face movements that transmit emotion category and/or intensity information in the six classic emotions in each culture. a, Each row of colour-coded faces shows, for each culture – Western and East Asian – and for each of the six classic emotions, the face movements (i.e., AUs) receivers used to classify the emotion (‘Classify,’ in green), perceive its intensity (‘Intensify,’ in red) or both (‘Classify & Intensify,’ in orange; see Venn diagram). Colour saturation shows the number of statistically significant receivers for each AU (FWER P < 0.05 within receiver test; see colour bar to right, normalized per emotion). b, Results above presented in tabular format. Only AUs with >10 significant receivers are shown, as this corresponds to rejecting the population null hypothesis of zero prevalence (P < 0.05, Bonferroni corrected). For example, Sharp Lip Puller (AU13) is a classifier (green) in happy in 42/60 Western receivers and 51/60 East Asian receivers. We repeated this analysis with a broad set of complex facial expressions of emotion in each culture (Supplementary Fig. S1 shows the full results). The colour-coded matrix on the right (‘Classic & complex’) shows the communicative function of each AU across these two data sets. Colour saturation shows the proportion of emotion categories in which the AU serves a given communicative function, ranked from highest to lowest (left to right). For example, Inner Brow Raiser (AU2) is exclusively a classifier (green) in each culture and is thus represented by a fully saturated green cell in each culture. Cross-cultural comparison of the communicative functions of the AUs showed cross-cultural differences (denoted by black dots) and similarities (no marking). c, Colour-coded face maps show the AUs that serve the same or different communicative function across cultures. The list of AUs is shown next to each face.

In Fig. 3a, each row of colour-coded faces shows the face movements that serve each type of communicative function: emotion classification (‘Classify,’ in green), intensification (‘Intensify,’ in red) and classification and intensification (‘Classify and Intensify,’ in orange; see Methods, under ‘Characterizing the communicative function of face movements’). Colour saturation represents the number of receivers with a significant effect (FWER P < 0.05 within receiver test; see colour bar to the right of Fig. 3a) above the population prevalence41,42 threshold (see Methods, under ‘Characterizing the communicative function of face movements’). Fig. 3b shows each of these face movements separately, for each emotion, colour-coded by communicative function, and with the number of receivers showing this effect. For example, Lip Corner Puller-Cheek Raiser (AU12-6) in happy serves as a classifier (green) in 34/60 receivers and as a classifier and intensifier (orange) in 24/60 receivers43. Mouth Stretch (AU27) serves as a classifier and intensifier (orange) in surprise and fear in 41/60 and 12/60 receivers, respectively, and as an intensifier (red) in anger in 15/60 receivers.

To validate the findings reported for the six classic emotions, we applied the same analyses to a second set of facial expressions of 50 more complex emotions in each culture, including excited, embarrassed, anxious, and hate in Western culture and /amazed, /shame, /anguish, and /dismay in East Asian culture (see full details in Supplementary Fig. S1), acquired using the same methodology44 as illustrated in Fig. 2. As for the six classic emotions, we first identified the face movements that serve each type of communicative function – classifier, intensifier, and classifier and intensifier – for each emotion and culture separately, using trials pooled across receivers to ensure high statistical power (see full details in Supplementary Methods, under ‘Characterizing the communicative function of face movements in complex emotions’). To determine the primary communicative function of each AU in each culture, we computed, for each AU, the ranking of the three communicative functions – e.g., whether the AU primarily serves as classifier, intensifier, or classifier & intensifier – based on the frequency of the AU’s function across all basic and complex emotions (see full details in Methods, under ‘Cross-cultural comparison of the communicative function of face movements’). In Fig. 3b, the colour-coded bars on the right (i.e., under ‘Classic & complex’) show the ranking of each AU’s communicative function (i.e., the colour-coded cells from left to right in each row indicate the most to least frequent communicative function of each AU), with colour saturation showing the proportion of emotion categories in which the AU serves a given communicative function.

Cross-cultural comparison of the communicative function of face movements


With these results we have demonstrated the communicative function of face movements in the classification and intensification of emotions, across a broad range of facial expressions of the six classic emotions and the complex emotions in each culture. Next, to examine whether the face movements serve the same or different communicative function across the two cultures, we conducted a cross-cultural comparison of the results presented in Fig. 3b. For each communicative function (i.e., classifier, intensifier or classifier & intensifier) separately, we identified the AUs with the same ranking across the two cultures and those with different rankings (see full details in Methods, under ‘Cross-cultural comparison of the communicative function of face movements’). For example, Upper Lid Raiser (AU5) primarily serves as a classifier (i.e., 1st ranking as a classifier) in both cultures, whereas Mouth Stretch (AU27) primarily serves as a classifier & intensifier for Western receivers and as a classifier for East Asian receivers. Broad smiles (i.e., Lips Part-Lip Corner Puller, AU25-12) more frequently serve as an intensifier for East Asian receivers (2nd ranking) and less so for Western receivers (3rd ranking). In Fig. 3b, black dots indicate the AUs that serve different communicative functions across cultures. Colour-coded face maps in Fig. 3c show the AUs that serve the same or different communicative functions across cultures.

Distinct temporal signatures of classifier and intensifier face movements

Given the different roles that classification and intensification play in communication – for example, in transmitting a broad, attention-grabbing message versus one that enables more fine-grained classifications – we might expect classifier and intensifier face movements to have different temporal signatures that could assist with signal decoding24,45.

To examine this, we analysed for each culture separately the temporal signatures of the classifier and intensifier face movements across the six classic and 50 complex emotions, henceforth referred to as facial expressions of emotion. We applied our analysis to face movements that exclusively serve the communicative function of emotion classification (indicated by fully saturated green cells under ‘Classic & complex’ in Fig. 3b) and those that serve emotion intensification (indicated by the orange/red-coded cells under ‘Classic & complex’ in Fig. 3b). For each of these classifier and intensifier face movements (displayed as face maps in Fig. 4), and for each emotion separately, we computed the average time point at which the AU reached its peak amplitude across trials rated as high intensity by receivers.

In Fig. 4, a coloured point represents one trial, with vertical bars representing the mean time of peak AU amplitude across trials of the classic emotions and the complex emotions (see Supplementary Figs. S3 and S5 for each emotion separately). Colour-coded vertical bars indicate the AUs that, on average, peaked earlier or later than classifier AUs in each culture (two-sample t-test, two-tailed P < 0.05 for each emotion, classifier AUs pooled across emotions; see Methods, under ‘Analysis of the temporal dynamics of classifier and intensifier face movements’ and Supplementary Figs. S2 and S4 for individual classifier AUs in each culture). For example, Upper Lid Raiser (AU5), colour-coded in red, peaks earlier than classifier AUs in happy, but later in scared. In comparison, Lip Corner Puller-Cheek Raiser (AU12-6), colour-coded in orange, peaks earlier in embarrassment but later in happy; Nose Wrinkler (AU9), colour-coded in orange, peaks earlier than classifier AUs in anger and disgust, but later in fury. We found similar patterns in the East Asian receivers – for example, Upper Lid Raiser (AU5), colour-coded in red, peaks later than the classifier AUs in /scared; Lips Part-Lip Corner Puller (AU25-12), colour-coded in red, peaks later than the classifier AUs in /feel well; Nose Wrinkler (AU9), colour-coded in orange, peaks earlier than the classifier AUs in /anger and /disgust, but later in /indignant. Fig. 4 therefore demonstrates that intensifier and classifier face movements have distinct temporal signatures, with intensifiers peaking earlier or later than classifier face movements in each culture (see Supplementary Figs. S3 and S5 for the full details of individual intensifier AUs in each emotion).



Fig. 4 | Temporal signatures of emotion classifier and intensifier face movements. In each row, colour-coded points show, for each face movement (i.e., Action Unit; see labels and face maps on left), when in time it reached its peak amplitude on individual trials rated as ‘high intensity’ by receivers, pooled across the classic and complex emotions (see Supplementary Figs. S2-S5 for each emotion separately). Vertical bars show the mean AU peak time per emotion (see emotion labels on left); colour saturation of points represents the difference from the mean; black horizontal bars represent standard errors of the means (SEM; see key, bottom right; mean AU peak time and SEM are shown on the right). AUs are colour-coded by communicative function, with ‘Intensifier’ (red) and ‘Classifier and Intensifier’ (orange) shown separately; ‘Classifier’ AUs (green) are pooled (see Methods, under ‘Analysis of the temporal dynamics of classifier and intensifier face movements’; Supplementary Figs. S2 and S4). Colour-coded vertical bars show the intensifier AUs that are temporally distinct from classifiers (two-sample t-tests, two-tailed P < 0.05, Bonferroni corrected over AUs and emotions); black bars indicate no significant difference (see Supplementary Figs. S3 and S5 for full data). For example, Nose Wrinkler (AU9), coloured in red, peaks earlier in anger than do classifier AUs; Mouth Stretch (AU27), coloured in orange, peaks earlier in fear and surprise and later in delighted. We found similar patterns in the East Asian results – for example, Nose Wrinkler (AU9), colour-coded in orange, peaks earlier than the classifier AUs in /disgust, but later in /indignant; Mouth Stretch (AU27), colour-coded in red, peaks earlier than the classifier AUs in /anxiety. This comparison of intensifier and classifier face movements thus indicates that the signalling of emotion category and intensity information has a temporal structure in each culture.

Discussion

In this study, we set out to understand how facial expressions achieve the complex task of dynamically transmitting the multi-layered social messages of emotions. Using a data-driven, reverse correlation approach and an information-theoretic analysis framework, we have identified, in two distinct cultures, the specific face movements that convey two key elements of emotion communication – emotion classification (e.g., ‘happy’, ‘sad’) and intensification (e.g., ‘very strong’). In 60 East Asian and 60 Western receivers, we showed, across a broad set of facial expressions of basic and complex emotions, that individual face movements (AUs) frequently serve different communicative functions. These functions fall into three main categories: those used to classify the emotion (classifiers), those used to perceive emotional intensity (intensifiers), and those that serve the dual role of classifier and intensifier (classifier+intensifier). Cross-cultural comparisons of their functions showed clear cultural similarities – e.g., the wide-opened eyes (AU5) frequently serve as an intensifier in both cultures – as well as distinct cultural differences – e.g., the wide gaping mouth (AU27) tends to serve as a classifier in East Asian culture and an intensifier in Western culture. Further, analysis of the timing of these emotion classifier and intensifier face movements revealed that, in each culture, they are temporally distinct, with intensifiers peaking earlier or later than classifiers. This temporal pattern thus decouples the transmission of emotion category and intensification information over time – a pattern we found in both cultures. Together, our results reveal a new complexity in the signalling system of facial expressions, in which face movements serve specific communicative functions with a universal temporal structuring over time.

Specificity of face movements in serving the communicative functions of emotion classification or intensification. In accordance with the inherent visual salience and attention-grabbing properties of face movements46,47 – examples of which include showing the whites of the eyes in signalling fear48 and a wide-open mouth in signalling surprise49 – coupled with the co-evolved sensitivities of the brain and the effects of ritualization, one might expect specific face movements to serve a specific communicative function across all emotions. Indeed, there are benefits to be gained from a system in which specific face movements have a fixed meaning, such as efficient and clear communication with little to no ambiguity. However, such a system would also limit the ability of the human face to transmit the large number of complex and nuanced messages that are required for social interactions. Therefore, while some face movements might have a specific “leading” role in signalling, whereby the message they convey is fixed – for example, positive or negative affect – other face movements could play a more versatile “supporting” role, whereby the messages they convey are more flexible and varied. For example, as shown in Figure 3 and Supplementary Figure S1, the wide-opened eyes – Upper Lid Raiser (AU5) – serve as a classifier for several emotions in each culture (e.g., fear, surprise, ecstatic and fury for Western receivers; /fear, /surprise, /anger, /delighted and /rage for East Asian receivers) and as an intensifier for other emotions (e.g., happy, anger, contempt and embarrassed for Western receivers; /scared, /alarmed and panicky for East Asian receivers). This relative flexibility highlights the key role of combinatorics in the generation of facial expressions; in a combinatorial system, different “supporting” face movements could convey different messages in the context of other “leading” face movements.

Understanding the temporal structure of emotion communication. A key advantage of data-driven studies is their ability to surprise with unexpected results38. In this study, for example, we observed not only that some face movements intensify earlier or later than certain classifier movements, but also that the same face movement (e.g., Upper Lid Raiser, AU5) could intensify earlier and later than classifier movements across different emotions. This raises the question as to whether ‘early intensifier’ face movements might serve a slightly different role than ‘later intensifier’ movements. For example, earlier intensifiers (e.g., Upper Lid Raiser, AU5; Nose Wrinkler, AU9) could also serve to transmit broad information, such as negative affect, early in the dynamics of facial signalling24, with later face movements refining the message as a specific emotion category (e.g., anger). Similarly, later intensifiers might refine the emotion category message to support adaptive action – for example, refining anger as fury. To understand the mechanisms of signal intensification at this finer scale, further empirical work is needed, including on models of attention and facial expression decoding, and on the dynamic implementation of these information-processing mechanisms in the brain.


Relatedly, the temporal decoupling of intensifier and classifier face movements has important implications for framing theories of emotion processing in relation to models of visual attention. Specifically, the visual system is a bandwidth-limited information channel, in which attention (covert and overt) “guides” the accrual of task-relevant stimulus information for categorical decisions50,51. Our results suggest deployment of attention in space (i.e., different face movements) and time (read out at different time points) to guide the accrual of face movement information from the face. By contrast, events might “grab” a receiver’s attention and interrupt this accrual52. For example, the Upper Lid Raiser (AU5) is known to grab attention in this way48,53 and to be dynamically represented in the occipito-ventral pathway before other face features – e.g., the eyes of an expressive face are represented before the wrinkled nose and later the smiling mouth54,55. Decoupling intensifier and classifier face movements over time might reflect an adaptation between the sender’s transmission of a multi-layered emotion message and the receiver’s bandwidth-limited information channel that needs to accrue dynamic visual information54,55. Whether and how this temporally distinct representation of expressive face features changes with dynamic stimuli is now the focus of ongoing studies.

In sum, we set out to understand how facial expressions perform the complex task of transmitting multi-layered emotion messages by focusing on two key components of emotion communication – emotion classification and intensification. Using a data-driven method and an information-theoretic analysis framework, we identified, in 60 Western receivers and 60 East Asian receivers and across a large set of facial expressions of basic and complex emotions, three communicative functions of face movements: those used to classify the emotion (classifiers), those used to perceive emotional intensity (intensifiers), and those serving the dual role of classifier and intensifier. Analysis of the timing of these face movements showed that intensifier and classifier face movements are temporally distinct, with intensifiers peaking earlier or later than classifiers. Together, our results reveal the complexities of facial expressions as a dynamical signalling system and show how they can perform the signalling task of transmitting multi-layered emotion messages. Our results therefore provide new insights into the formal language of facial expressions and, in turn, by virtue of our perception-based approach, raise new questions about how the visual system and brain parse this complex information.

Methods

Participants. We recruited a total of 120 participants – 60 Western Europeans (31 females, mean age = 22 years, SD = 1.71 years) and 60 East Asians (Chinese, 24 females, mean age = 23 years, SD = 1.55 years) – for the reverse correlation experiment. To control for the possibility that the participants’ perception of facial expressions could have been influenced by cross-cultural interactions, we recruited participants with minimal exposure to and engagement with other cultures as assessed by questionnaire (Supplementary Methods, Screening Questionnaire). We recruited all Western participants in the UK and collected the data at the University of Glasgow. We recruited all East Asian participants in China and collected the data at the University of Electronic Science and Technology of China using the same experimental settings. All East Asian participants had a minimum International English Language Testing System score of 6.0 (competent user). All participants had normal or corrected-to-normal vision and were free of any emotion-related atypicalities (autism spectrum disorder, depression, anxiety), learning difficulties (e.g., dyslexia), synaesthesia, and disorders of face perception (e.g., prosopagnosia) as per self-report. We obtained each participant’s written informed consent before testing and paid them £6 per hour for their participation. The University of Glasgow and the University of Electronic Science and Technology of China authorized the experimental protocol (reference ID 300160203).

Stimuli and procedure. On each experimental trial, a dynamic face movement generator36 randomly selected a set of individual face movements called Action Units31 (AUs) from a core set of 41 AUs using a binomial distribution (i.e., minimum = 1, maximum = 5 on each trial, median = 3 AUs across trials). A random movement was then applied to each AU separately using random values selected for each of six temporal parameters (onset latency, acceleration, peak amplitude, peak latency, deceleration, and offset latency; see labels illustrating the solid black curve in Fig. 2). These dynamic AUs were then combined to produce a photo-realistic facial animation. The participant – the receiver – viewed the random facial animation and categorized it according to one of the six classic emotion categories – i.e., ‘happy’, ‘surprise’, ‘fear’, ‘disgust’, ‘anger’ or ‘sad’ – and rated its intensity on a 5-point scale from ‘very weak’ to ‘very strong’. If the participant did not perceive any of the six emotions, they selected ‘other’. Each participant completed 2,400 such trials, displayed on 8 face identities of the same race (8 Westerners: 4 males, 4 females, mean age = 23 years, SD = 4.10 years; 8 Chinese: 4 males, 4 females, mean age = 22.1 years, SD = 0.99 years).
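The published generator36 synthesizes photo-realistic animations; as a purely illustrative sketch of the parameterization itself, the code below shows one plausible way to turn six randomly sampled temporal parameters into a single AU’s activation curve, using two shaped half-cosine ramps. The ramp shapes, parameter ranges and names are our assumptions and are not taken from the generator’s implementation.

```python
import numpy as np

def au_time_course(onset, acceleration, peak_amp, peak_latency,
                   deceleration, offset, duration=1.25, fps=30):
    """Illustrative activation curve (0 to peak_amp) for one Action Unit.

    Latencies are in seconds; acceleration/deceleration act here as shape
    exponents that skew the rise and fall profiles. This is a sketch of the
    idea only, not the published generator.
    """
    t = np.linspace(0.0, duration, int(duration * fps))
    amp = np.zeros_like(t)

    rise = (t >= onset) & (t <= peak_latency)                 # onset -> peak
    x = (t[rise] - onset) / max(peak_latency - onset, 1e-6)
    amp[rise] = peak_amp * (0.5 - 0.5 * np.cos(np.pi * x)) ** acceleration

    fall = (t > peak_latency) & (t <= offset)                 # peak -> offset
    x = (t[fall] - peak_latency) / max(offset - peak_latency, 1e-6)
    amp[fall] = peak_amp * (0.5 + 0.5 * np.cos(np.pi * x)) ** deceleration
    return t, amp

# Example: one randomly parameterized AU within a 1.25 s animation (illustrative ranges)
rng = np.random.default_rng(0)
onset = rng.uniform(0.0, 0.4)
peak_latency = rng.uniform(onset + 0.1, 1.0)
offset = rng.uniform(peak_latency + 0.1, 1.25)
t, amp = au_time_course(onset, rng.uniform(0.5, 2.0), rng.uniform(0.2, 1.0),
                        peak_latency, rng.uniform(0.5, 2.0), offset)
```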

We displayed the random facial animations on a black background using a 19-inch flat-panel Dell monitor (60 Hz refresh rate, 1024 × 1280-pixel resolution) in the centre of the participant’s visual field. Each facial animation played for 1.25 s, followed by a black screen. A chin rest maintained a constant viewing distance of 68 cm, with stimuli subtending 14.25° (vertical) × 10.08° (horizontal) of visual angle, which represents the average visual angle of a human face during typical social interaction56. Each participant completed the experiment over a series of eighteen ~20-minute sessions with a short break (~5 min) after each session and a longer break (at least 1 hour) after 3 consecutive sessions.

Characterizing the communicative function of face movements

Classifier face movements. To identify the Action Units (AUs) that participants used to classify each emotion, we computed the Mutual Information37,40 (MI) between each AU (present vs. absent on each trial) and each receiver’s emotion classification responses (e.g., the individual trials categorized as ‘happy’ vs. those that were not). To establish significance, we derived a chance level distribution for each receiver using a non-parametric permutation test in which we randomly shuffled the receiver’s responses across the 2,400 trials. We did this for 1,000 iterations, computing the MI for each AU at each iteration and retaining the maximum MI value across AUs to control the family-wise error rate over AUs57. We then used the 95th percentile of the distribution of maximum MI values over iterations as a threshold for inference (FWER P < 0.05 within receiver test).

Intensifier face movements. To identify intensifier AUs, we first extracted the individual trials that each participant categorized as a given emotion (e.g., ‘anger’) and re-binned the corresponding intensity ratings (from 1 to 5) into low and high intensity with an equal-population transformation (i.e., towards an equal number of trials in the low and high intensity bins). This is an iterative heuristic procedure that combines the smallest bin with its smallest neighbour to reduce the number of bins while maintaining as equal a sampling as possible, without splitting the trials of any original bin. We applied this procedure to each participant and emotion separately to ensure that the intensity rating bins reflected each participant’s relative judgements of intensity (i.e., low or high) of each emotion. We then computed, for each emotion and participant separately, the MI between each AU (present vs. absent on each trial) and each participant’s corresponding re-binned intensity ratings (low vs. high). To establish statistical significance, we used the same permutation test with maximum statistics as described above and identified the AUs with significant MI values (FWER P < 0.05 within receiver test).
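The re-binning step can be paraphrased as the simple greedy heuristic sketched below: repeatedly merge the smallest rating bin into its smaller adjacent neighbour until two contiguous groups of rating values remain, then label trials in the lower group ‘low’ and the rest ‘high’. Function and variable names are ours, and the sketch is only one reading of the procedure described above, not the authors’ code.

```python
from collections import Counter

def rebin_low_high(ratings):
    """Re-bin 1-5 intensity ratings into 'low'/'high' labels per trial."""
    counts = Counter(ratings)
    groups = [[r] for r in sorted(counts)]        # one group per observed rating value

    def size(group):
        return sum(counts[r] for r in group)

    while len(groups) > 2:
        i = min(range(len(groups)), key=lambda k: size(groups[k]))   # smallest group
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(groups)]
        j = min(neighbours, key=lambda k: size(groups[k]))           # smaller neighbour
        lo, hi = sorted((i, j))
        groups[lo] += groups[hi]                  # merge whole bins, never split one
        del groups[hi]

    low_values = set(groups[0])
    return ['low' if r in low_values else 'high' for r in ratings]

# e.g. rebin_low_high([1, 2, 2, 3, 3, 3, 4, 5, 5])
```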

Classifier & Intensifier face movements. These AUs comprise those that passed the statistical threshold for both classifier face movements and intensifier face movements (i.e., significant MI results in both analyses).

Population prevalence. To obtain a population inference from the within-receiver results, we use population prevalence41,42. In this approach, we model the population from which our experimental participants are sampled with a binary property: each possible participant either would, or would not, show a true positive effect in the specific test considered (i.e., of a specific AU classifying an expression). A certain proportion of the population possesses this binary effect. We then perform inference against the null hypothesis that the value of this population proportion parameter, the prevalence, is 0. With 60 participants in each culture, at P < 0.05 with Bonferroni corrections over emotions and AUs, we can reject the null hypothesis that the proportion of the population with the effect is 0 when significant within-receiver results are observed in at least 10 of the 60 participants.
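As a simplified, purely frequentist check of this threshold (our own worked example, not the inference procedure of refs 41 and 42, which is formulated in Bayesian terms): under the null of zero prevalence each participant can only show a false positive, with probability at most 0.05, so the number of significant participants is bounded by a Binomial(60, 0.05) variable, and observing 10 or more is then very unlikely.

```python
from scipy.stats import binom

n = 60               # participants per culture
alpha_within = 0.05  # within-participant FWER threshold
k_observed = 10      # participants showing a significant within-participant effect
n_corrections = 6    # illustrative Bonferroni factor (e.g., the six classic emotions)

# Under zero population prevalence, significant participants arise only as false
# positives, so their count is stochastically bounded by Binomial(n, alpha_within).
p_tail = binom.sf(k_observed - 1, n, alpha_within)   # P(X >= 10) ~= 6.5e-4

print(f"P(>= {k_observed} of {n} significant under zero prevalence) = {p_tail:.1e}")
print("Reject zero prevalence:", p_tail < alpha_within / n_corrections)  # True
```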

Cross-cultural comparison of the communicative function of face movements. To examine whether the face movements have the same or different communicative functions across cultures, we conducted a cross-cultural comparison of the results presented in Fig. 3b. First, we computed for each AU the frequency of each communicative function (i.e., classifier, intensifier, classifier & intensifier) across the classic and complex emotions in each culture separately and ranked the communicative functions by frequency. For example, for Western receivers, Upper Lid Raiser (AU5) serves as a classifier most frequently (i.e., 1st ranking), a classifier & intensifier less frequently (i.e., 2nd ranking) and an intensifier least frequently (i.e., 3rd ranking). The colour-coded matrices in Fig. 3b (under ‘Classic & complex’) show the results with the communicative functions of each AU ordered from highest to lowest (left to right). We then used these rankings to compare the communicative functions of each AU across cultures. For example, Upper Lid Raiser (AU5) is primarily a classifier (1st rank) in both cultures and thus serves a similar communicative function. In contrast, Nose Wrinkler (AU9) is primarily an intensifier (1st rank) for Western receivers but less so for East Asian receivers (2nd rank). The colour-coded face maps in Fig. 3c show the results of this cross-cultural comparison.
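The ranking step can be illustrated with the short sketch below, which counts how often each communicative function occurs across emotions for one AU in each culture and compares the resulting rank orders. The data values and names here are invented for illustration; the actual counts come from the per-emotion results in Fig. 3b and Supplementary Fig. S1.

```python
from collections import Counter

def function_ranking(function_per_emotion):
    """Order communicative functions from most to least frequent across emotions."""
    return [f for f, _ in Counter(function_per_emotion).most_common()]

# Hypothetical per-emotion function labels for one AU in each culture
western    = ['classifier', 'classifier', 'classifier & intensifier', 'classifier']
east_asian = ['classifier', 'classifier', 'classifier', 'intensifier']

rank_w, rank_ea = function_ranking(western), function_ranking(east_asian)
same_primary = rank_w[0] == rank_ea[0]       # does the 1st-ranked function match?
print(rank_w, rank_ea, same_primary)         # primary function 'classifier' in both
```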

Analysis of the temporal dynamics of classifier and intensifier face movements. To examine the temporal dynamics of the classifier and intensifier face movements in each culture, we compared their average peak latencies – i.e., when in time each AU reached its peak amplitude. First, for each culture separately, we examined whether the classifier AUs differed in their temporal dynamics by applying a one-way ANOVA to the peak latencies of all classifier AUs – i.e., those that exclusively serve as classifiers (see Supplementary Figs. S2 and S4 for the full list of these AUs in each culture). Post-hoc pairwise comparisons (Bonferroni corrected over AUs) showed that, with the exception of four AUs for Western receivers and three AUs for East Asian receivers, the classifier AUs do not differ statistically in their peak latencies (see Supplementary Figs. S2 and S4 for results). On this basis, we pooled the classifier AUs across emotions for further analysis. We then compared the temporal dynamics of each intensifier AU with the pooled classifier AUs by applying a two-sample t-test to the intensifier and classifier AUs’ average peak latencies for each emotion separately. Fig. 4 shows these results, with each AU represented in a separate row and colour-coded according to its communicative function as before. Colour-coded circles represent the time at which the AU reached its peak amplitude on individual trials rated as high intensity for a given emotion. Colour-coded vertical lines and the emotion labels indicate the intensifier AUs with peak latencies that are significantly different from the pooled classifier AUs (two-tailed P < 0.05, Bonferroni corrected over AUs and emotions); black vertical lines indicate those emotions where the intensifier AUs do not temporally differ from the classifier AUs (see full details of these emotions in Supplementary Figs. S3 and S5, where we show the data displayed on separate rows for each AU and emotion). We repeated this analysis with and without the four classifier AUs that are temporally distinct from the other classifier AUs as described above; the results did not differ.
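One plausible reading of this comparison is sketched below: for a given emotion, the peak latencies of one intensifier AU (over high-intensity trials) are compared against the pooled classifier-AU peak latencies with a two-sample t-test, and the two-tailed p-value is Bonferroni corrected over the number of AU-by-emotion tests. The function, variable names and example data are ours; the published analysis is the one described in the text above.

```python
import numpy as np
from scipy.stats import ttest_ind

def intensifier_vs_classifiers(intensifier_peaks, classifier_peaks, n_tests):
    """Two-sample t-test on AU peak latencies (s), Bonferroni corrected over n_tests.

    intensifier_peaks : peak latencies of one intensifier AU for one emotion
    classifier_peaks  : pooled peak latencies of the classifier-only AUs
    """
    t_stat, p_two_tailed = ttest_ind(intensifier_peaks, classifier_peaks)
    p_corrected = min(p_two_tailed * n_tests, 1.0)
    earlier = np.mean(intensifier_peaks) < np.mean(classifier_peaks)
    return t_stat, p_corrected, ('earlier' if earlier else 'later')

# Hypothetical example: an intensifier AU that peaks earlier than the classifiers
rng = np.random.default_rng(1)
t, p, direction = intensifier_vs_classifiers(rng.normal(0.45, 0.10, 200),
                                             rng.normal(0.60, 0.12, 800), n_tests=50)
print(f"t = {t:.2f}, corrected p = {p:.3g}, intensifier peaks {direction}")
```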


References

1 Messinger, D. S., Mattson, W. I., Mahoor, M. H. & Cohn, J. F. The eyes have it: Making positive expressions more positive and negative expressions more negative. Emotion 12, 430 (2012).
2 Frijda, N. H., Ortony, A., Sonnemans, J. & Clore, G. L. The complexity of intensity: Issues concerning the structure of emotion intensity. (1992).
3 Darwin, C. The Expression of the Emotions in Man and Animals. 3rd edn (Fontana Press, 1999/1872).
4 Duchenne, G.-B. & de Boulogne, G.-B. D. The mechanism of human facial expression. (Cambridge University Press, 1990/1862).
5 Ekman, P., Davidson, R. J. & Friesen, W. V. The Duchenne smile: Emotional expression and brain physiology: II. J. Pers. Soc. Psychol. 58, 342 (1990).
6 Hess, U., Blairy, S. & Kleck, R. E. The intensity of emotional facial expressions and decoding accuracy. J. Nonverbal. Behav. 21, 241-257 (1997).
7 Andrew, R. J. Evolution of facial expression. Science 142, 1034-1041 (1963).
8 Preuschoft, S. “Laughter” and “smile” in Barbary macaques (Macaca sylvanus). Ethology 91, 220-236 (1992).
9 Gamble, J. Humor in apes. Humor 14, 163-179 (2001).
10 Rychlowska, M. et al. Functional smiles: tools for love, sympathy, and war. Psychol. Sci. 28, 1259-1270 (2017).
11 Mehu, M., Grammer, K. & Dunbar, R. I. Smiles when sharing. Evol. Hum. Behav. 28, 415-422 (2007).
12 Messinger, D. S., Fogel, A. & Dickson, K. L. All smiles are positive, but some smiles are more positive than others. Dev. Psychol. 37, 642 (2001).


13 Krys, K. et al. Be careful where you smile: culture shapes judgments of intelligence and honesty of smiling individuals. J. Nonverbal. Behav. 40, 101-116 (2016).
14 Tsai, J. L. et al. Cultural variation in social judgments of smiles: The role of ideal affect. J. Pers. Soc. Psychol. 116, 966 (2019).
15 Yoon, K. L., Joormann, J. & Gotlib, I. H. Judging the intensity of facial expressions of emotion: Depression-related biases in the processing of positive affect. J. Abnorm. Psychol. 118, 223 (2009).
16 Ortony, A., Clore, G. & Collins, A. The Cognitive Structure of Emotions. (Cambridge University Press, 1988).
17 Matsumoto, D. & Ekman, P. American-Japanese cultural differences in intensity ratings of facial expressions of emotion. Motiv. Emot. 13, 143-157 (1989).
18 Juslin, P. N. & Laukka, P. Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion. Emotion 1, 381 (2001).
19 Douglas-Cowie, E. et al. in International Conference on Affective Computing and Intelligent Interaction. 488-500 (Springer).
20 Monkul, E. S. et al. A social cognitive approach to emotional intensity judgment deficits in schizophrenia. Schizophr. Res. 94, 245-252 (2007).
21 Smith, F. W. & Schyns, P. G. Smile through your fear and sadness: transmitting and identifying facial expression signals over a range of viewing distances. Psychol. Sci. 20, 1202-1208 (2009).
22 Schleidt, W. M. Tonic communication: continual effects of discrete signs in animal communication systems. J. Theor. Biol. 42, 359-386 (1973).
23 Wilson, E. O. Animal communication. Sci. Am. 227, 52-63 (1972).


24 Jack, R. E., Garrod, O. G. & Schyns, P. G. Dynamic Facial Expressions of Emotion Transmit an Evolving Hierarchy of Signals over Time. Curr. Biol. 24, 187-192, doi:10.1016/j.cub.2013.11.064 (2014).
25 Cott, H. B. Adaptive coloration in animals. (Oxford University Press, 1940).
26 Senju, A. & Johnson, M. H. Is eye contact the key to the social brain? Behav. Brain Sci. 33, 458-459 (2010).
27 Jack, R. E., Garrod, O. G., Yu, H., Caldara, R. & Schyns, P. G. Facial expressions of emotion are not culturally universal. Proc. Natl. Acad. Sci. 109, 7241-7244, doi:10.1073/pnas.1200155109 (2012).
28 Elfenbein, H. A. Nonverbal Dialects and Accents in Facial Expressions of Emotion. Emot. Rev. 5, 90-96, doi:10.1177/1754073912451332 (2013).
29 Jack, R. E., Crivelli, C. & Wheatley, T. Data-driven methods to diversify knowledge of human psychology. Trends Cogn. Sci. 22, 1-5 (2018).
30 Henrich, J., Heine, S. & Norenzayan, A. The weirdest people in the world? Behav. Brain Sci. 33, 61-83 (2010).
31 Ekman, P. & Friesen, W. V. Facial Action Coding System: Investigator's Guide. (Consulting Psychologists Press, 1978).
32 Ekman, P. Darwin and facial expression: A century of research in review. (The Institute for the Study of Human Knowledge, 2006).
33 Russell, J. A. & Fernández-Dols, J. M. The psychology of facial expression. (Cambridge University Press, 1997).
34 Fridlund, A. J. Human facial expression: An evolutionary view. (Academic Press, 2014).


35 Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M. & Pollak, S. D. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychol. Sci. Publ. Int. 20, 1-68 (2019).
36 Yu, H., Garrod, O. G. B. & Schyns, P. G. Perception-driven facial expression synthesis. Comput. Graph. 36, 152-162 (2012).
37 Ince, R. A. et al. A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula. Hum. Brain Mapp. 38, 1541-1573 (2017).
38 Jack, R. E. & Schyns, P. G. Toward a social psychophysics of face communication. Annu. Rev. Psychol. 68, 269-297 (2017).
39 Jack, R. E. & Schyns, P. G. The Human Face as a Dynamic Tool for Social Communication. Curr. Biol. 25, R621-R634 (2015).
40 Cover, T. M. & Thomas, J. A. Elements of information theory. (John Wiley & Sons, 2012).
41 Donhauser, P. W., Florin, E. & Baillet, S. Imaging of neural oscillations with embedded inferential and group prevalence statistics. PLoS Comput. Biol. 14, e1005990 (2018).
42 Ince, R. A., Kay, J. W. & Schyns, P. G. Bayesian inference of population prevalence. Preprint at https://doi.org/10.1101/2020.07.08.191106 (2020).
43 Girard, J. M., Cohn, J. F., Yin, L. & Morency, L.-P. Reconsidering the Duchenne Smile: Examining the Relationships between the Duchenne Marker, Smile Intensity, and Positive Emotion. Preprint at https://psyarxiv.com/397af/ (2020).
44 Jack, R. E., Sun, W., Delis, I., Garrod, O. & Schyns, P. Four Not Six: Revealing Culturally Common Facial Expressions of Emotion. J. Exp. Psychol. Gen. 145, 708-730 (2016).


45 Delis, I., Jack, R., Garrod, O., Panzeri, S. & Schyns, P. Characterizing the Manifolds of Dynamic Facial Expression Categorization. J. Vision 14, 1384-1384 (2014).
46 Bradbury, J. W. & Vehrencamp, S. L. Principles of animal communication. (1998).
47 Hutton, P., Seymoure, B. M., McGraw, K. J., Ligon, R. A. & Simpson, R. K. Dynamic color communication. Curr. Opin. Behav. Sci. 6, 41-49 (2015).
48 Whalen, P. J. et al. Human amygdala responsivity to masked fearful eye whites. Science 306, 2061 (2004).
49 Kim, M. J. et al. Human amygdala tracks a feature-based valence signal embedded within the facial expression of surprise. J. Neurosci. 37, 9510-9518 (2017).
50 Broadbent, D. E. Perception and communication. (Elsevier, 1958/2013).
51 Lachter, J., Forster, K. I. & Ruthruff, E. Forty-five years after Broadbent (1958): still no identification without attention. Psychol. Rev. 111, 880 (2004).
52 Pratto, F. & John, O. P. Automatic vigilance: the attention-grabbing power of negative social information. J. Pers. Soc. Psychol. 61, 380 (1991).
53 Jessen, S. & Grossmann, T. Unconscious discrimination of social cues from eye whites in infants. Proc. Natl. Acad. Sci. 111, 16208-16213 (2014).
54 Schyns, P. G., Petro, L. S. & Smith, M. L. Dynamics of Visual Information Integration in the Brain for Categorizing Facial Expressions. Curr. Biol. 17, 1580-1585 (2007).
55 Schyns, P. G., Petro, L. S. & Smith, M. L. Transmission of Facial Expressions of Emotion Co-Evolved with Their Efficient Decoding in the Brain: Behavioral and Brain Evidence. PLoS One 4, e5625 (2009).
56 Hall, E. The Hidden Dimension. (Doubleday, 1966).
57 Nichols, T. E. & Holmes, A. P. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1-25 (2002).


Acknowledgements

R.E.J. received support from the European Research Council [FACESYNTAX; 75858], the Economic and Social Research Council [ES/K001973/1 and ES/K00607X/1], the British Academy [SG113332] and the John Robertson Bequest (University of Glasgow); D.S.M. received support from the National Science Foundation [IBSS-L 1620294] and the Institute of Education Sciences [R324A180203]; C.C. received support from the Chinese Scholarship Council [201306270029]; Y.D. received support from the Chinese Scholarship Council [201606070109]; R.A.A.I. received support from the Wellcome Trust [214120/Z/18/Z]; P.G.S. received support from the Wellcome Trust [Senior Investigator Award, UK; 107802] and the Multidisciplinary University Research Initiative/Engineering and Physical Sciences Research Council [USA, UK; 172046-01]. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author contributions

P.G.S. and R.E.J. designed the research; R.E.J. collected the Western participant data in the UK; C.C. (Cheng Chen) collected the East Asian participant data in China; O.G.B.G. and P.G.S. developed the dynamic face movement generator; Y.D. and R.A.A.I. developed the analytical tools of Mutual Information; C.C. (Chaona Chen) analysed and interpreted the data under the supervision of R.E.J., P.G.S. and D.S.M.; C.C. (Chaona Chen) prepared the figures; C.C. (Chaona Chen), P.G.S. and R.E.J. wrote the paper. D.S.M., Y.D., R.A.A.I., C.C. (Cheng Chen), H.Y. and O.G.B.G. provided feedback on manuscript drafts.

Competing interests

The authors declare no competing interests.
