Brain networks subserving functional core processes of emotions identified with componential modeling

Abstract Despite a lack of scientific consensus on the definition of emotions, they are generally considered to involve several modifications in the mind, body, and behavior. Although psychological theories have emphasized the multi-componential characteristics of emotions, little is known about the nature and neural architecture of such components in the brain. We used a multivariate data-driven approach to decompose a wide range of emotions into functional core processes and identify their neural organization. Twenty participants watched 40 emotional clips and rated 119 emotional moments in terms of 32 component features defined by a previously validated componential model. Results show how different emotions emerge from coordinated activity across a set of brain networks coding for component processes associated with valuation appraisal, hedonic experience, novelty, goal-relevance, approach/avoidance tendencies, and social concerns. Our study goes beyond previous research that focused on categorical or dimensional emotions by highlighting how novel methodology combined with theory-driven modeling may provide new foundations for emotion neuroscience and unveil the functional architecture of human affective experiences.


B. Emotional segment selection
Although the initial assessment used to select video clips was based on global judgments, each single emotional event was analysed separately in the final fMRI study, to avoid any confound of multiple events related to different appraisal components within a clip. The selection of emotional events was semi-manual and carried out in a separate experiment, in which five participants watched the video clips and continuously rated their emotional content using the CARMA tool (software for Continuous Affect Rating and Media Annotation) 2 . Ratings were quantified on a scale from 0 to 100 and, for each participant, ratings above the mean value of 32 (rounded) were selected as emotional moments. Events annotated as emotional by a majority of participants (3 or more) were selected as emotional segments. The start and end of each segment were manually adjusted so that segments had the same length across all participants in the final fMRI study. Supplementary Table 2 lists the name of the films for each clip and the number of emotional events selected in that clip.
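The thresholding and majority-vote rule described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the function name, array layout, and default parameters are ours, chosen to match the values reported in the text (threshold of 32, agreement of 3 or more of the 5 raters).

```python
import numpy as np

def select_segments(ratings, threshold=32, min_raters=3):
    """Select emotional moments from continuous CARMA-style ratings.

    ratings: (n_participants, n_timepoints) array of ratings on a 0-100 scale.
    A timepoint counts as emotional for a participant when their rating
    exceeds `threshold`; it becomes part of an emotional segment when at
    least `min_raters` participants agree.
    """
    above = ratings > threshold      # per-participant emotional moments
    votes = above.sum(axis=0)        # number of raters flagging each timepoint
    return votes >= min_raters       # majority-vote mask over timepoints

# e.g. 5 raters, 2 timepoints:
ratings = np.array([[40, 10], [50, 5], [33, 90], [10, 95], [60, 1]])
select_segments(ratings)  # -> array([ True, False])
```

Contiguous runs of `True` in the returned mask would then correspond to candidate segments whose boundaries were adjusted manually, as described above.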
This selection of emotional events resulted in a non-uniform distribution of discrete emotions, as depicted in Figure 3a. This contrasts with the global judgments made on the initial video clip dataset and used to select the final experimental movie list, indicating that such global assessment of video clips (as done in many studies) may not necessarily apply to every single event within a clip, thus highlighting the importance of focusing on short segments to evaluate felt emotions. Please note that this non-uniform distribution might be considered a limitation of the current study, but it does not necessarily affect the main results, since our main goal was to cover a wide range of the componential space, not to elicit or compare specific discrete emotions.
Supplementary Table 2: Selected film clips. This table lists the name of the films for each clip, its duration, the original emotion label as collected in another study, and the number of events selected in each clip.

C. fMRI session
Each fMRI session started with instructions regarding the experiment and the fMRI acquisition protocol. Next, participants filled in the required forms and completed the 16-item Brief Mood Introspection Scale (BMIS) mood questionnaire 3 . When the participant was ready, (s)he entered the scanner and the physiological recording devices (EMG, headphones, and eye-tracker) were set up. After checking all physiological signals and calibrating the eye-tracker, the actual acquisition started. Each video clip was played during a single run, and each run lasted ~164 s, including a short initial preparation time and a final washout clip. There was an interval of ~30 s between consecutive runs. Supplementary Figure 1 illustrates the sequence of events inside the fMRI scanner. On average, the overall time inside the scanner for each session was about 32 minutes, excluding the setup time, calibration, and structural MRI acquisition.

D. Behavioral session
Similar to the fMRI session, at the beginning of each behavioral session participants were briefed on the procedure used to evaluate their felt emotions. The experiment started by asking them to answer the 10-item Big Five Inventory (BFI) personality questionnaire 4 , followed by playing the same video clips as those seen in the fMRI session. Emotional segments were highlighted by a red frame to notify the participant of the video segment/event (s)he had to assess.
Supplementary Table 3: High saliency brain areas. This table lists the brain areas (excluding the brain stem), as defined in the AAL atlas, with high average saliencies (>3 or <-3) corresponding to each of the 6 latent variables. Averaging values across voxels of a given region may artificially increase or decrease the apparent importance of some regions relative to others.

A. Brain saliencies
The partial least squares correlation method estimates a saliency value for each voxel in the brain. To summarise the regions with the highest saliencies, we used the Automated Anatomical Labelling (AAL) atlas 5 and report the results in Supplementary Table 3.
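The region-level summary described above (averaging voxel saliencies within each atlas region and keeping those exceeding ±3) can be sketched as follows. This is an illustrative reimplementation under our own assumptions about the data layout, not the study's code; note that, as the Table 3 caption warns, averaging across a region's voxels can inflate or dilute its apparent importance.

```python
import numpy as np

def summarize_saliencies(saliency, atlas_labels, region_names, thresh=3.0):
    """Average voxel saliencies within each atlas region and keep regions
    whose mean exceeds the threshold in either direction.

    saliency:     1D array of per-voxel saliency values (one latent variable).
    atlas_labels: 1D array of the same length with integer region labels.
    region_names: dict mapping integer label -> region name (e.g. AAL names).
    Returns a dict {region name: mean saliency} for |mean| > thresh.
    """
    out = {}
    for label, name in region_names.items():
        vox = saliency[atlas_labels == label]   # voxels belonging to this region
        if vox.size == 0:
            continue
        m = vox.mean()
        if abs(m) > thresh:                     # keep high positive or negative means
            out[name] = m
    return out
```

In practice one would run this once per latent variable, with `atlas_labels` resampled to the saliency map's voxel grid.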

C. Comparison with classic emotion models
To compare the results obtained with our componential model against traditional accounts of emotions that mostly consider subjective experiential features (e.g. the pleasantness or unpleasantness of particular stimuli or situations), we performed a similar PLSC analysis on the feeling component items, which mainly describe valence and arousal features similar to classic bidimensional models (see Table 1). Specifically, we used all 34 behavioral GRID features but randomly permuted all items except those from the feeling component, to ensure that their information was scrambled. This method guaranteed that results from the original PLSC were comparable with those from the feeling-based analysis, allowed us to examine whether a bidimensional model (valence x arousal) was sufficient to account for our data, and at the same time ruled out that differences between these models could simply be due to their dimensionality. This second control analysis again produced only two significant latent variables: the first mainly loaded on feeling items related to valence (p<0.0001, 11.5%) and the second on feeling items corresponding to arousal (p<0.0001, 7.1%), whereas all other GRID features produced null or non-discriminative loadings (see Supplementary Figure 6). The corresponding brain saliency maps for these two LVs showed virtually identical activation patterns to those found with the first control analysis and illustrated in Supplementary Figure 5.
Importantly, comparing the explained covariance across models indicates that the full componential model (original PLSC with 6 LVs) captures about 49% of the brain-behavior covariance in our data, whereas this drops to about 19% when considering only the feeling items and eliminating the information from the other components, a difference that cannot be accounted for by the dimensionality of the models.
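The control analysis above (permuting the non-feeling items while keeping dimensionality fixed, then comparing explained covariance) can be sketched as follows. This is a simplified illustration under our own assumptions: the PLSC step is reduced to an SVD of the behavior-brain cross-covariance matrix, and all function and variable names are ours.

```python
import numpy as np

def plsc_explained_cov(X, Y):
    """SVD of the brain-behavior cross-covariance; returns the fraction of
    total covariance explained by each latent variable (squared singular
    value over the sum of squared singular values)."""
    Xc = X - X.mean(axis=0)           # center brain data (events x voxels)
    Yc = Y - Y.mean(axis=0)           # center behavior data (events x items)
    R = Yc.T @ Xc                     # cross-covariance matrix (items x voxels)
    s = np.linalg.svd(R, compute_uv=False)
    return s**2 / (s**2).sum()

def scramble_columns(Y, keep_cols, rng):
    """Row-permute every behavioral item except those in `keep_cols`
    (e.g. the feeling items), preserving the model's dimensionality
    while destroying the other items' link to the brain data."""
    Yp = Y.copy()
    for j in range(Y.shape[1]):
        if j not in keep_cols:
            Yp[:, j] = rng.permutation(Yp[:, j])
    return Yp
```

Summing the explained-covariance fractions over the significant LVs of the full model versus the feeling-only control gives the kind of comparison reported above (here, 49% vs 19%).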

D. Discrete emotion and latent variable correlations
To examine the similarity between each latent variable and the discrete emotion profiles, a Pearson correlation analysis was performed. Supplementary Figure 3 depicts the significance of these correlations, representing the relative implication of each LV in discrete emotion categories.
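This analysis amounts to correlating each latent variable's scores with each discrete-emotion rating across the rated events. A minimal sketch, assuming event-by-LV and event-by-emotion matrices (names and layout are ours, not from the study):

```python
import numpy as np
from scipy import stats

def lv_emotion_correlations(lv_scores, emotion_ratings):
    """Pearson correlation between each latent variable and each discrete
    emotion across rated events.

    lv_scores:       (n_events, n_lvs) array of LV scores per event.
    emotion_ratings: (n_events, n_emotions) array of discrete-emotion ratings.
    Returns (r, p), each of shape (n_lvs, n_emotions).
    """
    n_lv, n_em = lv_scores.shape[1], emotion_ratings.shape[1]
    r = np.zeros((n_lv, n_em))
    p = np.zeros((n_lv, n_em))
    for i in range(n_lv):
        for j in range(n_em):
            r[i, j], p[i, j] = stats.pearsonr(lv_scores[:, i],
                                              emotion_ratings[:, j])
    return r, p
```

Thresholding `p` at .05 and .01 would yield the significance markers shown in the correlation figure.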
Almost all discrete emotions showed high positive or negative correlations with the first latent variable LV1 (encoding appraisals of values or valence), except for sadness and surprise, which showed weaker or no significant correlation. The second latent variable LV2 (attributed to novelty) showed a strong positive correlation with surprise but negative correlations with sadness and anger, while LV3 (interpreted as hedonic impact) was positively correlated with joy, satisfaction, love, and calm, but negatively correlated with anxiety and sadness. LV4 (related to goals and intentions) was positively associated with surprise and negatively with anger. Interestingly, the fifth latent variable LV5 was significantly correlated with sadness, followed by a high but non-significant correlation with love, two discrete emotions of opposite valence (but consistent with a dimension of caring for others and social concern). Finally, the last latent variable LV6 (encoding dimensions of curiosity and active approach vs avoidance) showed a significant correlation with ratings of fear, followed by high but non-significant correlations with anxiety and disgust (associated with the negative loadings on this dimension). Taken together, these findings highlight that each of the different LVs identified by our data-driven PLSC analysis contributed to different emotions, but to variable degrees, and that they generally held meaningful relationships with discrete categorical labels. Importantly, however, single LVs cannot be reduced to particular emotion categories or to unique orthogonal dimensions such as valence or arousal.
Figure 7: Correlation analysis. Pearson correlation coefficients between latent variables (LVs) and discrete emotion categories based on individual ratings of movies (** corresponds to p-value<.01 and * corresponds to p-value<.05).