Abstract

Emotional information can be conveyed by verbal and nonverbal cues, with the latter often suggested to exert the greater influence in shaping our perceptions of others. The present functional magnetic resonance imaging study explored attentional biases toward nonverbal signals by investigating the interaction of verbal and nonverbal cues. The results underline previous suggestions of a “nonverbal dominance” in emotion communication by evidencing implicit effects of nonverbal cues on emotion judgments even when attention is directed away from nonverbal signals and focused on verbal cues. Attentional biases toward nonverbal signals appeared to be reflected in increasing activation of the dorsolateral prefrontal cortex (DLPFC), assumed to reflect increasing difficulty in suppressing nonverbal cues during task conditions that required shifting attention away from nonverbal signals. Aside from the DLPFC, the results suggest that the right amygdala plays a role in attention control mechanisms related to the processing of emotional cues. Analyses conducted to determine the cerebral correlates of the individual ability to shift attention between verbal and nonverbal sources of information indicated that higher task-switching abilities seem to be associated with an up-regulation of right amygdala activation during explicit judgments of nonverbal cues, whereas difficulties in task-switching seem to be related to a down-regulation.

Introduction

Emotions play a central role in nearly every facet of human life: Emotions guide human behavior, shape social interaction, and define interpersonal relationships (Parkinson 1996; Van Kleef 2009) and, in a sense, understanding emotions—particularly those of our fellow human beings—may constitute the most challenging and rewarding task we are faced with on a day-to-day basis.

In deciphering the emotional states of communication partners, human beings rely on a variety of different cues exchanged during social interactions. Spoken words and various so-called nonverbal signals such as facial expressions, changes in the tone of voice, or nonverbal vocalizations like laughter and crying provide us with the information needed to understand what others might feel, desire, or intend to do (Adolphs 2002; Meyer et al. 2005; Dietrich et al. 2006, 2007, 2008; Szameitat et al. 2010; Brück, Kreifelts, Wildgruber 2011; Jacob et al. 2012a). While a single word or sentence, or a single facial expression, may already “say it all” and suffice to get the “message across,” everyday social interaction frequently requires considering and weighting several different affective cues observed at the same time. By integrating the various verbal and nonverbal signals received simultaneously, observers may not only exploit redundancies to increase the accuracy of their inferences; slight inconsistencies between verbal and nonverbal messages detected in the process may also add further information and even alter our impression of a sender's state of mind. Signs of anger conveyed both through a facial expression (e.g. glaring eyes, a flushed red face, tightened lips, and clenched brows) and a harsh tone of voice while speaking may enhance the effect of a threatening verbal message such as “Get out of here, or else!” and may convince the recipient of the sender's anger; whereas emotional discrepancies spotted between nonverbal expressions of envy and verbal testimonies of shared joy (e.g. “I'm so happy for you”), for instance, may reveal the speaker to be lying.

Considering the relative impact of the different sources of information, research findings obtained in different behavioral studies often identify nonverbal cues to “trump” verbal signals in shaping our perceptions of others' emotional states (Mehrabian and Ferris 1967; Mehrabian and Wiener 1967; Argyle et al. 1970, 1971; for review see Noller 1985). Presumably, this stronger impact of nonverbal cues on emotional communication is phylogenetically founded: Current theories of language development assume that, in human evolution, nonverbal communication may have preceded verbal communication (McBride 1975; Dew and Jensen 1977), enabling social exchange even before human beings were able to speak and thereby possibly contributing to the survival of our species. Aside from phylogenetic aspects of language development, the suggested primacy of nonverbal communication is also reflected in the ontogenetic development of human communication abilities where, again, nonverbal signals precede the use of verbal cues (i.e. language): As long as infants have not learned to speak, communication necessarily has to take place at nonverbal levels, rendering nonverbal cues such as facial expressions, pointing gestures, or nonverbal vocalizations like crying the primary means of expressing and deciphering various needs (e.g. food, sleep, safety, and nurturing) that require supply or relief (McNeill 1970).

Based on the latter examples suggesting a superior importance of nonverbal signals, one may assume that nonverbal cues take precedence in information processing and bias information detection in the sense that they capture attention when competing with other sources of information. Corroborating this assumption, behavioral observations obtained across a considerable number of studies provide evidence that affective significance or emotional salience guides attention (for review see Vuilleumier 2005; Pourtois et al. 2012), with recent discussions on the matter suggesting that “[e]motional biases are probably [even] stronger with ‘biologically prepared’ stimuli” (Vuilleumier 2005, p. 586) such as faces. However, questions remain as to how this supposed nonverbal dominance in emotion communication (and the attention bias associated therewith) may be reflected in the brain mechanisms underlying the processing of affective signals.

Considering modulating effects of emotional salience on information processing, recent reviews detailing brain mechanisms associated with the rapid selection of affectively significant stimuli (for review see Compton 2003; Vuilleumier 2005; Pourtois et al. 2012) outline the idea that “emotional attention” may rely on a distinct attention system centered around the amygdala that operates in addition to the voluntary attention system mediated by frontoparietal brain structures (Vuilleumier and Huang 2009; Pourtois et al. 2012): Serving as a central hub in a circuit of brain structures, the amygdala is assumed to generate output signals that boost the representation of emotionally salient information by modulating the activation of a broad network of sensory cortical, parietal, and frontal brain regions. Such “boosting effects”, in turn, may be amplified or attenuated by top-down modulations from several frontal brain regions such as the dorsolateral prefrontal cortex (DLPFC), ventromedial prefrontal cortex, or orbitofrontal cortex (OFC)—possibly reflecting cerebral mechanisms related to voluntary control and the allocation of attention resources to task-relevant aspects of the environment (for review see Compton 2003; Vuilleumier and Huang 2009; Pourtois et al. 2012).

Concluding from these models of emotional attention, one may assume attention biases related to a nonverbal dominance in emotion communication to be mirrored in 2 distinct cerebral mechanisms: 1) An increasing activation of several sensory areas along the processing path associated with an enhancement of sensory processing that is mediated through bottom-up inputs from subcortical pathways involving the amygdala, and 2) an increasing recruitment of dorsolateral and medial frontal “voluntary control areas” linked to increasing efforts to suppress the processing of nonverbal cues when nonverbal signals are competing with “weaker”—yet task-relevant—cues that need to be subjected to a more elaborate processing.

Proceeding from these outlined hypotheses, the present study aimed to address propositions of a nonverbal dominance by studying the interplay between nonverbal cues and attention, exemplified in attentional biases toward nonverbal affective signals. Assuming that such biases in attention might be reflected in an inability to ignore or suppress the processing of nonverbal cues, our study approached the issue by examining and comparing brain responses associated with the explicit evaluation of nonverbal cues and the involuntary processing of nonverbal affective information when participants were asked to suppress the analysis of nonverbal cues and instead focus on verbal cues presented alongside these signals.

As far as questions regarding the cerebral substrates of such an involuntary processing of nonverbal cues are concerned, research findings delineating different cerebral networks contributing to the “explicit” or “implicit” processing of nonverbal affective signals (i.e. facial and vocal-prosodic cues) may provide first tentative clues: While the involuntary processing of nonverbal affective cues presented outside the focus of attention has been linked to increased limbic and medial frontal activation (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011), the effortful, cognitively controlled evaluation of nonverbal cues, in contrast, has been associated with decreases in subcortical limbic activation paralleled by increased activation of a broad network of cortical brain structures (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011), including the DLPFC and OFC, the posterior superior temporal cortex (pSTC), and cue-specific brain regions such as face-sensitive aspects of the fusiform gyrus (i.e. fusiform face areas, FFA; Kanwisher et al. 1997) and voice-sensitive areas of the mid-superior temporal cortex (i.e. temporal voice areas, TVA; Belin et al. 2000).

To investigate the cerebral processing of nonverbal affective cues under different processing conditions, we used functional magnetic resonance imaging (fMRI). Healthy volunteers were scanned while performing 2 different emotion judgment tasks based on a set of video stimuli capturing different speakers expressing different emotional states: One task required participants to base their respective judgments on nonverbal indicators displayed by the respective speakers (and ignore verbal content), while the other task asked them to focus on spoken words (and ignore nonverbal messages) in reaching a decision. Investigations into the targeted brain responses were guided by contrasts evaluating stimulus-driven effects (i.e. brain activation elicited by emotional nonverbal cues irrespective of attention focus) as well as modulations of these effects by the 2 different task instructions employed in the study.

Based on previous research, we assumed that the perception of emotional nonverbal signals would rely on a widespread network of brain regions including limbic brain structures, in particular, the amygdala, as well as cortical brain areas, particularly, the anterior rostral medial frontal cortex (arMFC), FFA, TVA, pSTC, OFC, and DLPFC. Research findings published on the functional characteristics of several of these brain regions, moreover, suggest that the amygdala, FFA, and TVA may play a role in the stimulus-driven processing of affective nonverbal information occurring regardless of task instructions, while the right pSTC may, in contrast, contribute to a task-related analysis of nonverbal information when nonverbal affective cues are in the focus of attention (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011). Finally, building on current research conducted on both the implicit processing of affective cues (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011) and recent models of emotional attention (for review see Compton 2003; Vuilleumier 2005; Pourtois et al. 2012), we assumed that “control regions” located in the DLPFC as well as regions within the arMFC or the limbic system (particularly the amygdala), may play a role during the involuntary processing of affective nonverbal cues when nonverbal signals are not in the focus of attention.

Another concern of the study was to explore how individual differences in the ability to shift attention between verbal and nonverbal sources of emotional information—in other words, the ability to modulate attentional biases toward nonverbal information—are related to brain activation associated with the processing of affective cues. To this end, we conducted correlation analyses aimed at unraveling relationships between brain responses and behavioral measures reflecting an individual's ability to shift her/his attention focus between verbal and nonverbal cues. Based on evidence provided in current reviews suggesting a predominant role of the amygdala in emotional attention (for review see Vuilleumier and Huang 2009; Pourtois et al. 2012), our analysis focused on modulations of amygdala activation as a potential cerebral correlate of the ability to shift attention among competing sources of emotional information.

Materials and Methods

Participants

Twenty-four healthy, right-handed (Edinburgh Handedness Inventory; Oldfield 1971) individuals volunteered to participate in the study [12 females; mean age = 26.42 years, standard deviation (SD) = ±2.57 years]. None of the participants reported any current or past substance abuse, neurological or psychiatric illness, hearing difficulties, or uncorrected vision impairments. Moreover, none of the participants reported taking any medication.

Ethics Statement

The study was performed in accordance with the ethical principles expressed in the Code of Ethics of the World Medical Association (Declaration of Helsinki), and the paradigms and protocol employed in this study were reviewed and approved by the local ethics committee. All participants gave their written informed consent prior to inclusion in the study and received a small financial compensation for their participation.

Stimulus Material, Tasks, and Procedure

The stimulus material employed in this study comprised a set of 120 short video films (mean duration = 1459 ms, SD = ±317 ms) capturing professional actors' portrayals of different emotional states expressed through both verbal statements and nonverbal behavior. Regarding the verbal cues, 10 actors (5 females) were instructed to speak 6 short German sentences conveying either a neutral, positive, or negative affective meaning [e.g. “Ich fühle mich gut” (“I feel good”)]. These 6 sentences were chosen from a pool of written sentences pretested with regard to their valence and frequency of use in daily life. The selection of the final 6 sentences was based on 2 selection criteria: 1) Valence ratings had to enable the arrangement of 5 distinct verbal valence categories (highly negative, negative, neutral, positive, and highly positive; Table 1) and 2) frequency-of-use ratings had to be >5 (1 = “never” to 9 = “very often”).

Table 1

Verbal valence values

Valence category Verbal valence valuesa 
Highly negative 
 Ich fühle mich erbärmlich (I feel awful) 1.5 (0.7) 
Negative 
 Ich fühle mich unwohl (I feel uncomfortable) 3.0 (1.0) 
Neutralb 
 Ich bin ruhig (I am calm) 5.4 (1.2) 
 Ich bin etwas aufgeregt (I am a bit excited)  
Positive 
 Ich fühle mich gut (I feel good) 7.3 (1.1) 
Highly positive 
 Ich fühle mich großartig (I feel great) 8.2 (0.8) 

Note. All the data included in this table are mean values with SDs in parentheses. The applied valence scale ranged from 1 to 9 (1 = “highly negative”; 5 = “neutral”; 9 = “highly positive”).

aRatings were derived from the prestudy in which participants had to assess the valence of written sentences.

bSince the nonverbal neutral category had no graded intensities, the averaged valence ratings of the 2 neutral sentences are reported.

Table 2

Nonverbal valence values

Valence category Nonverbal valence valuesa 
Highly negative 
 High-intensity anger 2.1 (0.5) 
Negative 
 Anger 3.3 (0.3) 
Neutral 
 No emotion 4.3 (0.4) 
Positive 
 Happiness 6.3 (0.4) 
Highly positive 
 High-intensity happiness 7.4 (0.5) 

Note. All the data included in this table are mean values with SDs in parentheses. The applied valence scale ranged from 1 to 9 (1 = “highly negative”; 5 = “neutral”; 9 = “highly positive”).

aRatings were derived from the prestudy in which participants had to assess the valence of nonverbal signals. To minimize the influence of verbal information, only nonverbal valence ratings of stimuli where actors spoke neutral sentences were included into the calculation.

As far as nonverbal indicators are concerned, the actors were encouraged to express, while speaking, either a neutral, positive (i.e. happy), or negative (i.e. angry) state of mind by adapting their facial expression and tone of voice (i.e. speech prosody) to match the emotional state they were instructed to perform—regardless of whether the respective nonverbal behavior matched the verbal statements or not. Furthermore, instructions and materials provided to the actors aimed at producing nonverbal displays of lower and higher emotional intensity, resulting in 5 distinct nonverbal valence categories (highly negative, negative, neutral, positive, and highly positive). Following the production process, the gathered video footage was postprocessed to ensure comparable video (Adobe Premiere Pro CS3, Adobe Systems, Inc., San Jose, CA, USA) and sound (Cool Edit Pro 2.1, Syntrillium Software Corp., Phoenix, AZ, USA; PRAAT 5.1.07; Boersma 2001) quality across the different recordings. The 120 video films were chosen from a pool of videos pretested with regard to their valence and the authenticity of the actors' portrayals. The selection of the final 120 video films was based on 2 selection criteria: 1) Valence ratings had to enable the arrangement of 5 distinct nonverbal valence categories (highly negative, negative, neutral, positive, and highly positive; Table 2) comparable with the 5 verbal valence categories (Table 1) and 2) authenticity ratings had to be >4 (1 = “low authenticity” to 9 = “high authenticity”). Of the 120 video films employed in this experiment, 48 provided matching emotional information at verbal and nonverbal levels, while for the remaining 72 video films, verbal and nonverbal cues differed with respect to the expressed emotions. For further details on stimulus production, editing, and selection, please refer to Jacob et al. (2012b).

In preparation for the fMRI experiment, the selected 120 video films were divided equally into 2 blocks, with video sequences within each block balanced with respect to verbal and nonverbal valence as well as actors' identity and gender. Each block presentation was accompanied by 1 of 2 different task instructions differing in how subjects had to evaluate the presented stimuli. One block was presented with instructions to assess the valence of emotions expressed at the verbal level while disregarding nonverbal expressions (i.e. verbal valence task), whereas instructions provided for the other block required participants to assess the valence of the emotions expressed through nonverbal indicators while ignoring verbal information (i.e. nonverbal valence task). For both tasks, answers were given on a 4-point scale, allowing participants to choose one of the following response alternatives: −− = highly negative, − = negative, + = positive, ++ = highly positive. To avoid effects attributable to the arrangement of response alternatives, the response scale was flipped horizontally for half of the participants. Each participant was instructed to indicate her/his judgment as quickly as possible by pressing 1 of 4 buttons on a fiber optic response system (LUMItouch, Photon Control, Inc., Burnaby, BC, Canada) placed in her/his right hand. Answers were required within a time frame of 5 s following stimulus onset. Stimulus order within each block was randomized, and task order and block order were balanced among individuals. Half of the participants started with instructions to assess verbal valence, while the other half started with instructions to assess nonverbal cues. Of the 12 participants starting with judgments of verbal valence, 6 performed the respective judgments based on the 60 stimuli included in stimulation block 1, while the remaining 6 participants based their judgments of verbal valence on the stimulus material assigned to block 2. Similarly, of the 12 participants starting with judgments of nonverbal cues, 6 performed the respective task based on stimuli included in block 1 and 6 based on stimuli included in block 2.

Stimulus presentation was controlled using the software “Presentation” (Neurobehavioral Systems, Inc., Albany, CA, USA). Video films employed in this experiment were projected onto a translucent screen placed behind the participant's head. The participants viewed the stimuli via a mirror system mounted onto the head coil. Sound was presented binaurally through magnetic resonance-compatible headphones (MR confon GmbH, Magdeburg, Germany). Stimulus onset was jittered relative to the scan onset in steps of 500 ms [1/4 of the repetition time (TR) = 2000 ms]. Ten null events per block were inserted randomly between stimulus presentations. Each task was measured in a separate run, and participants were reminded of the respective task instructions immediately before starting each run.

Subsequent to the main experiment, 2 functional localizer experiments were performed to identify brain regions linked to the perception of human voices and faces, considered regions of interest (ROIs) in this study. The functional localizer experiment employed to determine voice-sensitive brain structures relied on a passive listening paradigm, asking participants to listen to randomized blocks of human vocal sounds (speech, sighs, laughs, and cries) or animal or environmental sounds. Voice-sensitive brain areas were defined by contrasting brain responses to vocal sounds with brain activation associated with listening to animal or environmental sounds [voices > (animal, environment); for further details see Belin et al. 2000]. The localization experiment employed to identify face-sensitive brain areas, on the other hand, used a 1-back matching task performed on pictures from 4 different categories: Faces, houses, objects, and natural scenes. To focus attention on the different picture stimuli, participants were instructed to view presented picture sequences carefully and to press a button with their right index finger if the picture presented to them matched the one they had seen immediately before. Face-sensitive brain structures were determined by contrasting brain responses to faces with activation to houses, objects, and natural scenes [faces > (houses, objects, scenes); for further details see Kanwisher et al. 1997; Epstein et al. 1999].

Analysis of Behavioral Data

Valence ratings of the participants were used as outcome variables. The valence ratings were recoded from symbols to numeric values (−− = 1, − = 2, + = 3, ++ = 4) and analyzed using the software package IBM SPSS Statistics Version 19 (IBM Corp., Armonk, NY, USA).

Effects of verbal and nonverbal information on valence ratings were evaluated by means of a repeated-measures analysis of variance (ANOVA) with nonverbal information (neutral, positive, and negative), verbal information (neutral, positive, and negative), and task (verbal valence task and nonverbal valence task) defined as within-subject factors. To account for violations of sphericity, all results were Greenhouse–Geisser corrected (Geisser and Greenhouse 1958).
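
For illustration, the sketch below shows how one effect of this kind (here, the main effect of nonverbal valence) could be tested with a Greenhouse–Geisser corrected repeated-measures ANOVA in Python using the pingouin package. File and column names are assumed for the example; the full three-way model reported here was computed in SPSS.

```python
# A minimal sketch (not the authors' SPSS pipeline): one-way repeated-measures
# ANOVA over the nonverbal-valence factor with Greenhouse-Geisser correction.
import pandas as pd
import pingouin as pg

# assumed long-format data: one row per trial with columns
# subject, task, verbal, nonverbal, rating (symbols as pressed)
ratings = pd.read_csv("ratings_long.csv")
ratings["rating"] = ratings["rating"].map({"--": 1, "-": 2, "+": 3, "++": 4})

# average within each subject x nonverbal-valence cell, then test the main effect
cells = ratings.groupby(["subject", "nonverbal"], as_index=False)["rating"].mean()
aov = pg.rm_anova(data=cells, dv="rating", within="nonverbal",
                  subject="subject", correction=True)
print(aov)  # includes F, uncorrected p, and the GG-corrected p-value
```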

To check for relations between the valence ratings obtained in the present study and those obtained in a previous study by our group (Jacob et al. 2012b), correlation analyses were conducted. To this end, ratings of verbal and nonverbal valence obtained for each stimulus in this study were correlated with the valence ratings derived from our previous study in which task instructions did not ask to focus on verbal or nonverbal signals, but rather to assess the overall emotional state of a speaker. Steiger's method was used to compare the respective correlation coefficients (Steiger 1980).
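
For reference, a generic implementation of Steiger's (1980) z-test for two dependent correlations sharing one variable (here, both correlations involve the previous study's ratings) might look as follows. This is a textbook formula, not the authors' code, and the inter-rating correlation r23 in the usage note is a placeholder for a value not reported here.

```python
# Steiger's (1980) z for comparing dependent correlations r12 and r13 that
# share variable 1, given the correlation r23 between variables 2 and 3.
import numpy as np

def steiger_z(r12, r13, r23, n):
    z12, z13 = np.arctanh(r12), np.arctanh(r13)   # Fisher z-transform
    rbar = (r12 + r13) / 2.0
    # asymptotic covariance of z12 and z13 (pooled-r variant of Steiger 1980)
    psi = r23 * (1 - 2 * rbar**2) - 0.5 * rbar**2 * (1 - 2 * rbar**2 - r23**2)
    s = psi / (1 - rbar**2) ** 2
    return (z12 - z13) * np.sqrt(n - 3) / np.sqrt(2 - 2 * s)

# e.g. steiger_z(0.35, 0.94, r23, n=120), with r23 the (unreported) correlation
# between the verbal-task and nonverbal-task ratings.
```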

Aiming to analyze individual differences in the ability to shift attention between verbal and nonverbal sources of information, we used the gathered behavioral data to derive a composite measure indicative of a participant's task-switching performance. To this end, we calculated a nonverbal dominance index (INDI; Jacob et al. 2012b) for the verbal valence task (INDIV) as well as for the nonverbal valence task (INDINV) and subtracted the derived values (i.e. INDIDIFF = INDINV − INDIV) to gauge how well participants were able to modulate or control their tendency to regard nonverbal cues during task performance.

The nonverbal dominance index used in the process represents a measure that allows quantifying the relative impact of nonverbal information on the obtained judgments of emotional valence. To calculate the INDI, individual multiple linear regressions were performed with the participants' valence ratings of each stimulus as dependent variables and reference values for a stimulus's verbal valence (Table 1) and nonverbal valence (Table 2) as independent variables. In this manner, separate beta coefficients estimating the impact of verbal (betaverbal) and nonverbal information (betanonverbal) were gained. Based on these 2 separate beta coefficients, the INDI was then calculated as follows: 

$$\mathrm{INDI} = \frac{\left[\left(\beta_{\mathrm{nonverbal}} - \beta_{\mathrm{verbal}}\right)/\left(\beta_{\mathrm{nonverbal}} + \beta_{\mathrm{verbal}}\right)\right] + 1}{2}$$

Negative beta coefficients were set to zero since negative values indicate no influence of the respective parameter. Derived INDI scores may vary between 0% and 100%, with 0% indicating that the given valence judgments were driven solely by verbal information and 100% indicating a complete reliance on nonverbal cues. With respect to the calculated INDI differences (INDIDIFF), low differences are assumed to reflect difficulties in switching between the two sources of emotional information, whereas high differences imply that participants were able to shift their attention between verbal and nonverbal cues when instructed to do so.
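
A minimal sketch of this computation for a single participant and task is given below; array names are illustrative, and the reference values would come from Tables 1 and 2. Note that the formula returns a proportion, which the paper reports as a percentage.

```python
import numpy as np

def indi(ratings, verbal_ref, nonverbal_ref):
    """Nonverbal dominance index for one participant and one task (0-1 scale)."""
    # multiple linear regression: rating ~ intercept + verbal + nonverbal valence
    X = np.column_stack([np.ones(len(ratings)), verbal_ref, nonverbal_ref])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    # negative betas indicate no influence of the respective cue -> set to zero
    b_v, b_nv = max(coef[1], 0.0), max(coef[2], 0.0)
    return ((b_nv - b_v) / (b_nv + b_v) + 1) / 2  # assumes at least one positive beta

# INDIdiff for a participant would then be computed as, e.g.:
# indi(r_nonverbal_task, v_ref, nv_ref) - indi(r_verbal_task, v_ref, nv_ref)
```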

Image Acquisition

Functional images were acquired on a Siemens MAGNETOM Verio 3-T whole-body MRI scanner (Siemens AG, Erlangen, Germany) using an echo-planar imaging (EPI) sequence [TR = 2000 ms, echo time (TE) = 30 ms, flip angle = 90°, field of view = 192 mm, 64 × 64 matrix, 34 slices, 3 mm thickness, 1 mm gap, oriented along the anterior commissure-posterior commissure plane]. Moreover, as an anatomical reference, high-resolution T1-weighted images were obtained using a magnetization-prepared rapid acquisition gradient echo sequence (TR = 1900 ms, TE = 2.52 ms, 176 slices, 1 mm thickness). For each scanning session, field homogeneity was optimized by a shimming sequence, and a gradient echo sequence (34 phase and magnitude images, TR = 488 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, α = 60°) was acquired to calculate a field map used to correct geometric distortions in the echo-planar images during image analysis.

Image Analysis

Functional images were analyzed using the software package SPM8 (Wellcome Department of Imaging Neuroscience, London, UK; http://www.fil.ion.ucl.ac.uk/spm). The first 5 EPI images within each run were discarded from further analysis to exclude measurements preceding T1 equilibrium. The preprocessing of the images included realignment to correct for motion artifacts, unwarping using a static field map, coregistration with the obtained anatomical data, normalization into the Montreal Neurological Institute (MNI) space, and smoothing with a Gaussian kernel of 8-mm full-width at half maximum.

Statistical analyses were based on a general linear model with a separate regressor defined for each event using a stick function convolved with the hemodynamic response function. All events were time-locked to the onset of each stimulus. Low-frequency components were reduced by applying a high-pass filter with a cut-off frequency of 1/128 Hz. Moreover, to account for serial autocorrelations within the data, the error term was modeled as an autoregressive process (Friston et al. 2002).
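
As a conceptual illustration of this model, the sketch below builds a single event regressor in the way described above: a stick function at stimulus onset convolved with a canonical double-gamma hemodynamic response function. The onsets, run length, and HRF parameterization are assumed for the example; the actual analysis used SPM8's implementation.

```python
# Minimal sketch of one design-matrix column: delta ("stick") functions at
# event onsets convolved with a canonical double-gamma HRF, sampled at TR = 2 s.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 300                          # assumed run length
t = np.arange(0, 32, TR)                        # HRF support (seconds)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # common double-gamma approximation
hrf /= hrf.sum()

onsets = np.array([10.0, 24.0, 38.5])           # illustrative stimulus onsets (s)
sticks = np.zeros(n_scans)
sticks[np.round(onsets / TR).astype(int)] = 1.0  # time-locked to stimulus onset
regressor = np.convolve(sticks, hrf)[:n_scans]   # one column of the design matrix
# (SPM additionally applies the 1/128 Hz high-pass filter and the AR error model.)
```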

As outlined in the Introduction, the image analysis focused on exploring brain activation elicited by emotional nonverbal cues irrespective of task instructions (i.e. stimulus-driven effects) as well as the modulation of these stimulus-driven effects by the different task instructions (for the sake of completeness, the same analyses were performed for the verbal stimuli; see Supplementary data). To this end, we based our analysis on 3 different t-contrasts: To evaluate stimulus-driven effects, brain responses to nonverbal expressions of emotions were contrasted with activation patterns elicited by neutral nonverbal signals (NVEMO > NVNEU), irrespective of the verbal information presented alongside these cues. Aside from these stimulus-driven effects, task-related differences in brain activation were evaluated by contrasting brain responses associated with the nonverbal valence task with brain activation induced by the verbal valence task (TASKNV > TASKV). Furthermore, to explore a modulation of stimulus-driven effects by the different task instructions, interaction effects between task and stimulus-driven brain activation were examined using the following t-contrast: [TASKNV(NVEMO − NVNEU) − TASKV(NVEMO − NVNEU)]. Finally, to evaluate relationships between brain responses and individual differences in the ability to modulate attentional biases toward nonverbal signals, INDIDIFF values obtained for each participant were correlated with task-related brain responses (TASKNV > TASKV).
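
To make the logic of these contrasts concrete, the sketch below spells them out as weight vectors over 4 condition regressors, using an assumed regressor ordering with the per-task conditions collapsed into emotional versus neutral; it illustrates the contrast logic, not the actual SPM design matrix.

```python
# Illustrative t-contrast weights; regressor ordering is assumed to be
# [TASKv/NVneu, TASKv/NVemo, TASKnv/NVneu, TASKnv/NVemo].
import numpy as np

c_stimulus    = np.array([-1,  1, -1,  1])  # NVemo > NVneu, irrespective of task
c_task        = np.array([-1, -1,  1,  1])  # TASKnv > TASKv
c_interaction = np.array([ 1, -1, -1,  1])  # TASKnv(NVemo-NVneu) - TASKv(NVemo-NVneu)
```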

The statistical evaluation of group data relied on a second-level random effects analysis and inferences included both whole-brain and ROI-based approaches. Building on previous findings suggesting a role of the amygdala as well as the FFA and TVA in the stimulus-driven processing of nonverbal affective cues, we considered these brain areas our ROIs. Masks used to localize the left and right amygdala were defined using the SPM toolbox WFU Pickatlas (Maldjian et al. 2003). Masks of the left and right FFA and TVA were generated based on the results of 2 separate functional localizer experiments (see above description, activation patterns thresholded at Puncorrected < 0.001). Results of the ROI analyses are reported using a height threshold of Puncorrected < 0.01 and an extent threshold of k ≥ 15 voxels. Statistical significance of activations within the respective ROIs was assessed based on random field theory (Friston et al. 1993) using small volume corrections (SVCs; Worsley et al. 1996) corresponding to P < 0.05, corrected for multiple comparisons within each ROI. Results of the whole-brain analyses are reported at a height threshold of Puncorrected < 0.001, and significance of the respective results was assessed at the cluster level using a significance criterion of P < 0.05 corrected for multiple comparisons across the whole brain.

Results

Behavioral Data

The behavioral results are depicted in Figures 1 and 2.

Figure 1.

(A) Effect of nonverbal information on valence ratings. (B) Effect of verbal information on valence ratings. Dark gray bars represent the mean valence ratings obtained during the verbal valence task. Light gray bars represent the mean valence ratings obtained during the nonverbal valence task. Error bars represent the standard errors of the means (N = 24). Results of post hoc comparisons computed to analyze the effects of unattended cues are marked in the figure (**P < 0.01, n.s. = not significant).

Figure 2.

(A) Mean values of INDIV, INDINV, and INDIDIFF. Error bars represent the standard errors of the means (N = 24). (B) Individual values of INDIV and INDINV displayed for all 24 participants.

Valence Rating

The repeated-measures ANOVA indicated significant main effects of nonverbal valence (F1.24, 28.55 = 191.43, P < 0.001), verbal valence (F1.34, 30.71 = 94.37, P < 0.001), and task (F1.00, 23.00 = 16.39, P < 0.001) as well as significant interactions between task and nonverbal valence (F1.20, 27.51 = 63.39, P < 0.001) and between task and verbal valence (F1.39, 32.02 = 92.70, P < 0.001).

To further detail the task by nonverbal valence interaction, differences between ratings of stimuli with positive and negative nonverbal valence obtained during the verbal and the nonverbal valence task were calculated. A paired-samples t-test revealed that the rating differences obtained during the verbal valence task were significantly smaller (M = 0.31, SD = ±0.44) than during the nonverbal valence task (M = 1.65, SD = ±0.60; T(23) = −8.18, P < 0.001).

To further explore the task by verbal valence interaction, differences between ratings of stimuli with positive and negative verbal valence obtained during the verbal and the nonverbal valence task were calculated. A paired-samples t-test revealed that the rating differences obtained during the verbal valence task were significantly larger (M = 1.71, SD = ±0.70) than during the nonverbal valence task (M = 0.11, SD = ±0.38; T(23) = 10.55, P < 0.001).

Additional post hoc analyses were conducted to evaluate the effect of the unattended cues. As far as unattended nonverbal cues are concerned, significant differences emerged during the verbal valence task for comparisons between the nonverbal positive and nonverbal neutral categories (T(23) ≥ 3.54, P < 0.01) as well as between the nonverbal positive and nonverbal negative categories (T(23) ≥ 3.43, P < 0.01; see Fig. 1A). However, no difference was found between the nonverbal neutral and nonverbal negative categories (T(23) ≤ 1.71, P > 0.05; see Fig. 1A). With regard to the effect of unattended verbal cues, no rating differences emerged among the verbal valence categories during the nonverbal valence task [all |T(23)| ≤ 0.68, P > 0.05; see Fig. 1B].

To check for correlations between the valence ratings obtained in this study and ratings obtained in a previous study by our group (Jacob et al. 2012b), correlation analyses were conducted. Significant positive correlations emerged between the valence ratings obtained in our previous study and those obtained during the verbal valence task (rs = 0.35, P < 0.001, 2-tailed) and the nonverbal valence task (rs = 0.94, P < 0.001, 2-tailed). Moreover, a Steiger's z-test performed on the correlation data revealed that the correlation with the valence ratings obtained during the verbal valence task was significantly smaller than that obtained for the nonverbal valence task (z = −10.78, P < 0.001).

Individual Nonverbal Dominance Index

INDIV values averaged 21.37% (SD = ±28.46%), whereas INDINV values averaged 93.23% (SD = ±14.58%). A paired-samples t-test computed on the obtained INDI values revealed a significant task difference (T(23) = 11.11, P < 0.001). INDIDIFF values averaged 71.86% (SD = ±31.69%). The distribution of INDIV, INDINV, and INDIDIFF is depicted in Figure 2.

fMRI Analysis

An overview of significant results is given in Tables 3 and 4.

Table 3

Clusters of activation within the whole-brain and ROI analyses

Contrast Analysis Anatomical definition Cluster size Ta Pcorrected Peak MNI coordinates (x y z)
NVEMO > NVNEU WB Posterior parts of the right temporal voice area 106 5.50 0.015 45 −36
 ROI Left amygdala 32 3.55 0.033 −27 −6 −18
 ROI Right amygdala 27 4.11 0.042 24 −9 −15
 ROI Right fusiform face area 19 3.45 0.033 39 −51 −24
 ROI Left temporal voice area 202 4.88 0.011 −51 −9
 ROI Right temporal voice area 457 5.50 0.000 45 −36
TASK × NV WB Right superior frontal cortex/right middle frontal cortex 167 5.79 0.001 24 42 33
 WB Left middle frontal cortex 101 4.95 0.008 −45 30 39
 WB Right superior occipital cortex/right precuneus 96 4.12 0.010 24 −63 42

Note. The anatomical definition was carried out using the SPM toolbox Automated Anatomical Labeling (AAL; Tzourio-Mazoyer et al. 2002).

adf = 23.

WB = whole-brain and ROI = region of interest.

Table 4

Cerebral responses correlated with the INDIDIFF

Contrast Analysis Anatomical definition Cluster size Ta Pcorrected rsb Peak MNI coordinates (x y z)
TASKNV > TASKV ROI Right amygdala 26 4.02 0.044 0.54 30 −27

adf = 23.

bSpearman's rho correlation coefficient between INDIDIFF and the mean parameter estimates of the respective cluster.

ROI = region of interest.

Effects of Nonverbal Emotional Cues (NVEMO > NVNEU)

As far as stimulus-driven effects are concerned, ROI analyses revealed significant activations of the bilateral amygdalae, right FFA, and bilateral TVA during the processing of nonverbal expressions of emotions when compared with neutral nonverbal signals (Pcorrected < 0.05; see Fig. 3A–C), regardless of task instructions.

Figure 3.

Enhanced activation to nonverbal emotional stimuli when compared with neutral stimuli (NVEMO > NVNEU) obtained within (A) the bilateral amygdalae, (B) the right FFA, (C) the bilateral TVA, and (D) posterior parts of the right TVA. ROI activations (upper panel) are thresholded at Puncorrected < 0.01. All displayed clusters were statistically significant at the chosen statistical threshold of P < 0.05, corrected for multiple comparisons within each ROI. Black contours outline the respective ROI.

To check for intensity effects, beta estimates were extracted from activated voxels within each of the 5 clusters reported above. The extracted values were averaged across all voxels, sorted with regard to their intensity (low and high), and subjected to separate (i.e. one for each cluster) paired-samples t-tests. Post hoc comparisons evidenced that, compared with low-intensity expressions, high-intensity nonverbal expressions elicited stronger cerebral responses within all predefined ROIs (all P < 0.01).

To check for valence effects, beta estimates were extracted from activated voxels within each of the 5 clusters reported above. The extracted values were averaged across all voxels, sorted with regard to their valence (neutral, positive, and negative), and subjected to separate (i.e. one for each cluster) paired-samples t-tests. Post hoc comparisons evidenced that, compared with neutral nonverbal expressions, positive as well as negative nonverbal expressions elicited stronger cerebral responses within all predefined ROIs (all P < 0.05). No significant differences emerged between positive and negative stimuli (all P > 0.05).

At the whole-brain level, contrasts computed to elucidate stimulus-driven effects indicated enhanced responses within the posterior parts of the right TVA (Pcorrected < 0.05; see Fig. 3D).

To check for intensity effects, beta estimates were extracted from activated voxels within the posterior parts of the right TVA. The extracted values were averaged across all voxels, sorted with regard to their intensity (low and high), and subjected to a paired-samples t-test. Post hoc comparisons revealed stronger activation to high-intensity nonverbal cues when compared with low-intensity nonverbal signals within the posterior parts of the right TVA (Mhigh = 6.58, SDhigh = ±1.96; Mlow = 5.73, SDlow = ±1.78; T(23) = −5.31, P < 0.001).

To check for valence effects, beta estimates were extracted from activated voxels within the posterior parts of the right TVA. The extracted values were averaged across all voxels, sorted with regard to their valence (neutral, positive, and negative), and subjected to paired-samples t-tests. Post hoc comparisons once more indicated that, compared with neutral nonverbal expressions (M = 5.55, SD = ±1.59), both positive (M = 6.19, SD = ±1.91; T(23) = 5.35, P < 0.001) and negative nonverbal expressions (M = 6.12, SD = ±1.79; T(23) = 4.62, P < 0.001) elicited stronger cerebral responses within the posterior parts of the right TVA. However, no significant differences emerged between the processing of positive and negative cues (T(23) = 0.61, P = 0.55).

Effects of Task (TASKNV > TASKV)

Contrasts computed to explore task-related effects revealed no significant involvement of the predefined ROIs. Nor was any significant activation found at the whole-brain level.

Interaction of Task and Nonverbal Emotional Cues (TASK × NV)

Contrasts computed to explore interactions between task and nonverbal emotional cues did not reveal a significant involvement of the predefined ROIs.

At the whole-brain level, however, a significant interaction between task and nonverbal emotional cues was revealed for 3 different activation clusters located within the right superior frontal cortex extending into the right middle frontal cortex, the left middle frontal cortex, and the right superior occipital cortex extending into the right precuneus (all Pcorrected < 0.05; see Fig. 4A–C).

Figure 4.

Renderings and bar diagrams detailing interactions between task and stimulus-driven activation (TASK × NV) in (A) the right superior frontal cortex extending into the right middle frontal cortex, (B) the left middle frontal cortex, and (C) the right superior occipital cortex extending into the right precuneus. Bars represent the averaged contrast estimates (in arbitrary units) for the neutral and the 2 emotional nonverbal information conditions in the verbal valence (blue) and nonverbal valence task (yellow). Error bars represent the standard errors of the means (N = 24). Significant differences are marked with asterisks (*P < 0.05, **P < 0.01).

To further explore the observed interaction effects, beta estimates were extracted from activated voxels within each of the 3 clusters reported above, averaged across all voxels, and subjected to post hoc analyses. Using the extracted mean parameter estimates, separate repeated-measures ANOVAs (one for each cluster and task) with nonverbal cues (neutral, positive, and negative) as within-subject factor were calculated. The conducted analyses revealed similar patterns of results for all 3 analyzed clusters. While nonverbal expressions did not affect brain activity during the nonverbal valence task, results revealed a main effect of nonverbal cues during the verbal valence task (right superior frontal cortex: F1.85, 42.64 = 5.72, P < 0.01; left middle frontal cortex: F1.80, 41.40 = 7.17, P < 0.01; and right superior occipital cortex: F1.91, 43.88 = 5.90, P < 0.01; see Fig. 4A–C), indicating that nonverbal expressions of emotions (compared with neutral nonverbal signals) encountered during task performance enhanced activity within these regions.

Neural Correlates of Attention-Switching Ability

The ROI analyses revealed a significant correlation between the behavioral measure of task-switching ability (INDIDIFF) and task-related brain responses (TASKNV > TASKV) in the right amygdala (rs = 0.54, P < 0.01, 2-tailed; see Fig. 5A).

Figure 5.

(A) Correlation of INDIDIFF and right amygdala activity elicited during the nonverbal when compared with the verbal valence task (TASKNV > TASKV). ROI activation is thresholded at Puncorrected < 0.01. The displayed cluster was statistically significant at the chosen statistical threshold of P < 0.05, corrected for multiple comparisons within the ROI. Black contour outlines the respective ROI. (B) Bars represent the averaged contrast estimates (in arbitrary units) in the right amygdala during task performance when compared with rest. The results are depicted separately for both the low INDIDIFF group and high INDIDIFF group in the verbal valence (blue) and nonverbal valence task (yellow), respectively. Error bars represent the standard errors of the means (n = 12). Significant differences are marked with asterisks (*P < 0.05).

To further explore this finding, participants were separated into 2 groups, a low INDIDIFF group (M = 47.42%; SD = ±28.12%) and a high INDIDIFF group (M = 96.30%; SD = ±2.22%), based on a median split performed on INDIDIFF values (Mdn = 92.15%). Subsequently, a repeated-measures ANOVA with nonverbal cues (neutral, positive, and negative) and task (verbal valence task and nonverbal valence task) as within-subject factors and group (low INDIDIFF and high INDIDIFF) as between-subject factor was computed. This calculation was based on parameter estimates extracted from voxels within the right amygdala that showed a significant positive correlation with INDIDIFF. Results indicated a significant main effect of nonverbal cues (F1.42, 31.24 = 5.54, P < 0.05) and a significant interaction between task and group (F1.00, 22.00 = 6.90, P < 0.05; see Fig. 5B). A post hoc analysis conducted to further elucidate the obtained task by group interaction revealed that, compared with the low INDIDIFF group (M = 0.90, SD = ±0.64), the high INDIDIFF group showed increased right amygdala responses during the explicit evaluation of nonverbal cues (M = 1.70, SD = ±0.93; T(22) = −2.45, P < 0.05; see Fig. 5B).
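
The sketch below illustrates the group part of this analysis (median split plus a task by group mixed ANOVA on the extracted parameter estimates) using pingouin, which handles one within- and one between-subject factor; the full model also crossed nonverbal valence, and all file and column names are assumed for illustration.

```python
# A sketch of the median-split group analysis, reduced to the task x group
# part of the model; not the authors' actual pipeline.
import pandas as pd
import pingouin as pg

amy = pd.read_csv("amygdala_betas_long.csv")  # assumed columns: subject, task, indi_diff, beta

# median split on INDIdiff to form low/high groups
per_subj = amy.groupby("subject")["indi_diff"].first()
high = per_subj > per_subj.median()
amy["group"] = amy["subject"].map(high).map({True: "high", False: "low"})

# betas averaged per subject x task, then 2 (task) x 2 (group) mixed ANOVA
cells = amy.groupby(["subject", "task", "group"], as_index=False)["beta"].mean()
aov = pg.mixed_anova(data=cells, dv="beta", within="task",
                     subject="subject", between="group")
print(aov)
```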

Whole-brain analyses revealed no further significant associations between brain activation and INDIDIFF.

Discussion

Aiming to further explore the idea of a nonverbal dominance in the processing of affective cues, the goal of our study was to examine cerebral mechanisms underlying the stimulus-driven processing of nonverbal affective signals and its modulation by different task instructions manipulating the degree of voluntary attention devoted to nonverbal information.

Effects of Verbal and Nonverbal Cues on Valence Ratings

Pioneered in a series of behavioral experiments suggesting that, when competing with other (e.g. verbal) sources of information, nonverbal cues take precedence in shaping our perception of others, the issue of nonverbal dominance has become a well-regarded theme of research, with numerous studies corroborating the initial claims (Mehrabian and Ferris 1967; Mehrabian and Wiener 1967; Argyle et al. 1970, 1971; for review see Noller 1985). Behavioral findings obtained in this experiment may, in fact, further add to this body of evidence. The obtained behavioral data showed that, while performing the verbal valence task, participants based their judgments mainly on the verbal information, whereas during the nonverbal valence task their judgments were based mainly on the nonverbal information. These results indicate that, overall, participants were able to follow the instructions correctly and voluntarily shifted their attention between the two sources of information. However, interaction effects observed between task instructions and both sources of information revealed that valence ratings obtained during the verbal valence task were influenced by the nonverbal cues, whereas the ratings obtained during the nonverbal valence task were not influenced by the verbal information. A previous study by our group (Jacob et al. 2012b) corroborates this general pattern of results and indicated that even when attention is not explicitly drawn toward nonverbal cues, nonverbal information shapes our impression of the overall emotional state of others (cf. INDI = 89.46%, INDINV = 93.23%, and INDIV = 21.37%). These findings are further supported by the observation that the mean valence ratings obtained for each stimulus in our previous study correlated more strongly with the mean valence ratings obtained during the nonverbal valence task (as compared with correlations with the verbal valence task). Taken together, these findings argue for a strong implicit influence of nonverbal cues on judgments of verbal and overall valence, which may be attributed to the greater salience of nonverbal cues compared with verbal signals. This assumed greater salience may, in turn, render nonverbal information more difficult to suppress, thus affecting our judgments even in situations where nonverbal cues are not in the focus of attention.

Cerebral Activations

Building on previous research findings delineating functional characteristics of several brain regions involved in the perception of affective nonverbal cues (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011; Jacob et al. 2012a), our first hypothesis with regard to cerebral activations was that the stimulus-driven processing of affective nonverbal information would lead to stronger responses within the bilateral amygdalae, FFA, and TVA. With the exception of the left FFA, this hypothesis was confirmed. The absence of significant responses within the left FFA might be due to the small ROI mask for the left FFA, which greatly reduced the search volume. The whole-brain analysis, moreover, indicated stronger responses of the posterior parts of the right TVA. Post hoc comparisons conducted to further explore the observed effects identified the respective stimulus-driven activation to be modulated by the intensity, not the valence, of the respective cue—suggesting that this task-independent boosting of brain responses occurs across a broad spectrum of nonverbal emotional expressions, including displays of both positive (i.e. happy) and negative (i.e. angry) emotional states. Combining these observations with previous suggestions linking the activated brain regions to perceptual analysis (e.g. FFA, TVA; Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011), our results may serve to underline the recent notion that emotional attention operates in part by amplifying sensory processing (Vuilleumier 2005; Vuilleumier and Huang 2009; Pourtois et al. 2012), generating “a more robust and sustained representation of emotional stimuli within the sensory pathways, yielding a stronger weight in the competition for attentional resources and prioritized access to awareness, relative to the weaker signals generated by any competing neutral stimuli” (Vuilleumier 2005, p. 586).

In a second stage of the analysis, we were interested in exploring how these brain responses associated with the processing of nonverbal emotional cues might be influenced by different processing conditions modulating the degree of voluntary attention devoted to nonverbal signals. Building on previous research findings (Wildgruber et al. 2006, 2009; Brück, Kreifelts, Wildgruber 2011), we hypothesized that while the amygdala, FFA, and TVA may play a role in the processing of nonverbal information regardless of task instructions, the right pSTC may contribute to a task-related analysis of nonverbal information when nonverbal affective cues are in the focus of attention. This hypothesis could not be confirmed by our data.

Considering, however, processing conditions during which attention is deliberately directed away from nonverbal cues, we assumed that—due to their heightened salience in emotion communication—nonverbal affective cues would be registered even when outside the focus of attention, and that these involuntary modes of processing would be reflected in increasing activation of the DLPFC, arMFC, and limbic system (particularly the amygdala). The hypothesis could be confirmed only for the DLPFC (activation clusters: Right superior frontal cortex extending into the right middle frontal cortex and left middle frontal cortex): While nonverbal cues did not affect DLPFC activity during explicit processing conditions (i.e. nonverbal valence task), nonverbal expressions of emotions encountered when asked to disregard nonverbal cues (i.e. verbal valence task) increased DLPFC activation.

As previously discussed in the context of the obtained behavioral data, the proposed greater salience of nonverbal signals might be reflected in an attentional bias toward nonverbal cues associated with increasing difficulties to suppress nonverbal signals. These efforts to suppress the processing of nonverbal signals, in turn, might be mirrored in an increasing recruitment of brain regions implicated in attentional control mechanisms, such as the DLPFC. This assumption is in line with the proposition that DLPFC involvement is of particular importance in situations where it is harder to disregard task-irrelevant salient distractors (Compton 2003; Compton et al. 2003) and that attentional control by the DLPFC is engaged when there “is a need to override automatic or intrinsic attentional biases” (Banich et al. 2000, p. 7). It seems noteworthy that, even despite the recruitment of additional cerebral resources related to attentional control, behavioral measures obtained during these processing conditions indicated that the influence of nonverbal cues on behavior could not be entirely counteracted. Again, this is indicative of the presence of a nonverbal dominance in emotion perception.

Another goal of our study was to examine individual differences in the ability to shift attention among competing sources of emotional information and to explore the cerebral correlates of this ability. Results indicate a significant positive correlation between task-dependent right amygdala activation and individual differences in the ability to shift attention between verbal and nonverbal cues (INDIDIFF): Participants who could easily switch between both sources of information showed stronger right amygdala responses during the explicit evaluation of nonverbal signals than those who experienced difficulties in switching their focus of attention, whereas the amygdala responses of the 2 groups did not differ during the verbal valence task. The different activation patterns during the nonverbal valence task thus indicate a variable contribution of the right amygdala to the explicit processing of nonverbal emotional cues, which might be attributed to individual characteristics: A highly developed ability to switch attention between verbal and nonverbal cues seems to be associated with an up-regulation of the right amygdala during explicit emotional judgments of nonverbal cues, whereas poorer task-switching abilities seem to be associated with a down-regulation of this limbic structure during the explicit evaluation of nonverbally conveyed emotions.
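
As a schematic of this individual-differences analysis, the following sketch correlates a per-subject switching score with right amygdala contrast estimates from the nonverbal valence task and compares subgroups formed by a median split. All values are simulated placeholders rather than the study's data; in particular, the assumption that higher INDIDIFF scores index easier switching is made here for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n_subjects = 24  # hypothetical

# Simulated switching scores (assumed here: higher = easier switching
# between verbal and nonverbal cues) and simulated right amygdala
# contrast estimates during the nonverbal valence task.
indidiff = rng.normal(0.0, 1.0, n_subjects)
amygdala_beta = 0.4 * indidiff + rng.normal(0.0, 0.8, n_subjects)

# Correlation between switching ability and amygdala response.
r, p = stats.pearsonr(indidiff, amygdala_beta)
print(f"r = {r:.2f}, p = {p:.3f}")

# Median split into good vs. poor switchers, compared with a
# two-sample t-test on the amygdala estimates.
median = np.median(indidiff)
good = amygdala_beta[indidiff >= median]
poor = amygdala_beta[indidiff < median]
t, p_group = stats.ttest_ind(good, poor)
print(f"group difference: t = {t:.2f}, p = {p_group:.3f}")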

The observed right-lateralized amygdala activity (for detailed discussions of amygdala lateralization, see Baas et al. 2004; Costafreda et al. 2008; Sergerie et al. 2008) is in line with the assumption "that the right amygdala is part of a dynamic emotional stimulus detection system" (Wright et al. 2001, p. 379). The right amygdala has been related to the fast processing of visual emotional stimuli (Markowitsch 1998) and may subserve the initiation of automatic reactions (Gläscher and Adolphs 2003), whereas left amygdala activity has been related to the processing of language (Markowitsch 1998; Costafreda et al. 2008). However, as no statistical analyses were performed to directly test for laterality differences, our observations of right-lateralized amygdala responses ought to be interpreted cautiously.

Besides underlining a role of the amygdala in attention modulation, the regulation of amygdala activation linked to differences in a person's task-switching ability may further support the idea that individual differences influence responses of this brain structure: Amygdala activation associated with the processing of emotional signals, for instance, is known to be affected by personality characteristics (Canli et al. 2002; Brück, Kreifelts, Kaza, et al. 2011). Besides the Big Five personality traits, factors such as participants' anxiety levels may also modulate amygdala activity (Bishop et al. 2004; Etkin et al. 2004; Cooney et al. 2006).

Conclusions

Both behavioral results and cerebral activation patterns obtained in this study revealed implicit nonverbal effects on verbal valence judgments, suggesting an involuntary processing of nonverbal affective signals even in situations where attention was deliberately directed away from nonverbal cues. These implicit nonverbal effects may indicate a greater salience of nonverbal compared with verbal cues, which could have induced an attentional bias toward nonverbal signals associated with difficulties in suppressing the nonverbal cues. On the one hand, an attentional bias toward nonverbal cues might be beneficial in communicative situations: Nonverbal signals are said to be less controllable (and are therefore sometimes termed "leaky" cues; Ekman and Friesen 1969; Rosenthal and DePaulo 1979) and are thus assumed to convey emotions more authentically than verbal expressions. As a consequence, relying on nonverbal information presumably enables more successful social interactions.

On the other hand, some situations demand shifting the focus of attention away from nonverbal stimuli and toward other co-occurring stimuli. A car driver, for example, has to focus on environmental (and especially nonliving) stimuli such as cars, traffic lights, and road signs, and to ignore, for instance, the nonverbal emotional expressions of a front-seat passenger or of pedestrians. Focusing on the angry face of a pedestrian waiting impatiently for a green light, instead of on the traffic, might result in a collision with a braking car ahead. Therefore, the ability to modulate the amygdala, for which "a general role […] in the detection of innate, biologically and socially relevant information" (Sergerie et al. 2008, p. 823) has been suggested, is needed to shift attention away from biologically predetermined stimuli toward other relevant cues. This allows more flexible, dynamic behavior and adaptation to changes in the environment. In sum, there might exist a trade-off between a "basic disposition" of the emotional attention system, on the one hand, and attentional flexibility, on the other.

Building on these results in healthy participants, future studies should include psychiatric patients with impairments in emotion communication. Of particular interest, in this context, might be the examination of patient groups diagnosed with anxiety disorders and/or depression, as these patients often show pronounced attentional biases. Using the current experimental paradigm, one might determine whether these patients also differ from healthy controls with regard to an attentional bias toward nonverbal cues and shed light upon the underlying neural correlates. Linking anatomical structures to pathological biases might not only broaden our understanding of the cerebral mechanisms that underlie these behaviors but might also help develop training programs: Neurofeedback, for instance, could help patients to self-regulate these structures and thereby shift their attentional bias toward nonverbal cues into the range observed in healthy participants.

Notes

Conflict of Interest: None declared.

References

Adolphs R. 2002. Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav Cogn Neurosci Rev. 1:21-62.

Argyle M, Alkema F, Gilmour R. 1971. The communication of friendly and hostile attitudes by verbal and non-verbal signals. Eur J Soc Psychol. 1:385-402.

Argyle M, Salter V, Nicholson H, Williams M, Burgess P. 1970. The communication of inferior and superior attitudes by verbal and non-verbal signals. Br J Soc Clin Psychol. 9:222-231.

Baas D, Aleman A, Kahn RS. 2004. Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Res Brain Res Rev. 45:96-103.

Banich MT, Milham MP, Atchley RA, Cohen NJ, Webb A, Wszalek T, Kramer AF, Liang Z, Barad V, Gullett D, et al. 2000. Prefrontal regions play a predominant role in imposing an attentional "set": evidence from fMRI. Brain Res Cogn Brain Res. 10:1-9.

Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B. 2000. Voice-selective areas in human auditory cortex. Nature. 403:309-312.

Bishop SJ, Duncan J, Lawrence AD. 2004. State anxiety modulation of the amygdala response to unattended threat-related stimuli. J Neurosci. 24:10364-10368.

Boersma P. 2001. Praat, a system for doing phonetics by computer. Glot Int. 5:341-345.

Brück C, Kreifelts B, Kaza E, Lotze M, Wildgruber D. 2011. Impact of personality on the cerebral processing of emotional prosody. Neuroimage. 58:259-268.

Brück C, Kreifelts B, Wildgruber D. 2011. Emotional voices in context: a neurobiological model of multimodal affective information processing. Phys Life Rev. 8:383-403.

Canli T, Sivers H, Whitfield SL, Gotlib IH, Gabrieli JD. 2002. Amygdala response to happy faces as a function of extraversion. Science. 296:2191.

Compton RJ. 2003. The interface between emotion and attention: a review of evidence from psychology and neuroscience. Behav Cogn Neurosci Rev. 2:115-129.

Compton RJ, Banich MT, Mohanty A, Milham MP, Herrington J, Miller GA, Scalf PE, Webb A, Heller W. 2003. Paying attention to emotion: an fMRI investigation of cognitive and emotional Stroop tasks. Cogn Affect Behav Neurosci. 3:81-96.

Cooney RE, Atlas LY, Joormann J, Eugene F, Gotlib IH. 2006. Amygdala activation in the processing of neutral faces in social anxiety disorder: is neutral really neutral? Psychiatry Res. 148:55-59.

Costafreda SG, Brammer MJ, David AS, Fu CH. 2008. Predictors of amygdala activation during the processing of emotional stimuli: a meta-analysis of 385 PET and fMRI studies. Brain Res Rev. 58:57-70.

Dew D, Jensen PJ. 1977. Phonetic processing: the dynamics of speech. Columbus (OH): Charles E. Merrill Publishing Company.

Dietrich S, Ackermann H, Szameitat DP, Alter K. 2006. Psychoacoustic studies on the processing of vocal interjections: how to disentangle lexical and prosodic information? Prog Brain Res. 156:295-302.

Dietrich S, Hertrich I, Alter K, Ischebeck A, Ackermann H. 2007. Semiotic aspects of human nonverbal vocalizations: a functional imaging study. Neuroreport. 18:1891-1894.

Dietrich S, Hertrich I, Alter K, Ischebeck A, Ackermann H. 2008. Understanding the emotional expression of verbal interjections: a functional MRI study. Neuroreport. 19:1751-1755.

Ekman P, Friesen WV. 1969. Nonverbal leakage and clues to deception. Psychiatry. 32:88-106.

Epstein R, Harris A, Stanley D, Kanwisher N. 1999. The parahippocampal place area: recognition, navigation, or encoding? Neuron. 23:115-125.

Etkin A, Klemenhagen KC, Dudman JT, Rogan MT, Hen R, Kandel ER, Hirsch J. 2004. Individual differences in trait anxiety predict the response of the basolateral amygdala to unconsciously processed fearful faces. Neuron. 44:1043-1055.

Friston KJ, Glaser DE, Henson RN, Kiebel S, Phillips C, Ashburner J. 2002. Classical and Bayesian inference in neuroimaging: applications. Neuroimage. 16:484-512.

Friston KJ, Worsley KJ, Frackowiak RSJ, Mazziotta JC, Evans AC. 1993. Assessing the significance of focal activations using their spatial extent. Hum Brain Mapp. 1:210-220.

Geisser S, Greenhouse SW. 1958. An extension of Box's results on the use of the F distribution in multivariate analysis. Ann Math Statist. 29:885-891.

Gläscher J, Adolphs R. 2003. Processing of the arousal of subliminal and supraliminal emotional stimuli by the human amygdala. J Neurosci. 23:10274-10282.

Jacob H, Kreifelts B, Brück C, Erb M, Hösl F, Wildgruber D. 2012a. Cerebral integration of verbal and nonverbal emotional cues: impact of individual nonverbal dominance. Neuroimage. 61:738-747.

Jacob H, Kreifelts B, Brück C, Nizielski S, Schütz A, Wildgruber D. 2012b. Nonverbal signals speak up: association between perceptual nonverbal dominance and emotional intelligence. Cogn Emot.

Kanwisher N, McDermott J, Chun MM. 1997. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci. 17:4302-4311.

Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH. 2003. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage. 19:1233-1239.

Markowitsch HJ. 1998. Differential contribution of right and left amygdala to affective information processing. Behav Neurol. 11:233-244.

McBride G. 1975. Interactions and the control of behavior. In: Kendon A, Harris RM, Key MR, editors. Organization of behavior in face-to-face interaction. 1st ed. The Hague: Mouton & Co. p. 415-423.

McNeill D. 1970. The acquisition of language: the study of developmental psycholinguistics. New York: Harper & Row.

Mehrabian A, Ferris SR. 1967. Inference of attitudes from nonverbal communication in two channels. J Consult Psychol. 31:248-252.

Mehrabian A, Wiener M. 1967. Decoding of inconsistent communications. J Pers Soc Psychol. 6:109-114.

Meyer M, Zysset S, von Cramon DY, Alter K. 2005. Distinct fMRI responses to laughter, speech, and sounds along the human peri-sylvian cortex. Brain Res Cogn Brain Res. 24:291-306.

Noller P. 1985. Video primacy—a further look. J Nonverbal Behav. 9:28-47.

Oldfield RC. 1971. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 9:97-113.

Parkinson B. 1996. Emotions are social. Br J Psychol. 87:663-683.

Pourtois G, Schettino A, Vuilleumier P. 2012. Brain mechanisms for emotional influences on perception and attention: what is magic and what is not. Biol Psychol.

Rosenthal R, DePaulo BM. 1979. Sex differences in eavesdropping on nonverbal cues. J Pers Soc Psychol. 37:273-285.

Sergerie K, Chochol C, Armony JL. 2008. The role of the amygdala in emotional processing: a quantitative meta-analysis of functional neuroimaging studies. Neurosci Biobehav Rev. 32:811-830.

Steiger JH. 1980. Tests for comparing elements of a correlation matrix. Psychol Bull. 87:245-251.

Szameitat DP, Kreifelts B, Alter K, Szameitat AJ, Sterr A, Grodd W, Wildgruber D. 2010. It is not always tickling: distinct cerebral responses during perception of different laughter types. Neuroimage. 53:1264-1271.

Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M. 2002. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage. 15:273-289.

Van Kleef GA. 2009. How emotions regulate social life. Curr Dir Psychol Sci. 18:184-188.

Vuilleumier P. 2005. How brains beware: neural mechanisms of emotional attention. Trends Cogn Sci. 9:585-594.

Vuilleumier P, Huang Y-M. 2009. Emotional attention. Curr Dir Psychol Sci. 18:148-152.

Wildgruber D, Ackermann H, Kreifelts B, Ethofer T. 2006. Cerebral processing of linguistic and emotional prosody: fMRI studies. Prog Brain Res. 156:249-268.

Wildgruber D, Ethofer T, Grandjean D, Kreifelts B. 2009. A cerebral network model of speech prosody comprehension. Int J Speech Lang Pathol. 11:277-281.

Worsley KJ, Marrett S, Neelin P, Vandal AC, Friston KJ, Evans AC. 1996. A unified statistical approach for determining significant signals in images of cerebral activation. Hum Brain Mapp. 4:58-73.

Wright CI, Fischer H, Whalen PJ, McInerney SC, Shin LM, Rauch SL. 2001. Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. Neuroreport. 12:379-383.

Author notes

H.J. and C.B. contributed equally to this work.