Abstract

Threat detection is a role the amygdala performs well, as evidenced by its increased response to fearful faces in human neuroimaging studies. A critical element of the fearful face is an increase in eye white area (EWA), hypothesized to be a significant cue in activating the amygdala. However, another important social signal that can increase EWA is a lateral shift in gaze direction, which also serves to orient attention to potential threats. It is unknown how the amygdala differentiates between these increases in EWA and those that are specifically associated with fear. Using functional magnetic resonance imaging, we show that the left amygdala distinguished between fearful eyes and gaze shifts despite similar EWA increases, whereas the right amygdala was less discriminating. Additional analyses also revealed selective hemispheric response patterns in the left fusiform gyrus. Our data show clear hemispheric differences in EWA-based fear activation, suggesting the existence of parallel mechanisms that code for emotional face information.

INTRODUCTION

In humans, the ability to recognize facial expressions is critical for the transmission of emotional and social information (Adolphs, 1999). Detecting emotion—especially fear—from another's face can aid in identifying potential threats and allow for a rapid formulation of appropriate behavioral responses. Previous neuroimaging studies have demonstrated that fearful faces strongly and reproducibly activate the amygdala, even when subjects are not aware a fearful face was presented (Whalen et al., 1998; Morris et al., 1999). Additionally, patients with lesions to the amygdala have been reported to show greater deficits in the recognition of fearful faces relative to other emotions (Adolphs et al., 1994). These deficits have been suggested to occur because patients are unable to use the information displayed by the eyes, as guiding them to look at the eye region leads to a temporary increase in the recognition of fearful expressions (Adolphs et al., 2005). In healthy individuals, information from the eye area appears to be especially useful in discriminating fear from other emotional expressions (Smith et al., 2005). Furthermore, fearful eyes shown in isolation and without context elicit functional magnetic resonance imaging (fMRI) activation in the amygdala (Morris et al., 2002; Whalen et al., 2004), further underscoring the importance of the eyes in the detection of fear.

What makes fearful eyes so salient? It has been suggested that the increase in scleral field size, or eye white area (EWA), alone is enough to elicit amygdala activation (Whalen et al., 2004). In particular, Whalen and colleagues reported amygdala activity when fearful eyes were shown in a backward-masking paradigm, indicating that the amygdala can detect changes in EWA even when subjects are unaware of the stimuli being presented. The mechanism underlying the processing of EWA changes in the amygdala is unclear, although there is evidence that the amygdala can respond to coarse representations of faces made of low spatial frequency information as opposed to those depicting only high spatial frequency information (Vuilleumier et al., 2003). This implies that the amygdala may use crude visual information to facilitate rapid detection and, therefore, may act as a simple detector of EWA increases, since the whites of the fearful eye are extremely salient—even when seen at a distance. From an evolutionary point of view, such a mechanism could be extremely advantageous: the cue is strikingly visible, can be detected rapidly and does not require fine-detail processing, thus allowing for a quick evasive response if necessary.

Yet, fear is not the only facial expression that exhibits an increase in EWA. A change in gaze direction, for example, caused by a shift in iris and pupil position can increase EWA as well. Similar to fear, gaze shifts can function as exogenous cues to indicate the presence of potential threats; thus, it is not surprising that the amygdala has been shown to be sensitive to gaze direction as well as gaze shifts (Kawashima et al., 1999; Hooker et al., 2003). What is not known is whether the amygdala is able to differentiate fear from other expressions that also create increases in EWA. If the amygdala responds only to fearful cues, it should activate preferentially to eyes depicting fear and not those associated with gaze shifts. On the other hand, if the amygdala relies on low spatial frequency information, as indicated by Vuilleumier et al. (2003), it may lack the capacity to distinguish between fear and other similar conditions associated with increases in EWA.

Here we used event-related fMRI to investigate how the amygdala responds to different types of EWA change. Instead of showing static images of eyes, the impression of a dynamic facial expression change was created by presenting eyes with a neutral expression both immediately before and after the presentation of each condition. Subjects viewed eye stimuli that portrayed an increase in EWA (fear, lateral shift in gaze), a decrease in EWA (happy) or no EWA change (control). We hypothesized that if the amygdala uses a mechanism that only detects increases in EWA, we should see similar activation to both fear and gaze shift conditions. Alternatively, if the amygdala is selective for fearful eyes exclusively, there should be a greater response to fear than to the gaze shift.

METHODS

Participants

Thirteen neurologically normal subjects (six female, all right-handed, aged 22–33 years) consented to participate in a study approved by the Institutional Review Board of West Virginia University. All subjects had either normal or corrected-to-normal vision.

Stimuli

Stimuli were selected from the JACFEE/JACNeuF series of faces by Ekman and Matsumoto (Paul Ekman Group LLC, Berkeley, CA, USA) and could belong to one of four categories (conditions): fear, gaze, happy and motion control (see Supplementary Figure 1). All images were altered in Photoshop 7.0 with an initial conversion to grayscale and then a uniform cropping to leave an 11° × 3° rectangle centered on the eyes. Eyebrows and other surrounding facial information were cropped out as these can act as important cues in face processing (Sadr et al., 2003). Emotional stimuli were created using six fearful and six happy faces; six gaze-shifted stimuli were created from neutral faces that were altered to look either to the left or the right of the observer. All categories used six different identities and were equally balanced with respect to gender and race. Motion control stimuli were created by shifting the cropping area 0.25° upwards or downwards on each face while maintaining the central positioning of the rectangle. Neutral eye stimuli for each facial identity for each condition were also created using the same process. Mean luminance and contrast were equated for all stimuli. Stimuli were then presented on a black background subtending 30° × 23° of visual angle.
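For illustration, a minimal Python sketch of this stimulus preparation (grayscale conversion, eye-centered cropping and luminance/contrast equalization) is given below. The file name, crop coordinates and target luminance/contrast values are placeholders, not values reported in the study, and this is not the authors' actual Photoshop workflow.

    # Sketch of the stimulus preparation described above: grayscale conversion,
    # cropping to an eye-centered rectangle, and equating mean luminance/contrast.
    # Assumes Pillow and NumPy; file names and pixel coordinates are placeholders.
    import numpy as np
    from PIL import Image

    def prepare_eye_stimulus(path, crop_box, target_mean=128.0, target_std=30.0):
        img = Image.open(path).convert("L")           # convert to grayscale
        eyes = np.asarray(img.crop(crop_box), float)  # crop eye-centered rectangle
        # Equate mean luminance and contrast (z-score, then rescale to targets).
        eyes = (eyes - eyes.mean()) / (eyes.std() + 1e-8)
        eyes = np.clip(eyes * target_std + target_mean, 0, 255)
        return Image.fromarray(eyes.astype(np.uint8))

    # Example: crop box (left, upper, right, lower) in pixels for one face image.
    stim = prepare_eye_stimulus("face_fear_01.png", crop_box=(60, 140, 460, 250))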

Both the happy and motion control eyes served as additional stimulus conditions that would allow us to interpret the different possible mechanisms used by the amygdala when processing changes in EWA: (i) if the amygdala responds to a net change in EWA (regardless of the direction of this change), then it should respond similarly to the fear, gaze shift and happy conditions; (ii) if the amygdala responds only to an increase in EWA, then it should respond to the fear and gaze shift conditions but not to happy eyes or the motion control; or (iii) if the amygdala responds simply to rapid facial changes in the general vicinity of the eyes, then it should respond similarly to all conditions. While this last mechanism is inconsistent with the amygdala being selective for fear, a number of studies have suggested that the amygdala may respond to pattern motion or dynamic changes to the face (van der Gaag et al., 2007).

To determine the amount of EWA change from neutral for each of the changes in expression (fear, gaze shift and happiness), the eye white perimeter was manually traced and the number of pixels within this area was determined. For each identity, the EWA pixel difference between the neutral and condition stimuli was calculated. These pixel differences were then averaged within their respective conditions to ensure that there were no significant differences in EWA changes between the fear and gaze conditions (Figure 1).

Fig. 1
Mean (± s.e.m., n = 6) percent EWA change from neutral plotted as a function of condition. Fear and gaze shift did not differ significantly from one another, but both differed significantly from happy. As the motion control consisted of a vertical translation of neutral eyes, there was no net change in EWA.
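A sketch of the EWA measurement described above is given below: the manually traced eye-white regions are represented here as hypothetical binary masks, from which pixel counts and percent changes from neutral are computed and averaged within condition. The mask variables and function names are illustrative, not part of the original analysis code.

    # Sketch of the EWA change computation: count eye-white pixels in a binary
    # mask traced for each stimulus, take the difference from the matching
    # neutral image, and average the percent change within each condition.
    import numpy as np

    def ewa_pixels(mask):
        """Number of pixels inside the traced eye-white perimeter (boolean mask)."""
        return int(np.count_nonzero(mask))

    def percent_ewa_change(condition_masks, neutral_masks):
        """Percent EWA change from neutral, one value per identity."""
        changes = []
        for cond, neut in zip(condition_masks, neutral_masks):
            n_neut = ewa_pixels(neut)
            changes.append(100.0 * (ewa_pixels(cond) - n_neut) / n_neut)
        return np.array(changes)

    # Hypothetical usage with six identities per condition:
    # fear_change = percent_ewa_change(fear_masks, neutral_masks)    # ~positive
    # gaze_change = percent_ewa_change(gaze_masks, neutral_masks)    # ~positive
    # happy_change = percent_ewa_change(happy_masks, neutral_masks)  # ~negative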

Task design

The task consisted of an event-related design spanning five runs. Each run contained 40 trials, and only one of the four conditions was shown per trial (Figure 2). Each trial began with a red circle or square (both 5.5° × 5.5°) presented centrally for 500 ms. At the offset of the shape, a train of eye stimuli was presented for 1900 ms: neutral eyes first appeared for 300–1200 ms and were then followed by one of the conditions for 400 ms, with the neutral eyes appearing again for the remaining 300–1200 ms. At the end of each trial, a response screen appeared prompting subjects to press a button indicating which shape they saw at the beginning of the trial. Within a single trial, the identity of the condition stimulus matched that of the neutral eyes presented immediately before and after each condition; this allowed for a smooth transition between the eye stimuli and also limited motion. Trials were separated by an intertrial interval consisting of a black screen for 2–7 s. Each condition was shown 50 times across all five runs [50 fear; 50 gaze (25 left, 25 right); 50 happy; 50 motion control (25 up, 25 down)]. Stimuli were delivered using Presentation software (Version 9.90, Neurobehavioral Systems, Albany, CA, USA) through Avotec Silent Vision 4000 fiber-optic eyepieces (Avotec Inc., Stuart, FL, USA) mounted on the scanner headcoil.

Fig. 2
Example of a single trial. Subjects viewed one of the four conditions, in this case fear, between sets of neutral eyes, all presented for a total of 1900 ms. Eye stimuli were presented during the delay period of a simple match-to-sample task.
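A sketch of how one trial's timing could be assembled under the constraints above (500 ms shape cue; a 1900 ms eye train with a 400 ms condition embedded between neutral segments of 300–1200 ms; 2–7 s intertrial interval) is shown below. This is a generic Python illustration, not the Presentation script used in the study; the event labels and functions are assumptions.

    # Sketch of the trial timing described above. The 1900 ms eye train is split
    # into neutral (300-1200 ms), condition (400 ms), and neutral (remainder).
    import random

    def build_trial(condition, rng=random):
        pre_neutral = rng.randint(300, 1200)             # ms before the condition
        post_neutral = 1900 - 400 - pre_neutral          # remainder of the train
        return [
            ("shape_cue", 500),
            ("neutral_eyes", pre_neutral),
            (condition, 400),
            ("neutral_eyes", post_neutral),
            ("response_screen", None),                   # until button press
            ("iti_blank", rng.randint(2000, 7000)),      # 2-7 s intertrial interval
        ]

    # One run of 40 trials, 10 per condition (50 per condition across five runs).
    conditions = ["fear", "gaze", "happy", "motion_control"] * 10
    random.shuffle(conditions)
    run = [build_trial(c) for c in conditions]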

Subjects were told that they would see images of eyes and were not given any further information about these images. Instead, they were instructed to focus on the (task-relevant) shapes presented at the beginning and end of each trial as well as to fixate on the center of the screen. Instructing subjects to actively search for information within eyes and faces can alter or bias activation in the amygdala (Hooker et al., 2003; Phillips et al., 2004), and to avoid this possible confound a delayed match-to-sample shape task was used for this experiment. At the end of the experiment, participants were debriefed about their viewing experience and showed no consistency in the ability to report the presence of emotions from the eye stimuli, suggesting that they were indeed paying attention to the delayed match-to-sample task involving non-eye stimuli and were not aware of the aims of the experiment.

Imaging procedure and analysis

Functional whole-brain axial volumes of BOLD activity were acquired on a 3 Tesla Horizon LX MRI scanner (GE Medical Systems, Milwaukee, WI, USA). Twenty-two axial slices (4 mm thick, 1 mm gap) were obtained using the following parameters: TE/TR = 25/2000 ms; FOV = 240 mm (in-plane resolution = 1.875 mm × 1.875 mm); bandwidth = 125. We also acquired high-resolution spoiled gradient-recalled volumes (SPGR; FOV = 240 mm; matrix = 256 × 256; voxel size = 1.2 mm × 0.9375 mm × 0.9375 mm; 124 slices with 50% overlap). Functional images were acquired using a gradient-echo spiral in-out sequence (Glover and Law, 2001) for 240 volumes/run. Reconstructed functional images were composed of spiral in-out trajectories, optimizing sampling from brain regions prone to susceptibility artifacts and MR signal dropout.

Data were analyzed using SPM2 (Wellcome Department of Imaging Neuroscience, London, UK). Functional volumes were coregistered to anatomical images and then corrected for motion and slice-timing differences. The SPGR volume was normalized to the Montreal Neurological Institute (MNI) template and resliced to 2 mm isotropic voxels. The parameters determined for this normalization and reslicing were then applied to the functional images. Data from the functional volumes were smoothed with an 8 mm FWHM Gaussian kernel. A high-pass temporal filter of 1/128 s was applied to the fMRI data to remove any potential low-frequency drifts in MR signal. To estimate the response to each condition, we separately modeled the hemodynamic response to each stimulus type as a delta function located at the time point within each trial at which that condition occurred and then convolved these time courses with a hemodynamic response function. Also included in the regression model were six motion covariates (three translation and three rotation parameters) determined from motion correction and a constant term to account for potential drift. We examined positive responses to each of the conditions as well as contrasts between the conditions.
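As an illustration of this modeling step, the sketch below builds condition regressors from stick (delta) functions convolved with a double-gamma hemodynamic response and combines them with six motion parameters and a constant term. The HRF parameters are generic defaults and the code is not the SPM2 implementation; only the TR and number of volumes come from the text.

    # Sketch of the regression model: stick functions at condition onsets are
    # convolved with a canonical double-gamma HRF, then combined with the six
    # motion parameters and a constant term into a design matrix.
    import numpy as np
    from scipy.stats import gamma

    TR = 2.0          # s, repetition time used in this study
    N_VOLS = 240      # volumes per run

    def canonical_hrf(tr, duration=32.0):
        t = np.arange(0, duration, tr)
        # Generic double-gamma shape (peak ~5 s, undershoot ~15 s).
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    def condition_regressor(onsets_s, tr=TR, n_vols=N_VOLS):
        sticks = np.zeros(n_vols)
        sticks[(np.asarray(onsets_s) / tr).astype(int)] = 1.0   # delta functions
        return np.convolve(sticks, canonical_hrf(tr))[:n_vols]

    def design_matrix(onsets_by_condition, motion_params):
        # motion_params: (n_vols, 6) array from motion correction.
        cols = [condition_regressor(on) for on in onsets_by_condition.values()]
        cols += list(np.asarray(motion_params).T)
        cols.append(np.ones(N_VOLS))                            # constant term
        return np.column_stack(cols)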

Hypothesis-generated search volumes of interest (VOIs) were selected for analysis. For the left and right amygdala VOIs, two 8 mm radius spheres were created, centered at MNI coordinates ± 20, 0, −20, based on a review of previous studies that reported coordinates of activation of the amygdala to emotional expression and eye gaze (see Supplementary Table 1). The WFU_PickAtlas software (ANSIR Core; Wake Forest School of Medicine, Wake Forest, NC, USA) was then used to create VOIs for each fusiform gyrus (FG), frontal (orbital and inferior) cortex (Inf/OFC), intraparietal sulcus (IPS) and superior temporal sulcus (STS) in order to examine the response to the different eye-change stimuli in regions that have been demonstrated to be modulated by emotional facial expressions (Narumoto et al., 2001; for review, see Adolphs, 2002).
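For the amygdala spheres, a sketch of constructing an 8 mm radius spherical mask around MNI ±20, 0, −20 on a 2 mm grid is shown below. The grid shape and voxel-to-MNI affine are the standard 2 mm MNI152 values used purely for illustration, not the header of the study's normalized images, and this is not how WFU_PickAtlas builds its masks.

    # Sketch of an 8 mm spherical VOI centered on MNI coordinates, built on a
    # 2 mm isotropic grid. The affine maps voxel indices to MNI millimeters.
    import numpy as np

    def sphere_voi(center_mm, radius_mm, shape, affine):
        i, j, k = np.indices(shape)
        vox = np.stack([i, j, k, np.ones(shape)], axis=-1)       # homogeneous coords
        mm = vox @ affine.T                                      # voxel -> MNI mm
        dist = np.linalg.norm(mm[..., :3] - np.asarray(center_mm), axis=-1)
        return dist <= radius_mm                                 # boolean mask

    shape = (91, 109, 91)                                        # standard 2 mm MNI grid
    affine = np.array([[-2, 0, 0,   90],
                       [ 0, 2, 0, -126],
                       [ 0, 0, 2,  -72],
                       [ 0, 0, 0,    1]], float)
    left_amygdala = sphere_voi((-20, 0, -20), 8, shape, affine)
    right_amygdala = sphere_voi((20, 0, -20), 8, shape, affine)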

For the amygdala VOIs, the mean response to each stimulus type relative to baseline was examined using a significance threshold of P < 0.05 (corrected for the search VOI). For comparisons between conditions in the amygdala, voxels that showed a positive response for each condition vs baseline at P < 0.05 (uncorrected) were identified, and then contrasts between conditions were thresholded at P < 0.05 (uncorrected) and more than four contiguous voxels. A liberal threshold for examining the differences between conditions within the amygdala was chosen in order to minimize the risk of Type II errors, given that this region and the possibility of its selectivity were the subject of the explicit hypotheses of this study. By only comparing positive responses to each of the stimuli we minimized the risk that differences found were due to negative responses to one or more of the eye-change conditions. As the other VOIs we examined were not subject to explicit hypotheses, more conservative thresholds were used. For VOIs other than the amygdala, the four conditions relative to baseline were compared using a significance threshold of P < 0.05 (corrected for search VOI). For comparisons between conditions, voxels that showed a significant positive response for each condition vs baseline at P < 0.05 (uncorrected) were included. Contrasts between the conditions were then thresholded for significance at P < 0.05 (corrected for search VOI) and more than four contiguous voxels.
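The sketch below illustrates the logic of the amygdala contrast thresholding: restrict the analysis to VOI voxels with a positive response to both conditions, threshold the contrast map at P < 0.05 uncorrected, and keep only clusters of more than four contiguous voxels. The p-map inputs and connected-component step are generic assumptions, not the SPM2 internals.

    # Sketch of the VOI thresholding logic: keep voxels with positive responses
    # to both conditions, threshold the contrast p-map at 0.05 (uncorrected), and
    # retain only clusters larger than four contiguous voxels.
    import numpy as np
    from scipy import ndimage

    def thresholded_contrast(p_contrast, p_cond_a, p_cond_b, voi_mask,
                             alpha=0.05, min_cluster=5):
        # Voxels in the VOI with a positive response to each condition vs baseline.
        candidate = voi_mask & (p_cond_a < alpha) & (p_cond_b < alpha)
        supra = candidate & (p_contrast < alpha)
        labels, n = ndimage.label(supra)                 # 3-D connected components
        keep = np.zeros_like(supra)
        for lab in range(1, n + 1):
            cluster = labels == lab
            if cluster.sum() >= min_cluster:             # "more than four" voxels
                keep |= cluster
        return keep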

RESULTS

Behavioral data

Percent accuracy on the delayed match-to-sample task was high and did not differ as a function of condition (mean ± s.d.: fear = 99.54 ± 0.88; gaze shift = 99.69 ± 0.75; happy = 99.03 ± 1.59; motion control = 99.54 ± 0.88; F[3,36] = 0.99, P > 0.40). Reaction time (ms) also did not vary as a function of condition (mean ± s.d.: fear = 557.78 ± 154.10; gaze shift = 555.78 ± 154.10; happy = 551.68 ± 127.54; motion control = 556.25 ± 139.11; F[3,36] = 0.82, P > 0.50).

fMRI activation

Amygdala

We first examined the amygdala responses to EWA differences by analyzing the blood oxygen-level dependent (BOLD) response to each of the four conditions. The left amygdala activated only to fearful eyes (n = 3; paired t-tests one-tailed; P-values corrected for multiple comparisons within amygdala VOI; Table 1; Figure 3). Contrasts confirmed that the left amygdala response to fear was significantly greater than the responses to any other condition (P-values uncorrected; Table 1). Somewhat unexpectedly, the right amygdala responded to all conditions (P-values corrected; Table 1; Figure 3). Contrasts between conditions indicated that there were no significant differences between fear and other conditions in the right amygdala (P-values uncorrected; Table 1).

Fig. 3
Amygdala group activation patterns and percent MR signal change. (A) Left and right amygdala activation compared to baseline. Left amygdala activated only to the fearful eye condition (top left panel) whereas the right amygdala responded to all conditions (all panels). Activation is overlaid on a coronal slice (MNI coordinate: y = −4) from a representative subject. Color scale at bottom indicates t-values. L, Left; R, Right. (B) Percent MR signal change from local cluster maxima for VOIs in the left (x = −22, y = 6, z = −18) and right (x = 24, y = −6, z = −18) amygdala.

Table 1

List of coordinates, t-values, P-uncorrected and P-corrected (FWE) values for centers of activation within each contrast type in the left and right amygdala and FG

Brain region | Contrast | MNI coordinates | t-value | P-uncorrected | P-FWE-corrected
Amygdala | Fear | 26, −6, −18 | 3.99 | 0.001 | <0.043
Amygdala | Fear | −22, −4, −18 | 4.10 | 0.001 | <0.037
Amygdala | Gaze | 24, −6, −20 | 4.40 | 0.001 | <0.031
Amygdala | Gaze | −22, −6, −18 | 2.95 | 0.006 | 0.186
Amygdala | Happy | 24, −6, −18 | 4.63 | 0.001 | <0.029
Amygdala | Happy | −24, −6, −18 | 3.28 | 0.003 | 0.152
Amygdala | Control | 24, −6, −18 | 4.03 | 0.001 | 0.062
Amygdala | Control | −22, −6, −18 | 3.16 | 0.004 | 0.179
Amygdala | Fear vs Gaze | 22, 2, −12 | 2.30 | 0.08 | 0.192
Amygdala | Fear vs Gaze | −20, 4, −14 | 3.16 | <0.004 | 0.057
Amygdala | Fear vs Happy | 28, −4, −22 | 1.65 | 0.012 | 0.359
Amygdala | Fear vs Happy | −20, 4, −14 | 3.28 | <0.003 | 0.037
Amygdala | Fear vs Control | 22, 2, −12 | 1.93 | 0.06 | 0.277
Amygdala | Fear vs Control | −20, 4, −14 | 2.48 | <0.014 | 0.135
Fusiform | Fear | 36, −44, −24 | 8.17 | 0.001 | <0.001
Fusiform | Fear | −40, −56, −22 | 5.81 | 0.001 | <0.009
Fusiform | Gaze | 36, −44, −24 | 6.05 | 0.001 | <0.009
Fusiform | Gaze | −36, −74, −18 | 4.67 | 0.001 | <0.048
Fusiform | Happy | 36, −44, −24 | 5.78 | 0.001 | <0.015
Fusiform | Happy | −40, −58, −22 | 4.58 | 0.001 | <0.047
Fusiform | Control | 36, −44, −24 | 6.14 | 0.001 | <0.010
Fusiform | Control | −36, −70, −16 | 5.65 | 0.001 | <0.018
Fusiform | Fear vs Gaze | 36, −48, −16 | 4.48 | 0.001 | 0.067
Fusiform | Fear vs Gaze | −36, −70, −16 | 5.34 | 0.001 | <0.023
Fusiform | Fear vs Happy | 40, −42, −12 | 3.32 | 0.003 | 0.208
Fusiform | Fear vs Happy | −28, −76, −12 | 5.49 | 0.001 | <0.012
Fusiform | Fear vs Control | 28, −60, −16 | 3.00 | 0.006 | 0.348
Fusiform | Fear vs Control | −36, −52, −20 | 5.11 | 0.001 | <0.023

Our data clearly showed differences in the way the amygdala processes eye information in order to detect changes in EWA, in particular those associated with fear. To further examine this observation using a post hoc analysis, we took the average β-values describing the response for each subject to each of the four conditions from voxels in the left amygdala that showed significant differences between fear and any of the other conditions at the group level using an uncorrected significance level of P < 0.05. We then took the average β-values for each subject for each of the four conditions from voxels from the corresponding location in the right amygdala. A two-way analysis of variance (ANOVA) revealed a significant Hemisphere × Condition interaction (F[3,36] = 2.92, P < 0.04). The main effect of Hemisphere was significant (F[1,12] = 6.70, P < 0.02), indicating that the overall response to the four conditions was consistently larger in the right amygdala than in the left. Additionally, the main effect of Condition was significant (F[3,36] = 3.17, P < 0.03), which was not surprising as this factor contributed to the selection of voxels for the left amygdala.
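The structure of this post hoc test is illustrated by the sketch below, which runs a 2 (Hemisphere) × 4 (Condition) repeated-measures ANOVA on per-subject mean β-values using statsmodels. The array layout, column names and use of statsmodels are assumptions for illustration; the original software used for this ANOVA is not stated in the text.

    # Sketch of the post hoc 2 (Hemisphere) x 4 (Condition) repeated-measures
    # ANOVA on the per-subject mean beta values extracted from the two VOIs.
    # Assumes `betas` has shape (n_subjects, 2 hemispheres, 4 conditions).
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    def hemisphere_by_condition_anova(betas):
        rows = []
        for s in range(betas.shape[0]):
            for h, hemi in enumerate(["left", "right"]):
                for c, cond in enumerate(["fear", "gaze", "happy", "control"]):
                    rows.append({"subject": s, "hemisphere": hemi,
                                 "condition": cond, "beta": betas[s, h, c]})
        df = pd.DataFrame(rows)
        return AnovaRM(df, depvar="beta", subject="subject",
                       within=["hemisphere", "condition"]).fit()

    # print(hemisphere_by_condition_anova(betas))  # F and p for both main effects
    #                                              # and the interaction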

FG

We examined the behavior of other brain regions also known to be involved in gaze and/or face processing, including the FG, to determine whether hemispheric differences between conditions existed outside the amygdala. Here, we found bilateral activation to all conditions relative to baseline (P-values corrected; Table 1; Figure 4A). In the left FG, contrasts between the conditions indicated that the response to fearful eyes was greater than to all other conditions (P-values corrected; Table 1), similar to what was observed in the left amygdala. However, unlike the left amygdala, the left FG responded to all conditions, not just to fear. In the right FG, the difference between fear and the other conditions was not significant (P-values corrected; Table 1).

Fig. 4
Percent MR signal change from FG and IPS. (A) Percent MR signal change from local cluster maxima for VOIs in left (x = −32, y = −58, z = −20) and right (x = 36, y = −44, z = −24) FG. (B) Percent MR signal change from local cluster maxima for VOIs in the left posterior IPS (x = −18, y = −86, z = −4) and right anterior IPS (x = 34, y = −38, z = −50).

Once again, we compared the left and right FG in the same manner as the amygdala by extracting the average β-values from each subject for each of the conditions from voxels in the left FG that showed a significant difference between fear and any of the other conditions, and voxels from the corresponding location in the right FG. Two-way ANOVA revealed an interaction between Condition and Hemisphere that trended towards significance (F[3,36] = 2.57, P = 0.07). The main effect of Condition was significant (F[3,36] = 2.87, P < 0.05), as expected, but the main effect of Hemisphere was not (F[1,12] = 2.44, P > 0.1).

IPS, STS, Inf/OFC

We found bilateral activation to all conditions in the left and right IPS (P-values corrected; Figure 4B; Supplementary Table 2). In the left posterior IPS (pIPS), contrasts between conditions indicated significant differences between fear and gaze but not between fear and control (P-values corrected; Supplementary Table 2). Hence, the left hemisphere showed a degree of differentiation between conditions. Contrasts in the right anterior IPS (aIPS) did not reveal differences between conditions (P-values corrected; Supplementary Table 2), similar to the right FG and amygdala. Finally, the left and right STS and the left and right Inf/OFC responded robustly and similarly in both hemispheres, but their activation was not significantly modulated by Condition (P < 0.05, corrected).

DISCUSSION

There is evidence to suggest that the amygdala uses simple EWA increases to detect the presence of fear in the face of another (Whalen et al., 2004); however, such a mechanism implies that the amygdala might also respond to increases in EWA that are not associated with fear. Our data suggest distinct differences in how the left and right amygdala detect such changes in EWA. The right amygdala showed a significant response to fear and gaze shifts, which were closely matched for EWA increase, as well as to happy and control eyes, where EWA decreased or did not change, respectively. Furthermore, there was no significant difference in the strength of response to these conditions in the right amygdala. In contrast, the left amygdala showed a significant response only to fear, and this activation was greater than that to the other conditions. A post hoc comparison revealed hemispheric differences in the selectivity of the amygdala to changes in the eye region associated with different expressions. These results provide evidence that the right amygdala may act as a coarse detector of eye change, regardless of the emotional and behavioral significance behind the change. In contrast, the left amygdala showed selectivity to eye changes typically associated with fear, suggesting that its activation may be driven by more than just increases in scleral field size and that other features, such as iris and pupil position, may also contribute to the response.

A number of studies have reported unilateral activity in the amygdala, yet the issue of laterality is often not directly addressed because individual experiments differ with respect to stimuli, task design and data analysis. Thus, it remains unclear how the roles of the left and right amygdala differ with respect to processing emotional information. Nonetheless, our data are consistent with previous fear recognition experiments that both directly and indirectly demonstrate differences between the left and right amygdala. Morris et al. (1999) showed that the right amygdala rapidly and non-selectively detects stimuli that pose a potential threat to the observer. Additionally, the right amygdala can mediate the processing of emotional stimuli without awareness (Morris et al., 1999), can be activated by any arousing stimulus (Glascher and Adolphs, 2003) and habituates faster than the left amygdala (Wright et al., 2001), affirming its lack of selectivity and suggesting that it acts as a general detector of overall change. The left amygdala, on the other hand, has indirectly been shown to discriminate between different emotional expressions (Morris et al., 1996; Kim et al., 2003; Whalen et al., 2004), and its response to fearful eyes can be modulated by the facial context in which the eyes appear (Morris et al., 2002). It has also been shown to be sensitive to the interaction between gaze direction and emotional expression (Adams et al., 2003), illustrating a higher level of discrimination compared with that of the right amygdala.

Additionally, fMRI studies have reported overall greater activation to fearful faces compared with neutral faces in the right amygdala than in the left (Noesselt et al., 2005). Such findings are consistent with behavioral results indicating that subjects are faster at identifying fearful vs neutral faces when they are presented to the left visual field (Benowitz et al., 1983). In line with these findings, we found that the overall response to fearful eyes was greater in the right hemisphere than in the left; however, the more selective response in the left compared with the right suggests that the role of each hemisphere in threat processing is more complicated than has been previously considered. It is possible that the right amygdala activates to all conditions simply because a change is occurring to the eyes, a notion supported by the emotional information processing model proposed by Glascher and Adolphs (2003). This model suggests that the right amygdala acts in an automatic, rapid manner and is responsible for initiating a general level of arousal in response to stimuli. Such a mechanism of detection could be mediated by subcortical inputs to the amygdala from the superior colliculus and pulvinar (Amaral and Insausti, 1992), as these projections would be primarily magnocellular, with a strong preference for low spatial frequency visual information (Bisti and Sireteanu, 1976; Vuilleumier et al., 2003). This would allow for the detection of coarse eye changes—i.e. the occurrence of EWA changes—but would not provide the spatial detail needed to discriminate between similar conditions, such as fear and a shift in gaze, in which EWA increases. The same model by Glascher and Adolphs (2003) posits that the left amygdala is more involved in the representation of emotionally arousing stimuli and can better differentiate between stimuli that display varying levels of arousal.

While the above evidence supports our current findings, the question remains of what could cause the left amygdala to be selective only to fearful eyes. We examined our data for other brain regions that showed analogous patterns of specificity and are known to be modulated by emotion, as potential areas with which the amygdala could communicate. The FG, a cortical region that has long been implicated in the processing of facial information (Puce et al., 1995; Kanwisher et al., 1997), exhibited a response pattern similar to that of the right and left amygdala: right FG activation did not vary significantly between conditions, whereas the left FG showed a significantly greater response to fear relative to the other conditions (although, unlike the left amygdala, the left FG also responded to the other conditions). A similar pattern was also observed in the left pIPS.

Given this similarity between the left FG and left amygdala responses, it is possible that these two regions work in tandem to process eye information at a more detailed level. Because the pathway from the lateral geniculate nucleus to the visual cortex carries fine-grained inputs, the FG could be supplied with the spatial detail needed for the processing of facial features (Merigan et al., 1991). In non-human primates, there are clear anatomical connections from area TE to the amygdala and from the amygdala to multiple regions in extrastriate visual cortex (Iwai et al., 1987). The existence of such connections in humans is supported both by imaging studies of patients with amygdala lesions (Vuilleumier et al., 2004) and by studies showing a correlation between fusiform and amygdala activation to fearful faces in healthy subjects (Morris et al., 1998). The traditional view of face processing posits that visual information first travels through higher-level visual areas, such as the STS and FG, and then progresses forward to the amygdala (for a review, see Adolphs and Spezio, 2006). More recent neuroimaging studies indicate that the amygdala may actually exert influence on many stages of visual processing and in turn modulate activity in the FG, rather than the other way around (Morris et al., 1998; Vuilleumier et al., 2003). Vuilleumier et al. (2003) illustrated this by using low- and high-pass filtered faces to examine how this information affected amygdala and FG activation. While the FG alone activated selectively to faces made of high spatial frequency information, its activity was modulated by the low spatial frequency-driven response of the amygdala, suggesting that information flows from the amygdala to the FG. Given the evidence that the amygdala and FG actively communicate, it is possible that this communication differs between the left and right hemispheres, potentially explaining the variation in activation between the two amygdalae.

The present study provides new insights into how the human brain detects the presence of threat using information from another's eyes. We suggest that the left and right amygdala differ in their selectivity to changes in the eyes as well as to the presence of EWA increases. A coarsely tuned mechanism (right amygdala) would allow for the rapid detection of possible danger sources, while a finely tuned and detailed mechanism (left amygdala) would provide a more accurate determination of whether the potential threat is real. The present study not only lends more evidence to the existence of such parallel mechanisms, but also highlights a substantial difference between the responses of the left and right amygdala to changes in the eyes. The interaction between presumably rapid subcortical pathways that provide crude detail about threatening stimuli and the slower, cortically mediated pathways that provide greater detail is a topic that is clearly in need of further exploration.

SUPPLEMENTARY DATA

Supplementary data are available at SCAN online.

REFERENCES

Adams RB, Gordon HL, Baird AA, Ambady N, Kleck RE (2003). Effects of gaze on amygdala sensitivity to anger and fear faces. Science, 300, 1536.

Adolphs R (1999). Social cognition and the human brain. Trends in Cognitive Sciences, 11, 469–79.

Adolphs R (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–77.

Adolphs R, Spezio M (2006). Role of the amygdala in processing visual social stimuli. Progress in Brain Research, 156, 363–78.

Adolphs R, Tranel D, Damasio H, Damasio A (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–72.

Adolphs R, Gosselin F, Buchanan TW, Tranel D, Schyns P, Damasio AR (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72.

Amaral DG, Insausti R (1992). Retrograde transport of D-[3H]-aspartate injected into the monkey amygdaloid complex. Experimental Brain Research, 88, 375–88.

Benowitz LI, Bear DM, Rosenthal R, Mesulam MM, Zaidel E, Sperry RW (1983). Hemispheric specialization in nonverbal communication. Cortex, 19, 5–11.

Bisti S, Sireteanu RC (1976). Sensitivity to spatial frequency and contrast of visual cells in the cat superior colliculus. Vision Research, 16, 247–51.

Glascher J, Adolphs R (2003). Processing of the arousal of subliminal and supraliminal emotional stimuli by the human amygdala. Journal of Neuroscience, 23, 10274–82.

Glover GH, Law CS (2001). Spiral-in/out BOLD fMRI for increased SNR and reduced susceptibility artifacts. Magnetic Resonance in Medicine, 46, 512–5.

Hooker CI, Paller KA, Gitelman DR, Parrish TB, Mesulam MM, Reber PJ (2003). Brain networks for analyzing eye gaze. Cognitive Brain Research, 17, 406–18.

Iwai E, Yukie M, Suyama H, Shirakawa S (1987). Amygdalar connections with middle and inferior temporal gyri of the monkey. Neuroscience Letters, 83, 25–9.

Kanwisher N, McDermott J, Chun MM (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–11.

Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, Ito K, et al. (1999). The human amygdala plays an important role in gaze monitoring. A PET study. Brain, 122, 779–83.

Kim H, Somerville LH, Johnstone T, Alexander AL, Whalen PJ (2003). Inverse amygdala and medial prefrontal cortex response to surprised faces. Neuroreport, 14, 2317–22.

Merigan WH, Katz LM, Maunsell JH (1991). The effects of parvocellular lateral geniculate lesions on the acuity and contrast sensitivity of macaque monkeys. Journal of Neuroscience, 11, 994–1001.

Morris JS, deBonis M, Dolan RJ (2002). Human amygdala responses to fearful eyes. Neuroimage, 28, 249–55.

Morris JS, Ohman A, Dolan RJ (1999). A subcortical pathway to the right amygdala mediating "unseen" fear. Proceedings of the National Academy of Sciences of the United States of America, 96, 1680–5.

Morris JS, Frith CD, Perrett DI, Rowland D, Young AW, Calder AJ, et al. (1996). A differential response in the human amygdala to fearful and happy facial expressions. Nature, 383, 812–5.

Morris JS, Friston KJ, Buchel C, Frith CD, Young AW, Calder AJ, et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121, 47–57.

Narumoto J, Okada T, Sadato N, Fukui K, Yonekura Y (2001). Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain Research, 12, 225–31.

Noesselt T, Driver J, Heinze HJ, Dolan R (2005). Asymmetrical activation in the human brain during processing of fearful faces. Current Biology, 15, 424–9.

Phillips ML, Williams LM, Heining M, Herba CM, Russell T, Andrew T, et al. (2004). Differential neural responses to overt and covert presentations of facial expressions of fear and disgust. Neuroimage, 21, 1484–96.

Puce A, Allison T, Gore JC, McCarthy G (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology, 74, 1192–9.

Sadr J, Jarudi I, Sinha P (2003). The role of the eyebrows in face recognition. Perception, 32, 285–93.

Smith ML, Cottrell GW, Gosselin F, Schyns PG (2005). Transmitting and decoding facial expressions. Psychological Science, 16, 184–9.

van der Gaag C, Minderaa RB, Keysers C (2007). The BOLD signal in the amygdala does not differentiate between dynamic facial expressions. Social Cognitive and Affective Neuroscience, 2, 93–103.

Vuilleumier P, Armony JL, Driver J, Dolan RJ (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–31.

Vuilleumier P, Richardson MP, Armony JL, Driver J, Dolan RJ (2004). Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nature Neuroscience, 7, 1271–8.

Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, Jenike MA (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience, 18, 411–8.

Whalen PJ, Kagan J, Cook RG, Davis FC, Kim H, Polis S, et al. (2004). Human amygdala responsivity to masked fearful eye whites. Science, 306, 2061.

Wright CI, Fischer H, Whalen PJ, McInerney SC, Shin LM, Rauch SL (2001). Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. Neuroreport, 12, 379–83.