Rapid, accurate categorization of the emotional state of our peers is of critical importance, and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging “which expression?” task in response to briefly presented backward-masked expressive faces. Although participants’ behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions, this modulation was driven by negative facial expressions and was present on all emotional trials independent of later categorization. In contrast, the N170 component, recorded at lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful ones, are processed without conscious awareness at an early stage and highlight the critical importance of considering categorization response when studying subliminal perception.
There can be no question as to the critical importance of expressions of emotion in the functioning of humans. Via expressions, we convey our own state of mind and infer that of our peers, permitting highly adaptive social behavior. Faces are crucially important in this process, with distinct spatial frequency–specific information from facial expressions of emotion providing the primary source of this critical information (e.g., wide open eyes indicating fear, a wrinkled nose indicating disgust; Smith et al. 2005). The highly adaptive nature of facial expressions, particularly those which depict threatening situations (fear and anger) or materials (disgust), has led many to argue that facial expressions can be processed rapidly without awareness, potentially via a dedicated subcortical pathway to the amygdala (for a recent review, see Tamietto and de Gelder 2010).
Supporting evidence for the rapid evaluation of emotional expression without conscious awareness, however, remains mixed. Some of the strongest evidence comes from patients displaying affective blindsight: an ability to correctly “guess” at above chance levels the expression of a face despite the absence of primary visual cortex and of any ability to consciously perceive the faces (de Gelder et al. 1999; Pegna et al. 2005), along with emotion selective responses in subcortical limbic structures (Vuilleumier et al. 2002; Pegna et al. 2005). Similarly, emotion selective brain responses have been observed in healthy participants in paradigms where subjective awareness of an expressive face is disrupted by backward masking (e.g., Morris et al. 1998, 1999; Whalen et al. 1998, 2004; Liddell et al. 2004), in which a briefly presented target face is masked by the subsequent presentation of a second image (typically a neutral face) that prevents the target image from reaching conscious awareness (Esteves and Ohman 1993). However, doubts have been raised as to the effectiveness of the masking used in these earlier studies: studies applying more rigorous metrics (d′, Green and Swets 1966) have indicated residual objective awareness of masked emotional faces in some observers even at very short presentation times (Szczepanowski and Pessoa 2007), and emotion selective responses in the amygdala and fusiform gyrus have been found to occur only for those observers demonstrating above chance performance in the emotion classification tasks (Pessoa et al. 2006).
Turning to event-related potential (ERP) studies in which the time course of processing can be explored with millisecond accuracy, the picture remains mixed. Several recent ERP studies provide convincing evidence of the early processing of successfully masked fearful facial expressions while the absence of any residual awareness was rigorously monitored. Enhanced responses were observed to fearful (vs. neutral) faces over frontal (Kiss and Eimer 2008) and lateral occipitotemporal (OT) (Pegna et al. 2008) brain regions within 200 ms of stimulus onset, even though participants were objectively unaware of the emotional content of the presented face. However, further studies appear to temper these findings by indicating that they are to some extent driven by the overt response abilities of the participants (Eimer et al. 2008; Japee et al. 2009), leaving unclear the question of whether these early emotion effects are in fact driven by the bottom-up content of the fearful visual stimulus per se or by awareness and accurate perception of the masked stimulus as containing a fearful face.
Due to the high adaptive importance of fear, most studies of the time course of facial expression perception have focused on the processing of fearful faces (notable exceptions include Batty and Taylor 2003; Eimer et al. 2003; Schyns et al. 2007). However, blindsight patients are able to reliably distinguish a greater range of emotional expressions (e.g., fear, anger, and happiness: Pegna et al. 2005), and the amygdala responds to a range of facial expressions, not only fear. It therefore stands to reason that any subconscious processing of emotional faces should also extend to other emotion categories. To date, no ERP studies have explored the processing of nonfearful facial expressions of emotion under objectively recorded levels of awareness, leaving open the question of whether any early subconscious emotion effects are specialized to this one critical emotion category (fear) or serve as a broad purpose system for a range of affective facial stimuli. I therefore investigated the processing of a range of facial expressions of emotion (fearful, disgusted, and happy) along with an emotionally neutral face at varying levels of objective awareness by masking the face stimulus after a very short (8 ms) or longer (132 ms) time interval. Rather than assume unawareness for the rapidly presented emotional stimuli, levels of objective awareness of the emotional content of the stimuli were recorded by asking the participants to perform a fixed choice emotion discrimination task on every trial of the experiment.
To ensure that participants had to attend to and fully process each face stimulus in turn, they completed the more challenging full expression categorization task as opposed to a simpler binary classification (emotional vs. not emotional). With a binary categorization design, participants may develop strategies based on their expectations as to the key visual features to focus on to complete the task, for example, focusing only on the eyes for a fearful versus neutral discrimination or the wrinkles around the nose for a disgusted versus neutral discrimination (Smith et al. 2005) rather than processing the full face without bias. Additionally, with a 2 response categorization task, it is possible that any observed differential effects could be a result of the overt categorization itself (target category present vs. absent) rather than resulting from the emotional content of the stimuli per se. A 4 alternative forced choice categorization should minimize these concerns by ensuring that there is no longer only one target category.
In the fully aware condition, I set out to determine if the documented enhanced frontal activation to emotional faces (Eimer et al. 2003) remained in this more challenging “which expression” design and then to establish how this effect was modulated by the participants’ awareness of the stimuli. Differential responses to fearful, disgusted, and/or happy faces in the objectively unaware condition would indicate apparent processing of emotional affect outside of conscious awareness. In addition, I investigated the effects of emotion and awareness on the face selective N170 component (Bentin et al. 1996). Debate remains as to the modulation of the N170 in response to facial expressions of emotion, with the established view that it is not modulated by the emotional content of faces (Krolak-Salmon et al. 2001; Eimer and Holmes 2002; Ashley et al. 2004) being challenged by more recent findings (Batty and Taylor 2003; Blau et al. 2007; Schyns et al. 2007).
In a second analysis, I explored the event-related responses in the objectively unaware condition to each response category in turn: hits (emotional face categorized as emotional), misses (emotional face categorized as neutral), correct rejections (neutral face categorized as neutral), and false alarms (neutral face categorized as emotional). If categorization of the face rather than the face itself drives emotion selective neuronal responses, we would expect to see significant effects for trials in which participants believed the emotional face to be present (hits and false alarms), whereas if it is the emotional information in the stimulus that is driving the emotional neuronal response independently of the resulting categorization, we would expect to see significant emotion effects for both hit and miss trials.
The goals of the study were then 2-fold: to explore the time course of processing for a range of facial expressions of emotion under varying levels of awareness and to establish the contributions of the input visual information and resulting categorization choice to these effects.
Materials and Methods
Fifteen healthy right-handed volunteers (5 male, mean age 28.7 years) participated in the study which was approved by the Faculty for Information and Mathematical Sciences Ethics Committee at the University of Glasgow. All participants had normal or corrected to normal vision, were naïve to the purpose of the study, and gave informed consent.
Stimuli were gray-level images of expressive faces (fearful, neutral, happy, and disgusted) posed by 6 actors (3 male) taken from the California Facial Expressions Database (Dailey et al. 2001, see Fig. 1, for example stimuli from each category). All stimuli met the Facial Action Coding Scheme criteria (Ekman and Friesen 1975) for the expressions shown. Masks were generated from the neutral facial expressions following the procedure outlined by Kiss and Eimer (2008) in which the internal facial region was decomposed into a rectangular array of 25 (5 × 5) small tiles (see Fig. 1). The arrangement of these tiles was randomized on a trial-by-trial basis to disrupt any global facial structure. Only neutral faces were used to generate the masks to ensure no expression-specific information was present that might influence processing of the target expressive stimulus (e.g., the wide open eyes in fear, the open mouth in happy). Stimuli subtended 5.4 × 3.7 degrees of visual angle at a viewing distance of 1 m from a computer monitor (viewing distance fixed by use of a table mounted chin and head rest).
Participants were required to categorize briefly presented (8 ms) expressive faces by expression (neutral, happy, fearful, or disgusted) in the presence of backward scrambled face masks. While the expressive face always remained onscreen for one screen refresh (8.3 ms, 120 Hz CRT monitor) and the mask for 180 ms, on each trial, the delay between target and mask could be 1 of 4 durations: 0, 25, 58, or 124 ms, permitting 8, 33, 66, or 132 ms of processing time for the face stimulus. After 180 ms, the mask was replaced by a uniform gray screen until response (see Fig. 1 for a schematic of the trial design). Participants were instructed to press the appropriately labeled response key to indicate the emotion shown and, when unsure, to guess (all responses were made with the participant’s right hand). After their response and a variable pause (mean 1 s, standard deviation [SD] 100 ms), the subsequent trial began with a 500 ms fixation cross.
Stimuli were presented in a fully randomized order, with participants completing 1440 experimental trials (90 trials × 4 expressions × 4 temporal delays). Short breaks were provided every 144 trials, with a longer break (10 min) in the middle of the experiment. Prior to the experiment proper, participants were presented with the stimuli in order to ensure that they could accurately categorize the faces by expression (95% correct admission criterion) and completed a small number of habituation trials at the shortest delays to prepare them for the nature of the task.
Behavioral responses were analyzed using the signal detection metric of d-prime (d′, Green and Swets 1966), which provides a more objective measure of behavioral performance than hit rates alone, particularly for the shorter temporal duration condition where participants must guess and may have inflated hit rates for any one response category simply due to an underlying response bias. The d′ statistic was calculated for each participant in the experiment for each of the face categories shown (fearful, disgusted, happy, and neutral) from the rate of correct responses for that category (hits, e.g., fearful face trials categorized as fearful) and the rate of false alarms for that category (e.g., nonfearful face trials categorized as fearful).
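As a minimal sketch, the per-category d′ computation described above can be written as follows; the 1/(2N) adjustment for hit or false-alarm rates of exactly 0 or 1 is an assumption, since the text does not state how extreme rates were handled.

```python
import numpy as np
from scipy.stats import norm

def d_prime(n_hits, n_signal, n_fas, n_noise):
    """d' for one expression category: z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are pulled in by the common 1/(2N)
    correction so the z-transform stays finite (an assumed detail).
    """
    hit_rate = np.clip(n_hits / n_signal,
                       1 / (2 * n_signal), 1 - 1 / (2 * n_signal))
    fa_rate = np.clip(n_fas / n_noise,
                      1 / (2 * n_noise), 1 - 1 / (2 * n_noise))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

For instance, with 90 fearful trials per delay and 270 trials of the other 3 categories, 50 correct "fearful" responses against 40 "fearful" false alarms would give a d′ of roughly 1.18, whereas matched hit and false-alarm rates give a d′ of 0.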
Scalp electrical activity (electroencephalography [EEG]) was recorded with 64 sintered silver/silver chloride electrodes mounted in an electrode cap (QuickCap; Neuroscan) placed in accordance with the International standard 10–20 system along with intermediate positions. Linked mastoids served as initial common reference and the AFz electrode as ground. Vertical electrooculogram (vEOG) was bipolarly registered above and below the dominant eye and horizontal electrooculogram (hEOG) at the outer canthi of the eyes. Signals were continuously acquired at 1024 Hz, and the electrode impedance was kept below 10 kΩ throughout.
Signals were rereferenced off-line to the average reference (excluding the EOG channels), and analysis epochs were generated starting 200 ms prior to stimulus onset and continuing for 700 ms. Channels identified as bad were removed from the data set and replaced by interpolated channels (Delorme and Makeig 2004); for 4 subjects, one electrode was interpolated from its nearest neighbors. Trials containing EEG or EOG artifacts (e.g., blinks and eye movements) were removed by applying standard artifact rejection software (Delorme and Makeig 2004). After artifact rejection, there were on average 78 (σ = 1) trials per condition. Artifact free trials were baseline corrected using the mean value in the 200 ms prestimulus interval and low pass filtered at 35 Hz.
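The per-epoch baseline correction and 35 Hz low-pass stage could be sketched as below; the choice of a 4th-order zero-phase Butterworth filter is an assumption, as the exact filter applied by the toolbox is not reported.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1024  # Hz, acquisition rate reported in the text

def preprocess_epoch(epoch, fs=FS, baseline_ms=200, lowpass_hz=35.0):
    """Baseline-correct and low-pass filter one epoch.

    epoch: (n_channels, n_samples) array beginning 200 ms pre-stimulus.
    The 4th-order zero-phase Butterworth filter stands in for whatever
    filter the analysis toolbox actually used (an assumption).
    """
    n_base = int(baseline_ms * fs / 1000)  # samples in the pre-stimulus window
    epoch = epoch - epoch[:, :n_base].mean(axis=1, keepdims=True)
    b, a = butter(4, lowpass_hz / (fs / 2), btype="low")
    return filtfilt(b, a, epoch, axis=1)
```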
Early Emotion Effects
In line with previous studies, I first set out to explore the effects of emotional expression over frontocentral (Eimer and Holmes 2002; Eimer et al. 2003) and lateral OT (for the N170, Bentin et al. 1996) brain regions. To this end, I computed ERP mean amplitudes for electrode sets over frontopolar (F7, F8, and Fz), frontocentral (AF3, AF4, and FPz), and OT (P7 and P8) regions. I initially selected 2 early time intervals from the well-defined peaks of the global field power and averaged the response of each participant to the neutral and emotional faces in these intervals. The global field power (Skrandies 1990) represents with one value the ERP activity over all electrodes at each measurement time point and is computed as the spatial standard deviation of the voltage values. The global field power was calculated separately for each emotional category and presentation time, and the average curve used to select the time windows for further study (105–125 and 160–180 ms, see Fig. 3). The second time interval employed encompasses both the N170 face selective ERP component occurring over lateral occipitotemporal locations and the early emotional effects detected over frontal sensors from ∼150 ms (Eimer and Holmes 2002; Eimer et al. 2003). Divergent activation over later time intervals (210 ms onward) was explored in a separate robust analysis due to the ill-defined and overlapping nature of the global field power peaks across the 2 presentation conditions.
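Given a channels-by-time grand-average array, the global field power reduces to a one-line computation, from whose peaks the analysis windows were then read off:

```python
import numpy as np

def global_field_power(erp):
    """Spatial standard deviation across electrodes at each time point
    (Skrandies 1990).

    erp: (n_electrodes, n_timepoints) average-referenced grand-average ERP.
    Returns one value per time point; window bounds are chosen around
    the well-defined peaks of this curve.
    """
    return erp.std(axis=0)
```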
Initial omnibus repeated measures analyses of variance (ANOVAs) explored the factors of electrode location (2: frontocentral and frontopolar), laterality (3: left, right, and midline), emotion (4: neutral, happy, fear, and disgust), and presentation time (2: 8 and 132 ms) over frontal electrodes and laterality (2: left and right), emotion (4), and presentation time (2) over lateral OT electrodes. As the onset of the masking stimulus will unavoidably generate visual ERPs at different times across the different presentation times, separate ANOVAs were also conducted with the remaining factors (frontal: electrode location, laterality, and emotion and OT: laterality and emotion) for each presentation time in turn. Post hoc planned contrasts were performed to explore any differential brain response to each of the emotional categories (fearful, happy, and disgusted) against neutral whenever appropriate. Violations of sphericity were corrected using the Greenhouse–Geisser correction to the degrees of freedom.
Later Emotion Effect
While common practice would suggest selecting extended time windows (e.g., 210–300 ms) and averaging across conditions to explore later processing of the emotional information in the faces, the absence of well-defined components and the clear divergence in response profiles across the different presentation times suggests an alternative approach. I continued to focus on the same 8 key electrodes but now searched each electrode and presentation time independently, without a priori assumptions, for a main effect of emotion over a sliding 20 ms time window. To this end, independently for each electrode of interest, I averaged the evoked response to each emotional category over a 20 ms time window and performed a one factor repeated measures ANOVA on the resulting responses and stored the associated F-statistic value. I then moved the time window forward by 1 ms and repeated the process until the F-statistic corresponding to a main effect of emotion had been computed for all time points and electrodes of interest. To establish where and when significant effects of emotion were present, independently per presentation time, a 99.9% confidence interval was established around the null by randomly permuting the emotional condition labels over 1000 iterations to establish a P < 0.001 significance threshold. To establish the underlying differences in the processing of each individual facial expression of emotion compared with neutral, the appropriate planned contrasts were performed for each electrode location and time window encompassing a main effect of emotion. A random permutation bootstrap (computed independently for each emotion condition, P < 0.005 significance threshold) accounted for multiple comparisons. Significant differences in the measured response to fearful, disgusted, and happy faces were then depicted in separate color-coded plots.
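The sliding-window search might look like the sketch below, with hypothetical helper names (`rm_anova_f`, `sliding_f`, `permutation_threshold`); shuffling condition labels within each subject and thresholding on the maximum F is one reasonable reading of the permutation scheme described, not a verbatim reimplementation.

```python
import numpy as np

def rm_anova_f(data):
    """F statistic for a one-factor repeated-measures ANOVA.
    data: (n_subjects, n_conditions) window-averaged amplitudes."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    return (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))

def sliding_f(epochs, fs=1024, win_ms=20):
    """F(emotion) at every 1-sample step of a win_ms sliding window.
    epochs: (n_subjects, n_conditions, n_samples) single-electrode ERPs."""
    w = int(win_ms * fs / 1000)
    return np.array([rm_anova_f(epochs[..., t:t + w].mean(axis=-1))
                     for t in range(epochs.shape[-1] - w + 1)])

def permutation_threshold(epochs, n_perm=1000, alpha=0.001, seed=0):
    """Null distribution of the maximum sliding-window F, built by
    shuffling condition labels independently within each subject."""
    rng = np.random.default_rng(seed)
    maxima = []
    for _ in range(n_perm):
        shuffled = epochs.copy()
        for s in range(epochs.shape[0]):
            shuffled[s] = shuffled[s][rng.permutation(epochs.shape[1])]
        maxima.append(sliding_f(shuffled).max())
    return np.quantile(maxima, 1 - alpha)
```

Observed F values exceeding the permutation threshold then mark electrode/time windows with a significant main effect of emotion.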
By averaging the response to all neutral faces and comparing that to the response to all emotional faces in a 2 alternative forced choice experiment (e.g., fear vs. neutral), researchers are collapsing trials in which participants categorized the emotional face stimulus as emotional (hits) with trials in which they thought a neutral face was present (misses) and comparing that to the response to neutral face trials in which participants falsely detected an emotional face (false alarms) combined with trials where participants categorized the face as being neutral (neutral hits/emotional correct rejections). If false alarms also result in emotion-like responses, as have been seen in the amygdala and fusiform face area (FFA) (Pessoa et al. 2006), collapsing all neutral face trials (emotional false alarms plus correct neutral categorizations) before comparing with emotional face trials will minimize the possibility of detecting any underlying emotional effect. With a 4 alternative forced choice design, emotional trials will comprise a mixture of correct identifications, other emotional false alarms, and missed emotional trials where the participant categorized the emotional stimulus as neutral. It is only on these missed emotional trials that we can really be sure to be measuring a neuronal response to the subliminal presentation of an emotional face that was truly independent of any non-stimulus-driven factors, that is, independent of a top-down belief that an emotional face was present whether it was or not.
To explore the relationship between overt categorization response and the early unaware processing of emotion over frontal and OT regions, I split the unaware trials (8 ms presentation, immediately masked) as a function of response category (correct emotional, correct neutral, false alarm emotional, and miss emotional) and conducted repeated measures ANOVAs over the initial 2 time windows at each of the 8 selected electrodes with behavioral response as a further within subjects factor. Similarly to the first analysis, an initial omnibus ANOVA had factors of categorization response (4 levels: correct emotional, correct neutral, false alarm emotional, and miss emotional), laterality (3 or 2 levels for frontal and OT, respectively), and for frontal sensors, the additional factor of electrode location (2 levels). Post hoc planned contrasts were performed to explore any differential brain response for each of the response categories in turn.
A repeated measures ANOVA on the d′ measures, with factors of expression (4: fearful, disgusted, happy, and neutral) and processing time (4: 8, 33, 66, and 132 ms), indicated a main effect of processing time (F1.9,26.6 = 292, P < 0.001), a main effect of emotional expression (F1.8,25.4 = 23.6, P < 0.001), and a significant interaction between the factors (F9,126 = 5.60, P < 0.001). Participants were significantly better at detecting happy faces over all others at 33, 66, and 132 ms (all t14 > 3, P < 0.01), with a trend for the same effect at 8 ms (t14 > 2.5, P < 0.03 for happy vs. neutral and disgust, t14 = 2, P = 0.055 for happy vs. fear performance). Additionally, at the longer processing times of 66 and 132 ms, participants performed significantly better for neutral than fearful (t14 > 3.2, P < 0.006) and for neutral than disgust at 132 ms (t14 = 3.4, P = 0.005). There was no difference in accuracy for fearful versus disgusted faces at any temporal duration (t14 < 1.2, P > 0.25).
At the shortest delay, perception of neutral, fearful, and disgusted faces did not differ significantly from chance performance (d′ = 0.07 [neutral], 0.11 [fearful], and 0.11 [disgusted], all t14 < 0.8, P > 0.45), while perception of happy faces showed a trend for greater than chance performance (d′ = 0.44, t14 = 1.89, P = 0.08). Considering each individual's performance for each expression in turn, 2 participants exhibited slightly greater than chance d′ scores for happy facial expressions (P < 0.05, two-tailed binomial test) and were removed from any further analysis. The resulting d′ performance for the remaining participants (see Fig. 2) remained at the chance level for each of the emotion categories (d′ = −0.06 [neutral], −0.06 [fearful], −0.07 [disgusted], and 0.24 [happy], t12 < 1.1, P > 0.28 for all). Thus, as participants are objectively unaware of the emotional content of the stimuli, visual presentation in the shortest temporal delay condition can be considered to be subliminal for all emotion categories. At the 25 ms delay (33 ms processing time), all expressions were detected at rates significantly greater than chance (t12 > 7.6, P < 0.001 for all) but at performance levels significantly less than the longer delays, with a mixture of awareness levels across the individual expressions and individuals.
Turning to the reaction times (see Fig. 2), a 2 × 2 repeated measures ANOVA on the median response time for each condition (processing time and emotion) found significant main effects of emotion (F3,36 = 13.2, P < 0.001), processing time (F1.01,13.15 = 7.6, P < 0.02), and an interaction between the 2 factors (F3.1,36.8 = 2.8, P = 0.05). Participants were significantly faster to identify happy facial expressions from any other at 33, 66, and 132 ms (t12 > 2.3, P < 0.05 for all), while there were no significant differences in response times to fearful, disgusted, or neutral expressions (t12 < 1.5, P > 0.16). At the shortest temporal delay, there were trends for participants to be faster in response to happy images than on fearful or disgusted trials (t12 = 1.8, 1.9, P = 0.097, 0.08, respectively) with no other significant differences (t12 > 1.2, P > 0.2). Prior to the analysis, the reaction time data and subsequent ERP data were cleaned to remove any trials corresponding to overly long or very fast reaction times by removing any trials with reaction times more than 2 SD away from the median values.
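The outlier cut described, removing trials with reaction times more than 2 SD from the median, might be implemented as in this minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def trim_rt_outliers(rts, n_sd=2.0):
    """Boolean mask keeping trials whose reaction time lies within
    n_sd standard deviations of the median RT; the same mask would
    be applied to both the RT and the ERP trial sets."""
    rts = np.asarray(rts, dtype=float)
    return np.abs(rts - np.median(rts)) <= n_sd * rts.std()
```

Centering on the median rather than the mean keeps the cut-off robust to the very slow responses it is meant to remove.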
In summary, as the delay between stimulus and mask increased, performance improved, with faster and more accurate responses for every expression. At the shortest temporal delay, participants’ performance did not differ significantly from chance for any emotional expression, and there was no significant difference in reaction times. Overall, participants were faster and more accurate on happy trials than for any other emotion, even showing trends for enhanced performance at the shortest delay, potentially indicating some residual minimal awareness of happy faces in some of the participants. This effect may be a result of the lower spatial frequency diagnostic information underlying happy categorizations (the broad smiling mouth) in comparison with the small high spatial frequency features required to categorize fearful and disgusted expressions (see Smith et al. 2005).
For ease of interpretation and in order to ensure equivalent performance levels across the conditions of interest, the EEG analysis focused on the 2 most extreme temporal delay conditions (8 and 132 ms processing time) corresponding to uniform low and high levels of awareness of the emotional content of the stimuli in the participants. Figure 3 shows the ERPs measured in response to neutral, fearful, disgusted, and happy stimuli at each of the selected frontal and OT sensors for the 2 temporal delay conditions.
Early Effects: P100
The initial omnibus ANOVAs over the time window of the first peak, the P100, indicated no significant effects of emotional content over either frontal (P > 0.12) or OT (P > 0.28) electrodes. There were, however, main effects of presentation time at both anterior and posterior locations (frontal: F1,12 = 8.2, P < 0.02 and OT: F1,12 = 14.1, P < 0.005), which further interacted with electrode location and laterality over frontal channels (F1.84,22.14 = 4.51, P = 0.025) indicating a differential pattern of activity independent of the emotion shown, most likely a result of the differential onset of the masking stimulus across the 2 conditions. Individual ANOVAs for each presentation time confirmed the absence of any significant emotional effects over the P100 (frontal: all P > 0.14, OT: all P > 0.27). Although the P100 is strong over P7 and P8, it peaks at adjacent electrodes PO7 and PO8. To confirm the absence of emotional effects in this time window, the aforementioned ANOVAs were recomputed using the mean amplitude recorded at PO7 and PO8 in place of P7 and P8. Again, there was a main effect of presentation time (F1.00,12.00 = 11.5, P = 0.005) but no effects of emotion or interactions of emotion with the remaining factors (F < 1, P > 0.4). Similarly, the reduced factor ANOVAs conducted at each presentation time indicated no significant main effects of emotion or interactions of emotion with laterality (F < 1.1, P > 0.34).
Early Effects: N170
Turning to the more interesting second time window (160–180 ms), visual inspection of the ERP peaks (Fig. 3) indicates apparent differences between emotional and neutral faces in both the aware and unaware presentation conditions over the time course of the N170 component. These strong effects of emotion were confirmed by a main effect of emotion in the omnibus ANOVA (F2.34,28.02 = 13.38, P < 0.001) that did not further interact with presentation time or laterality (F < 1.1, P > 0.35) and remained in the independent ANOVAs conducted for each presentation delay (8 ms: F3,36 = 4.68, P < 0.005 and 132 ms: F2.72,32.62 = 8.11, P < 0.001). Planned contrasts indicated that these effects were driven by significantly larger responses to all emotional faces versus neutral in the fully aware condition (F1,12 = 20.6 [fearful], 9.7 [disgusted], and 11.9 [happy], P = 0.001, 0.009, and 0.005, respectively) and primarily to fearful faces in the unaware condition (F1,12 = 41.2, P < 0.001), although strong trends existed for the remaining emotions (F1,12 = 4.5 [disgusted] and 4.7 [happy], P = 0.054 and 0.052).
To confirm that this early response to emotions was not driven by any residual awareness in participants in the short delay condition, I correlated the size of the emotional ERP effect (neutral–emotional) with the individual d′ measures. There was no significant positive relationship between levels of awareness and increasing size of the emotional effect on the N170 for fearful (r = −0.27, P = 0.37), disgusted (r = −0.41, P = 0.16), or happy (r = −0.46, P = 0.11) faces. If anything, there appeared to be a trend toward the opposite, for happy faces in particular, where those participants with the lowest d′ scores (indicating higher false alarm rates than correct hits) actually exhibited the largest emotion differences. As a further check, the participants were split into 2 groups as a function of awareness measures (d′ > 0 [N = 5] and d′ < 0 [N = 8]), and the ANOVA recomputed with this grouping as a between-subjects factor. Again, there was a significant main effect of emotion (F3,33 = 4.4, P = 0.01) but no indication of an interaction with awareness grouping (F3,33 = 1.1, P = 0.36).
To confirm that the measured effect is indeed a modulation of the N170 topography and not a result of the superposition of a separate emotion component, following Schacht and Sommer (2009), I conducted a repeated measures ANOVA on the normalized amplitude values with the factors of electrode (58: all EEG channels minus eye channels) and ERP effect (4: neutral N170, happy–neutral emotional effect, fear–neutral emotional effect, and disgust–neutral emotional effect) independently for each presentation time. If the topography of the emotion effect differed from the topography elicited by the presentation of a neutral face, there would be a significant interaction of electrode with ERP effect in the aforementioned ANOVAs. There was no indication of such an interaction at either presentation time (presentation time 1: F7.17, 86 = 1.3, P = 0.25 and presentation time 2: F6.68, 80.2 = 1.13, P = 0.36) confirming that there is no significant statistical difference in the topography of the N170 and the topography of the emotional effect.
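One common way to obtain the normalized amplitude values for such an Electrode × ERP-effect ANOVA is to scale each condition's topography to zero mean and unit global field power, so that the test is sensitive to the shape rather than the strength of the scalp distribution; treating this as the normalization actually used is an assumption.

```python
import numpy as np

def normalize_topography(topo):
    """Scale one topography (one amplitude per electrode) to zero mean
    and unit global field power, so topographies differing only in
    overall strength become identical before the Electrode x Effect
    ANOVA and only shape differences can drive an interaction."""
    topo = np.asarray(topo, dtype=float)
    topo = topo - topo.mean()   # average reference
    return topo / topo.std()    # unit global field power
```

After scaling, a topography and a uniformly amplified copy of it are indistinguishable, which is exactly the property the superposition check requires.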
Taken together, these results indicate that a range of emotional faces do indeed modulate neuronal activation over the N170 component and that this modulation occurs for each emotion tested, independently of levels of awareness in the participants.
Early Effects: Enhanced Frontal Positivity
As expected, ERP effects to emotional faces appeared to result in an enhanced positivity as measured over frontocentral and frontopolar locations around the peak of the component in both the aware and unaware conditions (see Fig. 3). Again, the omnibus ANOVA confirmed the presence of a highly significant main effect of emotion over all frontal channels tested (F3,36 = 7.1, P < 0.001), which further interacted with presentation time and laterality (F6,72 = 2.76, P < 0.02). The reduced factor ANOVAs (electrode location, laterality, and emotion) confirmed the presence of a main effect of emotion (F3,36 = 3.6, P = 0.02) for the unaware condition, which did not interact further with laterality or electrode location (F < 1.5, P > 0.2 for all). Planned comparisons indicated that this main effect of emotion was driven by significantly increased responses to fearful faces (F1,12 = 11, P = 0.006), with a trend for the same effect for disgusted faces (F1,12 = 4, P = 0.068) but no effect for happy faces (F1,12 = 1.75, P > 0.2). As for the N170 component, to confirm that this emotional effect was not driven by any residual awareness in the participants, I correlated their individual d′ scores with the emotional effect for fearful faces (r = −0.33, P > 0.25), which again indicated no positive correlation between awareness and size of the emotional effect. Rerunning the analysis with the additional between-subjects factor of awareness group also indicated no interaction of this factor (F3,33 = 1.35, P = 0.28) with the main effect of emotion (F3,33 = 2.9, P < 0.05).
For the aware condition, there was a main effect of emotion (F1.95,23.36 = 3.5, P < 0.05), which interacted further with laterality (F6,72 = 2.6, P = 0.025) but not location (F < 1, P > 0.5). To explore this laterality effect, ANOVAs were computed independently for the 2 electrodes on the left (AF3 and F7), the 2 on the right (AF4 and F8), and the 2 midline electrodes (Fz and FPz). There were main effects of emotion over both left and right lateralized electrodes (left: F3,36 = 4.8, P < 0.01; right: F3,36 = 3.9, P < 0.02), which were driven by significantly greater responses to the emotion categories (left: F1,12 = 6.3, 11.7, and 4.4, P = 0.027, 0.005, and 0.057; right: F1,12 = 11.87, 10.89, and 6.7, P = 0.005, 0.006, and 0.024 for fearful, disgusted, and happy faces, respectively). Along the midline, there was a trend for a main effect of emotion (F1.95,22.37 = 2.7, P = 0.091), which was driven solely by increased responses to the negative emotions (fear and disgust: F1,12 = 9.28 and 3.6, P = 0.01 and 0.08; happy: F1,12 < 1, P > 0.45).
These results indicate that a range of emotional faces do indeed modulate neuronal activation over frontal electrodes within 170 ms of stimulus onset when participants are both objectively aware and unaware of their presence. Unlike the posterior OT effects, the enhanced frontal emotional positivity is significant only for the negative emotions and fear in particular in the unaware condition, while all 3 emotion categories differ significantly from neutral within this time range during aware perception.
ERPs: Later Effects
Post 200 ms, there are obvious differences in the response profile of the ERPs in the aware and unaware conditions (see Fig. 3 Global Field Power plots) making it difficult to extract well-defined time windows with which to compare activation. As such, I used a robust analysis approach over the 8 electrodes of interest to identify any significant emotional effects across the entire time range (see Fig. 4). The neuronal response to the emotional content of the face stimuli begins to significantly diverge between the unaware and aware conditions after 200 ms of processing. The enhanced activation to emotional faces is sustained over both frontal and OT brain regions for a further 150 ms in the fully aware condition, while the processing of each emotion category becomes indistinguishable after the initial 170 ms peak in the unaware condition. The topographic pattern of this effect in the aware condition (see Fig. 3, Emotion response, 210–300 ms) with a sustained enhanced positivity over frontal regions and accompanying sustained negativity over OT regions is similar to the enhanced posterior negativity (EPN) component seen in response to emotional faces, words, and affective pictures (Schupp et al. 2003; Schacht and Sommer 2009).
Eighty milliseconds later, a second peak at 250 ms (corresponding to an early N2) differentiates processing of the emotional faces in the subliminal condition. However, there was no evidence that this N2 modulation was a specific marker of the subliminal processing of fearful emotional faces, as has been proposed previously (Liddell et al. 2004; Kiss and Eimer 2008).
ERPs: Response Effects
In the present experiment, trials in which an emotional face is presented can result in 3 distinct response types. Participants may correctly categorize the emotion shown (an emotional hit), incorrectly guess another emotion (other-emotion false alarm), or incorrectly conclude the absence of any emotion in the face (neutral false alarm/emotional miss). Neutral faces, meanwhile, may be correctly categorized (a neutral hit/emotional correct rejection) or falsely categorized as emotional (emotional false alarm). Low trial numbers prohibit an extensive analysis considering all possible response types for each emotional category presented; hence, to ensure an adequate signal-to-noise ratio, the 2 negative emotion conditions (fearful and disgusted) were collapsed into a single emotional condition. Happy trials were excluded from this analysis as there was no indication of a significant effect of happy expressions in the unaware condition over frontal sensors, and the behavioral data provided some indication of residual awareness for this specific emotion category over and above the others. Thus, 4 response categories were considered: emotional hits (HIT), where the emotional face is categorized with the correct label; neutral hits (i.e., emotional correct rejections, CR), where a neutral face is categorized with the correct label; emotional false alarms (FA), where a neutral face is categorized as emotional; and finally emotional miss trials (MISS), where an emotional face is categorized as neutral. Participants' performance levels in the aware condition were high, resulting in very few emotional false alarm or miss trials; hence, the investigation of the effect of categorization response was possible only in the unaware condition.
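The mapping from presented stimulus and overt response to the 4 response categories can be made explicit with a small sketch. This is illustrative code, not from the study; `stimulus` and `response` are assumed to have already been reduced to the two-way emotional/neutral distinction (negative emotions collapsed, happy trials excluded upstream).

```python
def classify_response(stimulus, response):
    """Assign a trial to one of the 4 response categories.

    stimulus -- 'emotional' or 'neutral' (the face actually shown)
    response -- 'emotional' or 'neutral' (the participant's categorization)
    """
    if stimulus == "emotional":
        # Emotional face shown: correct label -> hit, 'neutral' -> miss.
        return "HIT" if response == "emotional" else "MISS"
    # Neutral face shown: 'emotional' -> false alarm, else correct rejection.
    return "FA" if response == "emotional" else "CR"

assert classify_response("emotional", "emotional") == "HIT"
assert classify_response("emotional", "neutral") == "MISS"
assert classify_response("neutral", "emotional") == "FA"
assert classify_response("neutral", "neutral") == "CR"
```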
A further 4 participants were removed from the response effects analysis as they provided insufficient artifact-free trials in all of the response conditions (<20 trials per condition), leaving 9 participants and on average 40 correct neutral hits, 49 correct emotional hits, 25 emotional false alarms, and 50 missed emotional trials. Figure 5 illustrates the evoked potentials for each response category over the 8 selected electrodes.
As expected, there was no main effect of response type in the earliest time interval (105–125 ms) over frontal (F < 1.2, P > 0.35) or OT electrodes (F < 1.22, P > 0.3) but main effects of response type over both the N170 (F3,24 = 4.7, P = 0.01) and frontal sites (F3,24 = 3.7, P = 0.025) during the second time interval of interest (160–180 ms).
The main effect of response type was present across all frontal electrodes (interaction with location and/or laterality F < 1, P > 0.65) and was driven by significantly greater responses to both emotional hits and missed emotional trials relative to correctly identified neutral trials (HIT > CR: F1,8 = 8.1, P < 0.05; MISS > CR: F1,8 = 5.4, P < 0.05), with no significant difference between hit and miss emotional trials (P > 0.2). This indicates that over frontal channels, the neuronal response is indistinguishable between trials in which participants "correctly" categorized the face as emotional and trials in which they reported perceiving a neutral face. The presence of the negative emotion face is enough to result in an enhanced positivity independent of awareness and categorization response (see Fig. 5). There was also a trend for increased activation on neutral trials that were classified as containing an emotion (emotional false alarms) relative to neutral trials categorized as neutral (FA > CR: F1,8 = 3.8, P = 0.088) and no significant difference in neuronal response between false alarm trials, hits, or misses (P > 0.25 for all), which may indicate a role for top-down factors in these early emotion effects.
Over OT electrodes, the differences in response as a function of categorization showed a different pattern. As expected, significant effects were observed between correctly categorized emotional and neutral trials (HIT > CR: F1,8 = 19.5, P < 0.005) but also between emotional hits and missed emotional trials (HIT > MISS: F1,8 = 5.8, P < 0.05) with no significant differences observed between neutral hits (CR) and missed emotion trials (P > 0.25). This indicates that it is the categorization of the presented face as emotional which drives/is driven by the neuronal modulation underlying the enhancement of the N170 component rather than the presence of the emotional face per se. There were no other significant differences as a function of response type (P > 0.14), but there was a trend for the main effect of response type to interact with laterality (F3,24 = 2.3, P < 0.1). Exploring this interaction further indicated that for electrodes over the left hemisphere, false alarm trials resulted in significantly greater activation than correct neutral detections (F1,8 = 11.6, P < 0.02) with a strong trend for false alarm trials to also result in greater activation than missed emotional trials (F1,8 = 4.8, P = 0.059). There was no significant difference between the 2 neutral response types (CR vs. MISS, F < 0.5, P > 0.85). Thus, over the left hemisphere, trials on which participants believed that they saw an emotional face resulted in/from a greater enhancement of the N170 component than trials on which a neutral face was correctly identified or an emotional face was presented but missed.
To confirm that the measured response effects are indeed modulations of the emotional effects described in the preceding sections and not the result of a separate categorization component, I conducted a further repeated measures ANOVA to compare the topography of the emotion effect (emotional face stimuli–neutral face stimuli) with the topography of the categorization effect (emotional response–neutral response). If the categorization effect results from a separate component, superimposed, or overlapping with the emotional effect, there would be a significant interaction of effect (categorization, emotional) with electrodes in the time window considered. There was no such interaction of categorization/emotional effect with electrodes (F3.12,24.96 = 0.372, P = 0.782). A final ANOVA comparing the topography elicited by a neutral face to the topography of the categorization effect in the N170 time window again found no indication of a significant interaction of the effect with electrodes (F2.56,20.48 = 0.878, P = 0.454). Taken together these results indicate that there is no significant statistical difference in the topography of the emotion effect and the categorization effect and that neither effect can be statistically distinguished from the neutral face topography in this time interval.
The main objective of this study was to examine the processing of a range of facial expressions of emotion under conditions in which they could and could not consciously be discriminated from each other. To this end, I presented participants very briefly (8 ms) with 3 different facial expressions of emotion (fearful, disgusted, and happy) plus neutral before applying a backward mask after a very short (0 ms) or longer (125 ms) delay. When immediately masked, participants were unable to correctly discriminate the different expressions of emotion, that is, their performance on a forced-choice discrimination task did not differ significantly from chance (d′ = 0), in line with previous studies (Kiss and Eimer 2008; Pegna et al. 2008). Nevertheless, there remained evidence of the emotional content of the stimuli in the associated ERP responses over both frontal and lateral OT regions within 170 ms of presentation. Splitting the trials as a function of overt behavioral response indicated that the enhanced emotional activity for subliminally presented emotional faces differed in its relationship to later behavioral response across frontal and OT regions.
On supraliminal trials, where participants were able to correctly categorize the emotional faces, all emotional categories resulted in significantly enhanced activation over both frontal and lateral OT regions for an extended period of time from 150 ms following stimulus onset. In contrast to the subliminally presented faces which resulted in 2 brief significant emotional responses (at around 170 and 250 ms), these emotional effects continued over frontal and OT regions for a further 150 ms until approximately 350 ms following stimulus onset, with the strongest effects for fearful faces.
Processing of Emotional Content over OT Regions (Emotional Modulation of the N170)
Over lateral OT regions, subliminal presentation of all 3 emotional categories resulted in an enhanced N170 component, though this effect was most marked for fearful faces. Although the effect appeared to be independent of any residual awareness in the participants, with no positive correlation between overall detection ability (d′) and the size of the emotional versus neutral ERP difference, closer examination of the overt behavioral responses indicated that enhanced N170s occurred only in response to emotional faces that were later categorized as emotional. In contrast to the early frontal emotional effects, there was no indication of enhanced activation on emotional trials later categorized as neutral (i.e., misses).
Generators of the N170 component are typically found in the fusiform gyrus (Deffke et al. 2007), which has previously been shown to respond to subliminally presented fearful faces only when residual awareness remains, that is, in a high-performing subset of participants for stimuli presented for 33 ms (Pessoa et al. 2006; Japee et al. 2009). However, others have observed significant fearful emotion enhancement of the N170 in the absence of awareness in all participants when faces were presented for 16 ms (Pegna et al. 2008). The present results are in line with these findings but also indicate a close link between categorization choice and the enhanced N170 activation. Although participants' performance is no better than chance overall (d′ = 0 for each emotion category), it is only those trials on which participants go on to categorize the emotional face as emotional that reveal an N170 enhancement. However, it is impossible to say whether it is some awareness or top-down expectation of the emotional face that drives this enhanced N170 response or whether the neuronal modulations underlying an enhanced N170 response signal the presence of the emotional face. In broad agreement with the present findings, previous studies have also indicated a direct relationship between face recognition performance and M170 modulations (the magnetic equivalent of the N170, Tanskanen et al. 2007) and between activity in the fusiform gyrus and performance in object discrimination tasks (Grill-Spector et al. 2000; Bar et al. 2001).
These results also lend support to recent evidence challenging the established view that emotional content is not processed over the N170 component (Krolak-Salmon et al. 2001; Eimer and Holmes 2007). In line with several studies (Batty and Taylor 2003; Blau et al. 2007; Schyns et al. 2007, 2009), the present results indicate that a range of facial expressions of emotion do result in modulations in activation over the time course of the N170. Unlike the earlier Batty and Taylor (2003) study, this modulation was observed with a strictly controlled stimulus set in which all external features were standardized and individual performance levels were recorded. Furthermore, for the first time, this study has shown that this emotional modulation of the N170 to a range of emotions is present during both subliminal and supraliminal presentations but only on those trials later categorized as emotional.
Subliminal Processing of Negative Emotions over Frontal Regions
In the unaware condition over frontocentral and frontopolar regions, there was an increased activation to negative emotional faces, particularly fearful, with respect to neutral faces between 160 and 180 ms. This effect, which was independent of any residual awareness in the participants, was present for the negative emotional faces (fearful and disgusted) regardless of their later categorization, that is, negative emotional faces resulted in increased activation over frontal regions on trials later classified as emotional or as neutral. It can therefore be concluded that this enhanced neuronal response over frontal regions appears to genuinely reflect an automatic processing of negative facial expressions of emotion, one that is tied neither to any minimal residual awareness in the participants nor to their categorization behavior.
This enhanced emotional positivity, recorded over frontal cortex for both supra- and subliminally presented faces, has been proposed to reflect the rapid detection of facial expressions of emotion (Eimer and Holmes 2007) but is unlikely to be driven directly by amygdala activation as it persists in cases of amygdala damage (Rotstein et al. 2010). Rather, it is thought to originate in prefrontal or anterior cingulate cortical regions (Eimer and Holmes 2007), which have been closely linked to the processing of negative emotions and aversive stimuli (Kawasaki et al. 2001; Etkin et al. 2011). Long-range connections link ventrolateral prefrontal cortex to regions in the ventral visual cortex (Rempel-Clower and Barbas 2000) and have been suggested to rapidly transmit low-spatial frequency coarse information from early visual areas to frontal regions to enable rapid processing of incoming visual stimuli (Bar 2003). Based primarily on the output of magnocellular cells (Kveraga et al. 2007; Barrett and Bar 2009), this short-cut route may be less susceptible to manipulations of awareness and attention (Pessoa and Adolphs 2010), resulting in the type of activation pattern observed in the current study. That is, that regardless of objective awareness and latter categorization choice, negative emotional faces differentially modulate activity over frontal regions soon after stimulus presentation.
The shape and location of the amygdala make it essentially immune to detection with EEG and ERP measures; however, studies with amygdala-damaged populations suggest that its activity directly modulates ERPs in 2 distinct time windows, from 100 to 150 ms and from 500 to 600 ms after presentation of a fearful face (Rotstein et al. 2010). There was no evidence of emotional modulation of ERP components during the early time window, as one might expect of the hypothetical subcortical processing route. This route, which is purported to bypass the ventral visual stream and send coarse low-frequency information directly to the amygdala via the superior colliculus and pulvinar, outside the bounds of attention and awareness, should have been activated by the rapidly presented emotional faces in the present study. However, the present findings indicate that across both anterior and posterior brain regions, the processing of emotional information begins approximately 150 ms following stimulus onset, in a time interval found to be unaffected by amygdala damage (Rotstein et al. 2010). The differential processing observed across these 2 regions (frontal: only negative emotions, primarily fearful, with no apparent tie to categorization; OT: all emotional categories, but only for trials later categorized as emotional) appears to support the recent multiple pathways model of visual information processing (Pessoa and Adolphs 2010). In this model, it is proposed that visual information is processed simultaneously along parallel cortical routes to enable rapid accurate categorization without the need for a subcortical route passing through the amygdala.
The Importance of False Alarms
In their functional magnetic resonance imaging (fMRI) study, Pessoa et al. (2006) observed significant emotional effects in the amygdala and fusiform gyrus not only as a function of awareness (normal vs. super responders) but also as a function of behavioral response. They observed increased activation both for correct detections of fearful faces and for emotional false alarms, where neutral faces were incorrectly classified as fearful. Similarly, in the present study, strong trends indicated the importance of emotional false alarms. Over both frontal and left OT regions, the evidence suggested that emotional false alarm trials resulted in a similar neuronal enhancement as correctly classified emotional hits. Although the prevailing view would suggest that the N170, and its generators in the FFA, are driven by the bottom-up processing of stimulus information, there is growing evidence for a modulation of the N170 response by top-down factors as well. An enhanced N170 is observed when participants search for and detect faces in noise-alone stimuli (Wild and Busey 2004), and the FFA is activated when participants imagine a face (Mechelli et al. 2004) or when they misclassify house stimuli as containing a face (Summerfield et al. 2005). Similarly, prefrontal brain regions have shown increased activation on trials in which participants believe that they are detecting a face stimulus (though a degraded house image was presented) or are simply set the task of searching for a face (Summerfield et al. 2006).
In the context of predictive coding models of perception, the hypothesis would be that areas in prefrontal cortex generate predictions as to the possible content of a visual stimulus via rapid coarse projections from early visual areas (Bar 2003; Bar et al. 2006; Summerfield et al. 2006). Category selective regions, for example, the fusiform face area (Kanwisher et al. 1997) in the fusiform gyrus, would then interpret any ambiguous incoming sensory evidence in the light of these predictions to appropriately categorize the incoming stimulus. Given the adaptive importance of negative facial expressions, and fear in particular, it is perhaps unsurprising that the enhanced frontal responses seen here were significantly present only for the negative emotions in the subliminal case. With such a challenging emotion discrimination task, processing efficiency would suggest initially focusing on the more evolutionarily important fearful or disgusted face detection at the expense of the more minor happy versus neutral distinction. The N170 component, however, is more likely driven primarily by the bottom-up processing of visual information from the stimulus and thus exhibits increased responses to all emotional categories. In this highly ambiguous situation, the combination of frontal predictions and bottom-up processing in the ventral visual stream would then determine the categorization outcome, with emotional predictions over frontal regions on miss trials dismissed by a lack of bottom-up evidence from the parallel visual route.
Considering the effect of false alarm trials may also go some way to explaining the apparently divergent results obtained in earlier studies (Eimer et al. 2008; Kiss and Eimer 2008; Pegna et al. 2008; Japee et al. 2009). When trials are split solely as a function of the presented stimulus, emotional hits are mixed in with miss trials, while neutral trials encompass both correct rejections (neutral hits) and emotional false alarms. If the proportion of false alarms is large enough (e.g., in very poor responders), it may well be that the ERP response to neutral faces is so driven by an enhanced response on the false alarm trials that no difference exists to the emotional trial condition, leading to no observable difference between the presented conditions. In high-performing responders, the proportion of false alarms will necessarily be lower (d′ = z(hit rate) − z(false-alarm rate)), resulting in a neutral ERP driven more by correct rejections (neutral hits), which in turn would be seen to differ from the emotional trials. The resulting picture would then be of a differential neuronal response to subliminally presented faces only in the high-performing group.
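The relationship between d′ and the false-alarm proportion can be made concrete with the standard signal-detection formula, d′ = z(hit rate) − z(false-alarm rate). The sketch below is illustrative, with invented rates; it is not data from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A high performer: many hits, few false alarms -> large d', so the
# neutral-trial ERP average is dominated by correct rejections.
print(round(d_prime(0.80, 0.20), 2))  # 1.68

# A very poor responder: hit and false-alarm rates match -> d' = 0, and
# the "neutral" ERP average is heavily contaminated by false-alarm trials.
print(round(d_prime(0.50, 0.50), 2))  # 0.0
```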
Although further studies that induce a greater number of emotional false alarm trials in the participants (and thus a higher signal-to-noise ratio) are needed to support these conclusions, it seems that simply collapsing trials regardless of overt response is a dangerous strategy in studies of subliminal information processing. To fully explore the processing of subliminally presented information, it is insufficient to dismiss overt response and any top-down biases present in the participants. Rather, trials must be considered as a function of the categorization response in order to explore truly automatic processing (emotional miss trials) independent of top-down expectations and beliefs.
Processing of Emotional Information after 200 ms
Subsequent to the rapid effects over frontal and OT regions to emotional facial expressions, the robust analysis indicated a sustained posterior negativity with associated frontal positivity lasting for a further 150 ms to all 3 emotional face categories. The timing and topographic pattern are indicative of an EPN component, which has previously been observed in response to emotional faces (Eimer et al. 2003; Schacht and Sommer 2009), words (Schacht and Sommer 2009), and pictures (Schupp et al. 2004) and is typically interpreted as an indicator of attention to emotionally arousing stimuli (Schupp, Stockburger, Bublatzky, et al. 2007). In the current context, an EPN for emotional faces (compared with neutral) was found only when emotional faces could be reliably detected, that is, in the aware condition, supporting similar findings for fearful stimuli (Eimer et al. 2008) and suggesting an underlying brain response that is linked to the conscious representation of emotional content. The EPN component, however, may in fact be a marker of a more generic selection of task-relevant information (Schupp, Stockburger, Codispoti, et al. 2007), and future work should seek to explore the functionality of this component in more detail.
Emotional Specificity and Choice of Task
An important motivation of the current study was to extend the study of the subliminal processing of facial expressions of emotion beyond the single category of fear. There is a clear interest in establishing the general nature of any subliminal processing route for a range of other emotion categories. For the first time, this study has shown that subliminally presented disgusted and happy faces are also processed soon after stimulus onset in the absence of objective awareness in the participants. While fearful facial expressions of emotion produce the largest increases in the neuronal response compared with neutral faces, a similar pattern of enhancement is observed for disgusted faces over frontal regions and for both happy and disgusted faces over the N170. When presented supraliminally, all 3 emotion categories result in an enhanced frontal positivity, indicating that the generating brain structures are not specialized to the processing of negative facial expressions per se but that they are activated by happy facial expressions only when awareness is present. Studies of patients with affective blindsight and fMRI masking studies in healthy controls indicate that angry and sad faces also result in enhanced neuronal responses in the absence of awareness (Killgore and Yurgelun-Todd 2004; Pegna et al. 2005), and further work should investigate the time course of processing of the remaining 3 basic expressions of emotion (anger, sadness, and surprise) during both aware and unaware perception conditions.
Uniquely to the present study, participants performed an overt "which expression?" rather than an "expressive or not?" task to engage them in fully processing each of the presented faces. There is clear evidence that qualitatively different visual information is used when participants are simply asked to say whether a face is expressive or not versus explicitly judging the expression shown, with the latter engaging more low spatial frequency information and the former more high (Oliva and Schyns 1999). Furthermore, in everyday life, we continually make decisions about the emotional state of our peers without the prior expectation that the faces will fall into 1 of 2 categories. A "which expression?" categorization task is therefore a more realistic way to study the dynamics of facial expression processing and accurate categorization in the brain.
In this study, I investigated the time course of facial expression processing during subliminal and supraliminal presentation of a range of emotions. Unlike the majority of studies, participants were presented with 3 different emotional faces, happy, fearful, and disgusted, along with neutral in an unblocked design. Behavioral responses indicated that participants were objectively unaware of the subliminally presented emotional faces, but there were clear neural correlates of the emotional content of each face within 170 ms of presentation. Enhanced frontal activation corresponding to the negative facial expressions was present for hit and miss trials alike. Over lateral OT regions, all emotion categories resulted in an enhanced N170 component, but this enhancement was tied to those trials that were later classified as emotional. False alarm trials, where neutral faces were incorrectly categorized as emotional, hinted at the role of top-down predictions in the modulation of both frontal and OT activity. Not only do these results inform the debate on the processing of facial expressions of emotion without awareness, they also indicate the importance of considering behavioral response when assigning trials to experimental conditions for comparison.
In the broader context of facial expressions processing, future studies should explore the impact of individual differences on the processing of emotional faces outside of conscious awareness over frontal and OT regions. Recent evidence has shown that in autistic individuals, who typically exhibit social interaction deficits and abnormal emotion processing (Harms et al. 2010), masked fearful faces fail to engage the subcortical brain regions typically associated with automatic emotion processing but do activate, in a reduced manner, the fusiform gyrus, indicating some residual functionality of rapid emotional face processing mechanisms (Kleinhans et al. 2011). Additionally, anxious individuals, who exhibit a tendency to orient and sustain attention toward threatening facial expressions (e.g., fearful and angry expressions, Bradley et al. 1998), show abnormal neuronal responses to emotional faces, which has been taken as evidence of an early hypervigilance to threatening faces (Holmes et al. 2008; Mueller et al. 2009) and may result in an enhanced processing of emotional faces at the limits of conscious awareness (Japee et al. 2009).
This study was supported in part by a British Academy small research grant (grant SG100746 to M.L.S).
Thanks to Luisa Frei for help with running the ERP studies and to 3 anonymous reviewers. Conflict of Interest: None declared.