It is debated whether mood-congruent recall (i.e., superior recall for mood-congruent material) reflects memory encoding processes or reduces to processes during retrieval. We therefore investigated the neurophysiological correlates of mood-dependent memory during emotional word encoding. Event-related potentials (ERPs) were recorded while participants in good or bad mood states encoded words of positive and negative valence. Words were either complete or had to be generated from fragments. Participants had to memorize the words for subsequent recall. Mood-congruent recall tended to be largest in good mood for generated words. Starting at 200 ms, mood-congruent ERP effects of word valence were obtained in good, but not in bad, mood. For good mood only, source analysis revealed valence-related activity in ventral temporal cortex and, for generated words, also in prefrontal cortex. These areas are known to be involved in semantic processing. Our findings are consistent with the view that mood-congruent recall depends on the activation of mood-congruent semantic knowledge during encoding. Incoming stimuli are more readily transformed according to stored knowledge structures in good mood, particularly during generative encoding tasks. The present results therefore show that mood-congruent memory originates during encoding and cannot be reduced to strategic processes during retrieval.
Modulatory effects of emotion on cognition and behavior have been convincingly demonstrated in the past decades (for an overview, see Fiedler 1991; Ashby and others 1999). Neuroimaging studies have started to elucidate the brain circuits mediating the interaction between cognition and emotion (for an overview, see Phelps 2004). One such behavioral effect is mood-congruent recall: Stimuli that are congruent with subjects' mood state are remembered better than incongruent stimuli. It is debated whether emotional influences on memory performance reflect memory processes or can be reduced to heuristic response biases during retrieval. In order to contribute to this debate, the present research used event-related potential (ERP) recordings to examine the spatial–temporal pattern of brain activity during emotional word encoding.
In behavioral research, 2 basic classes of mood influences on cognition have been the focus of past research: the processing advantage of mood-congruent information (Bower 1981) and the different processing styles triggered by good versus bad mood (Fiedler 1988). Mood-congruency effects reflect a tendency to perceive, encode, retrieve, and utilize more pleasant than unpleasant information in good than in bad mood. As concerns processing styles, numerous studies show that people in good mood can be characterized by a more creative and spontaneous cognitive style than people in bad mood, who tend to be more careful and controlled (for an overview, see Fiedler 2001).
One persistent source of divergence between different theories on mood and cognition lies in the role attributed to semantic processes for mood-congruency effects in episodic memory tasks (for the relation between semantic and episodic memory, see Tulving 1972, 1999). On the one hand, several theorists assume that a major site of the mood-cognition interface is located in memory proper (Isen 1984; Forgas 1995; Fiedler and others 2001). Without denying other influences stemming from motivation and strategic behavior, these authors assume that, first of all, mood influences memory functions independently of the individual's strategies. Such theories postulate associative paths in semantic memory that connect positive emotional states with the semantic meaning of positive memory contents, and negative emotional states with negative contents (Bower 1981).
On the other hand, several theorists have given up the assumption that memory processes are necessary to understand the empirical phenomena (Schwarz and Clore 1988; Clore and others 2001). These authors suggest that people in positive mood may arrive at favorable judgments because they make direct heuristic inferences from their current state (“I feel good”) to the judgment target (“must be good”). These heuristic approaches can provide a sensible explanation for mood influences on cognitive style. Feeling good signals that things are OK and that action can be taken without gathering further data—hence the intuitive and creative style. Differences in memory performance can be explained by assuming that people in good mood will specifically search for positive information in memory compared with people in bad mood.
Against this background we can now introduce our own theoretical conception (Fiedler 1991, 2001). Our theory assumes that different affective states trigger different adaptive functions. Positive states support assimilation, whereas negative states support accommodation, as defined in Piaget's (1954) theory of cognitive development. Accommodation is basically a stimulus-driven bottom-up process by which organisms adapt to the stimulus input and assess the environment as accurately and carefully as possible. Conversely, assimilation is a top-down process by which organisms actively transform the input according to internal knowledge structures.
The notion of assimilation and accommodation functions and their relation to emotional states corresponds with the distinction of approach and withdrawal behavior in neuropsychological and neurobiological theories of emotion (Gabriel and others 1987; LeDoux 1996). It has been shown that appetitive and aversive situations elicit activity in at least partially distinguishable neural circuits associated with reward and punishment, respectively (LeDoux 1996; Ashby and others 1999; Rolls 1999). Appetitive situations signal reward and induce approach and exploration behavior, which means trying out new options (i.e., assimilation). In contrast, aversive situations indicate punishment and induce withdrawal behavior, requiring the organism to be attentive and to avoid mistakes (i.e., accommodation). The orbitofrontal cortex and dopaminergic projections passing through the nucleus accumbens supposedly play important roles in mediating reward, whereas the amygdala has been suggested to be crucially involved in signaling punishment (Blood and Zatorre 2001; Erk and others 2003).
Assimilation and accommodation are in principle involved in every episodic memory task, but to different degrees depending on the task, and with different consequences for memory performance. Both assimilation and accommodation result in the formation of new episodic memory traces, but they differ in how new information is integrated into stored internal knowledge structures through the application of semantic knowledge: During accommodative encoding processes, the incoming information is transformed very little, and the internal knowledge structures are altered to fit the new information. Accordingly, accommodative encoding does not involve the activation of semantic knowledge for transforming the incoming information. During assimilative encoding processes, in contrast, semantic knowledge structures are activated and used to actively transform incoming information. As a consequence, the incoming information is altered to fit the internal knowledge. Hence, accommodative and assimilative processes crucially differ with respect to the involvement of semantic processing. From these considerations one can infer that mood states should have a systematic effect on memory performance depending on the memory task. In fact, positive mood facilitates performance on generative (i.e., assimilative) memory tasks like free recall (Kensinger and others 2002; Erk and others 2003), in which memory cues actively generated from internal knowledge structures are used to retrieve stored information. Bad mood, in contrast, facilitates performance on accommodative memory tasks like recognition (e.g., Cahill and McGaugh 1998) that require the retrieval of information based on external memory cues. Similarly, discrimination learning, an accommodative task that affords careful evaluation of the incoming stimuli, is particularly rapid within a negative emotional context triggering avoidance behavior (Gabriel and others 1980, 1987).
Hence, the assumptions of the assimilation–accommodation theory are entirely consistent with Gabriel's neurophysiological theory of avoidance learning.
Like the heuristic approach, the present framework provides a natural account of mood-dependent processing styles. However, with reference to mood-congruency effects, the assimilation–accommodation notion leads to distinct implications that can hardly be predicted by other approaches. As mood-congruency is by definition an assimilative phenomenon that reflects the activation of mood-congruent semantic knowledge structures, it follows that mood-congruent memory effects should be stronger in good than in bad mood (e.g., Isen 1984) and should be larger for assimilative memory tasks like free recall than for accommodative memory tasks like recognition (Gerrig and Bower 1982).
The assimilation–accommodation approach can be tested in a straightforward manner in the generation effect paradigm (e.g., Dosher and Russo 1976), which incorporates the assimilation–accommodation distinction as a design factor. In this paradigm, some words in a learning list are presented completely, whereas others are presented as fragments (“v - - t - ry”), and participants have to actively generate the semantic stimulus meaning (“victory”). Typically, memory for self-generated information is found to be superior to memory for passively received, experimenter-provided information. When mood-congruency is examined within this paradigm (Fiedler 1991; Fiedler and others 2003), the findings are consistent with our theoretical account. A 3-way interaction between mood, valence, and encoding task shows that mood congruency is most apparent in participants in good mood for self-generated information, in close agreement with the theory's specific implications. Hence, mood-congruent memory effects only emerge when information is actively manipulated during encoding, that is, in tasks and mood states that support the activation of semantic knowledge.
In several neuroimaging studies, the neural correlates underlying the interaction between emotion and memory have been determined (for an overview, see Phelps 2004). However, only a few studies have assessed the influences of mood or emotional context during encoding. Erk and others (2003) investigated the influence of the emotional context (positive, negative, and neutral emotional pictures) on the encoding of emotionally neutral words using functional magnetic resonance imaging (fMRI). At a behavioral level, they found superior recall for words encoded within a positive than within a negative context, as predicted by the assimilation–accommodation approach. At a neural level, Erk and others observed in a positive emotional context brain activity predictive for subsequent recall (subsequent memory effect) in ventro-medial temporal lobe structures (parahippocampal, fusiform, and lingual gyri), brain regions known to be involved in semantic processing (e.g., Vandenberghe and others 1996) as well as in episodic memory encoding (Wagner and others 1998). In a negative emotional context, in contrast, activity in the amygdala was predictive for subsequent recall, an area involved in signaling punishment and in fear processing (LeDoux 1996). The common involvement of brain areas in semantic processing and in successful episodic memory encoding (see also Lepage and others 2000), particularly in a positive emotional context, is in line with the suggestion of the assimilation–accommodation approach that good mood supports activation of semantic knowledge structures during episodic memory encoding. As “deep” semantic encoding results in superior memory performance compared with “shallow,” nonsemantic encoding (Craik and Lockhart 1972), words encoded within a positive emotional context might subsequently be recalled more frequently than words encoded within a negative context.
A recent fMRI study by Lewis and others (2005) assessed brain mechanisms of mood-congruent memory for emotionally valenced words by varying mood states at retrieval. They looked at common activity during encoding and retrieval for mood-congruent words. Lewis and others found common activity in the subgenual cingulate for positive words and in the posterolateral orbitofrontal cortex for negative words. The discrepant neuroanatomical locations of mood/emotional context effects in these 2 studies might be attributed to the fact that Erk and others varied the emotional context during encoding, whereas Lewis and others manipulated the mood state/emotional context during retrieval.
As a complement to fMRI, ERP recordings capture brain activity online within the time range of milliseconds, although their spatial resolution is poorer. An ERP deflection that is relevant for studying mood influences on the activation of semantic knowledge structures is the N400 ERP component. The N400 is a negative potential over the centro-parietal scalp peaking at about 400 ms after stimulus onset (Kutas and Hillyard 1980). The N400 has been shown to be sensitive to semantic deviations, with larger N400 amplitudes for stimuli incongruent with the semantic context (e.g., Kutas and Hillyard 1980; Bentin and others 1985; Kiefer 2002). N400 amplitude is smaller when the semantic meaning is already activated by a preceding semantic context. Hence, N400 modulation indexes the activation of semantic knowledge structures. The centro-parietal N400 scalp potential has been related to activity in the ventro-medial temporal lobe by intracranial ERP recordings (Nobre and McCarthy 1995). As mentioned above, the significance of this area for semantic processing as well as for episodic memory encoding has been demonstrated in several fMRI studies. In addition to the centro-parietal N400, ERP effects associated with semantic processing have also been observed over fronto-temporal regions (Snyder and others 1995; Kiefer and others 1998). These ERP effects have been related to activity of the inferior prefrontal cortex, a brain region known to be involved in controlled retrieval of semantic information (Wagner and others 2001). As semantic retrieval supports episodic encoding (Tulving 1999), activity in inferior prefrontal cortex during encoding is predictive for subsequent recall performance (Wagner and others 1998).
In a previous study (Chung and others 1996), ERPs have been used to examine mood state–dependent semantic expectancy. Chung and others investigated participants in positive and negative mood states, respectively, reading brief stories of life events whose final word represented either a good or a bad outcome of that story or was semantically incongruent. Semantically incongruent words elicited the largest N400 potential, but mood-incongruent outcomes were associated with a larger N400 than mood-congruent outcomes. Chung and others conclude from these results that mood states impose an emotional constraint on the access of semantic word meaning. However, as Chung and others did not probe memory for the story outcomes, the relation between the mood-congruent N400 effects and mood-congruent memory cannot be determined.
The neural correlates of mood-congruent memory encoding have received little attention in the past. The studies so far have mainly focused on the influences of mood or emotional context during retrieval (Maratos and others 2000, 2001; Lewis and others 2005). This is surprising, as the investigation of mood-dependent ERP effects during memory encoding directly bears on the controversial issue of whether mood-congruent recall has a cognitive origin in memory processes or simply reflects heuristic output biases during retrieval. In order to contribute to a clarification of this debate, we investigated the electrophysiological correlates of mood-congruent memory encoding processes in the generation effect paradigm. ERPs were recorded while participants in good or bad mood states were presented with words of positive and negative valence. Participants had to memorize the words for subsequent recall. In one encoding condition, words were fragmented and had to be actively generated; in a second condition, words were intact and thus were received passively. ERPs were subjected to source analyses in order to estimate the brain areas involved in the corresponding processes. If mood-congruent recall depends on activation of semantic knowledge structures during encoding and does not solely reflect a strategic output bias during retrieval, encoding processes are expected to vary in part with participants' emotional mood state. More specifically, mood-congruent semantic processes during encoding should be reflected in a modulation of the N400 ERP component.
We hypothesized that the individual mood state activates mood-congruent knowledge structures hereby providing a semantic context for the words to be encoded. Words of incongruent valence with the present mood state were expected to elicit a larger N400 amplitude than congruent words. More specifically, as outlined above, the assimilation–accommodation approach to mood and memory predicts a triple interaction between mood, valence, and encoding task: The mood-congruency effect on the N400 should be mainly present in positive mood states and larger for generated than for received words. That is, N400 amplitude should be less pronounced for positively valenced, generated words in good mood compared with any other condition. We expected that this ERP effect depends on sources in the ventro-medial temporal lobe and inferior prefrontal areas, particularly in good mood for actively generated material.
Thirty-eight right-handed volunteers (8 male, 30 female; mean age 26 years) with normal or corrected-to-normal vision participated in the study. Handedness was assessed with the Oldfield Handedness Inventory (Oldfield 1971). Participants were native German speakers without any history of neurological or psychiatric illnesses according to the results of a structured interview. Gender was identically distributed (15 female/4 male) in the 2 participant groups who received an induction of a good or bad mood state, respectively (for the mood induction procedure, see below). All participants gave written informed consent after the nature and the consequences of the experiment had been explained. The study was approved by the local ethics committee.
One hundred sixty adjectives referring to personality traits served as stimuli. Adjectives were drawn from the German Handbook of Word Norms (Hager and Hasselhorn 1994), which contains words rated on Osgood's dimensions (Osgood and others 1957) of valence and arousal by large normative samples. Half of the words were of positive, the other half of negative valence. Only words with strong positive or negative valence were included. Table 1 shows examples of the adjectives. Average valence for the positive and negative words was +2 and −2, respectively (on a rating scale ranging from −3 to +3). Absolute mean values did not differ significantly between valence conditions (t < 1). Moreover, positive and negative words were also matched in word frequency (9.1 per million for both valence conditions, t < 1) and word length (positive: 8.8 letters; negative: 8.5 letters, t < 1). We equated our stimuli with respect to arousal as closely as possible (positive: 0.99, negative: 0.30; on a scale from 0 to 5). However, in order to obtain 80 adjectives per valence condition matched for a variety of linguistic variables, the matching in arousal was not perfect (t(158) = 4.7, P < 0.001). As we were mainly interested in interactions between valence and mood states and not in the effect of valence per se, it was acceptable to us that valence and arousal were confounded to some extent. Furthermore, a pilot study without mood induction (see below) showed that the valence conditions were comparable with regard to recall difficulty.
| | Positive valence | Negative valence |
| --- | --- | --- |
| Read | begabt (gifted) | grausam (cruel) |
| Generate | a_zieh_nd (attractive) | ar_og_nt (arrogant) |
Notes: The entire stimulus list can be obtained from the authors.
All 160 words were fragmentized by removing 1–3 letters from each word, depending on word length. The first letter was never removed. We ensured that the word fragments were unambiguous. To test for difficulty of positive and negative word fragments, a pretest was conducted on the fragmentized words. Five participants had to learn 8 lists of 20 words each and performed a free recall test after each list. Words were presented on a computer screen, one at a time. Participants were instructed to press a key as fast as possible when they recognized the word and to name it thereafter. As dependent measures, reaction time (RT), error rate (ER), and recall performance were analyzed. ER was low (<0.01%) and did not differ significantly between valence conditions, but positive word fragments were recognized more quickly than negative ones (800 and 906 ms, respectively; t(158) = 9.8, P < 0.01). The RT advantage for positive words does not necessarily imply that the fragments differed in difficulty, because a similar advantage for positive over negative material is often found for complete words, too (“repression effect,” see Fiedler 1991). Most importantly, an equal number of positive (32.8) and negative word fragments (33.2) were remembered in the free recall test (t < 1). Thus, word fragments were comparable with respect to recall difficulty.
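The fragmentization rule can be sketched as follows. The word-length cut-offs and the random choice of letter positions are illustrative assumptions; the authors selected positions so that each fragment was unambiguous.

```python
import random

def fragmentize(word, seed=None):
    """Sketch of the rule described above: remove 1-3 letters depending
    on word length, never the first letter. The cut-offs (6 and 9 letters)
    and the random position choice are our assumptions, not the authors'
    exact procedure."""
    rng = random.Random(seed)
    n_remove = 1 if len(word) <= 6 else (2 if len(word) <= 9 else 3)
    positions = rng.sample(range(1, len(word)), n_remove)  # index 0 excluded
    return "".join("_" if i in positions else c for i, c in enumerate(word))

# An 8-letter word loses 2 letters, yielding a fragment like 'ar_og_nt'
print(fragmentize("arrogant", seed=1))
```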
Adjectives were divided into 8 lists of 20 words each. Each list contained an equal number of positive and negative words, and orthogonally to word valence, an equal number of complete and fragmentized word stimuli. Mean positive and negative valence, word frequency, and word length were matched across lists. Stimulus order within each list and presentation order of lists were randomized. For each word list, versions A and B were created. When a word was presented in its complete form in version A, it was presented as a fragment in version B, and vice versa. The A and B versions of the lists as well as list order were counterbalanced across participants. Thus, each participant saw a word only once, either in its complete or in its fragmentized form, but across participants each word appeared equally often in both forms.
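The A/B counterbalancing logic can be sketched as follows; the particular assignment rule (first half complete, second half fragmented) is an assumption for illustration, since within each real list the assignment was also crossed with valence.

```python
def make_versions(words):
    """Sketch of the A/B counterbalancing described above: version B flips
    the complete/fragment assignment of version A, so across participants
    each word appears equally often in both forms, but only once per
    participant. The split rule here is a simplifying assumption."""
    half = len(words) // 2
    version_a = [(w, "complete" if i < half else "fragment")
                 for i, w in enumerate(words)]
    version_b = [(w, "fragment" if task == "complete" else "complete")
                 for w, task in version_a]
    return version_a, version_b
```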
Good and bad mood states were induced by sound films. For both mood conditions, we used a series of 4 short videotapes (from 3 to 5 min each) with either funny or sad content, respectively. The films had already been successfully used in earlier studies on emotion (e.g., Fiedler and others 2001, 2003). For good mood, the following films were presented: 1) A young and clumsy bear is trying to catch a frog (with underlying music). 2) A flock of ostriches is moving as if in a ballet (with underlying music). 3) Charlie Chaplin in a roller-skating disco. 4) Mr Bean in the parking garage, Mr Bean is moving house. For bad mood, the following films were presented: 1) About the persecution of the Jews in Poland under the Nazi regime. 2) Documentary film about the last days of a doomed man in jail. 3) About apartheid in South Africa (people are mourning for Steve Biko in 1977). 4) A newscast about the suppression of students' demonstrations in Peking in the 1980s. Happy and sad films were rated for arousal on a scale from 1 (low arousal) to 5 (high arousal) by 6 subjects who did not participate in the main study. The presentation order of the films was counterbalanced. There was no significant difference in arousal between the films in a sign test (Z = 1.7, P = 0.07), although the sad films (mean = 3.4) tended to be slightly more arousing than the happy films (mean = 2.6).
Participants were seated in front of a computer screen in a dimly lit, electrically shielded, sound attenuated booth. They were instructed that the study was aimed at revealing the influence of mental work load due to a memory task on mood. They were told that during the study several films would be shown, and that they were supposed, for the success of the study, to let the films take effect on them emotionally. Participants were also informed that they would be presented with 8 word lists, which had to be memorized and recalled.
Learning of word lists took place in the following way: Stimuli were displayed in white font against a black background in the center of a computer screen synchronously with the screen refresh rate. Participants were first presented with a fixation cross for 750 ms, thereafter with a word from the list for 1200 ms, which could be complete or fragmented. Subsequently, a blank screen was shown for 1800 ms. Then, a question mark appeared for 2000 ms, which prompted the participants to name the word aloud. They had to withhold the response until the appearance of the question mark in order to avoid movement-related artifacts in the electroencephalogram (EEG). The produced word and the correctness of the response were recorded by the experimenter. After the question mark disappeared, a hash mark was presented for 2700 ms to signal the intertrial interval to the participants. The next trial started again with the presentation of the fixation cross. After the presentation of each list, a free recall test was performed. Participants had to orally recall as many words as possible from the list just learned within 3 min. The experimenter noted all produced words and classified them later as being correctly recalled or not.
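The trial timeline above can be summarized compactly; a real experiment would drive a presentation library with these values, but here the sequence is just modeled as data to make the timing explicit.

```python
# Event sequence of one encoding trial as described above (durations in ms).
TRIAL_EVENTS = [
    ("fixation cross", 750),
    ("word (complete or fragment)", 1200),
    ("blank screen", 1800),
    ("question mark: name word aloud", 2000),
    ("hash mark: intertrial interval", 2700),
]

trial_duration = sum(duration for _, duration in TRIAL_EVENTS)
print(f"Total trial duration: {trial_duration} ms")  # prints 8450 ms
```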
At the beginning of the experiment, participants were familiarized with the procedure and had to learn and subsequently to recall a training list. Thereafter, a first mood rating was administered, which served as baseline in order to control for preexisting differences in mood state. Subjects were asked: “Please indicate your momentary emotional state.” Responses were given on a 118 millimeter graphical scale anchored “very depressed” on the left and “very elated” on the right.
Afterward, mood states were induced by showing the first film. Half of the participants were induced with a good mood, the other half with a bad mood. After film presentation, participants had to answer 6 questions about their experience with the film and, as a manipulation check, had to rate their current mood again. Thereafter, they had to learn and to recall the first word list, and afterward they learned and recalled the second word list. This cycle of mood induction, manipulation check, and learning and recall of 2 lists was repeated 4 times (see also Fig. 1). At the end of the experiment, participants were asked about the strategies they had employed for memorizing the words. Finally, participants were debriefed. An entire experimental session including electrode placement took about 3 h.
EEG Recording and Analysis
Scalp voltages were recorded using an equidistant montage of 64 sintered Ag/AgCl electrodes mounted in a cap (Easy Cap, EasyCap, Herrsching-Breitbrunn, Germany). An electrode between positions Fpz and Fz of the international 10/20 system was connected to the ground, and an electrode between Cz and FCz was used as recording reference. Eye movements were monitored with supra- and infraorbital electrodes and with electrodes on the external canthi. Electrode impedance was kept below 5 kΩ. Electrical signals were amplified with Synamps amplifiers (bandwidth DC–70 Hz, 50-Hz notch filter), continuously recorded (digitization rate = 250 Hz), digitally band-pass filtered (high cut-off: 16 Hz, 24 dB/octave attenuation; low cut-off: 0.1 Hz, 12 dB/octave attenuation), and segmented (150 ms before to 800 ms after the onset of the word to be encoded). Artifacts from vertical eye movements and eye blinks were removed using the regression technique of Gratton and others (1983). EEG segments were baseline corrected to the 150-ms prestimulus interval. Segments exceeding a potential threshold of ±50 μV in the horizontal electrooculogram (HEOG) channel or of ±75 μV in the remaining channels were rejected as artifacts. Segments with correct naming responses during encoding were averaged separately for each experimental condition. In order to obtain a reference-independent estimation of scalp voltage, the average-reference transformation was applied to the ERP data (Scherg and von Cramon 1984; Bertrand and others 1985). EEG data analysis was performed with BrainVision Analyzer (BrainProducts, Munich, Germany).
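The segment-level steps described above (baseline correction, amplitude-based artifact rejection, averaging, average-reference transform) can be sketched with NumPy. This is a minimal illustration under assumed array shapes; filtering, the ocular regression correction, and the separate HEOG threshold are omitted.

```python
import numpy as np

def preprocess_and_average(segments, times, threshold_uv=75.0):
    """Sketch of the averaging pipeline described above.
    segments: array of shape (n_trials, n_channels, n_samples), in microvolts.
    times: sample times in ms (here -150 to 800 relative to word onset)."""
    # Baseline-correct each segment to the 150-ms prestimulus interval
    baseline = segments[:, :, times < 0].mean(axis=2, keepdims=True)
    segments = segments - baseline
    # Reject segments exceeding the amplitude threshold in any channel
    keep = np.abs(segments).max(axis=(1, 2)) <= threshold_uv
    # Average the surviving segments, then apply the average-reference
    # transform (subtract the mean over channels at each time point)
    erp = segments[keep].mean(axis=0)
    erp = erp - erp.mean(axis=0, keepdims=True)
    return erp, keep
```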
Mean voltages were analyzed statistically in 3 time windows. The first time window (200–350 ms after the onset of the word) covered the onset of the N400, the second time window (350–500 ms) the peak of the N400, and the third time window (500–650 ms) a positive slow wave following the N400 (late positive complex [LPC]). Three scalp regions of interest, each of them represented by 3 pairs of contralateral electrodes, were selected for analysis: occipito-parietal (O1/O2, PO3/PO4, P1/P2), central (CP3/CP4, C1/C2, C3/C4), and fronto-temporal (T1/T2, FT7/FT8, F9/F10). The central and occipito-parietal electrode sites were chosen because the N400 is known to be largest over these scalp regions (Kutas and Hillyard 1980). As ERP effects related to semantic processing have also been reported over fronto-temporal scalp (Snyder and others 1995; Kiefer and others 1998), electrodes in this region were also selected. Repeated measures analyses of variance (ANOVAs) were performed separately for each time window (P level of 0.05). When appropriate, degrees of freedom were adjusted according to the method of Greenhouse–Geisser, and the Greenhouse–Geisser ϵ as well as the corrected significance levels are reported.
In order to determine the neural sources for significant group effects in emotional word encoding, source analyses were performed within the N400 and the LPC time windows where ERP differences were largest. Sources were computed for the grand averaged ERP difference waves between positive and negative words in order to focus on valence-related brain activity and to eliminate unspecific brain activity related to word reading. As the different methods of ERP source analysis have specific strengths and weaknesses, we estimated sources using both distributed source modeling (minimum norm source estimates) and dipole modeling. In both cases, we used the algorithms implemented in Brain Electrical Source Analysis 2000 (MEGIS, Munich, Germany) (Scherg and others 2001). For distributed source modeling, maps of estimated cortical currents were calculated from scalp voltages according to the minimum-norm method. This method yields the unique solution that explains the data and does not contain components that are “silent,” that is, do not produce any measurable surface signal by themselves (Hamalainen and Ilmoniemi 1994; Hauk 2004). In contrast to dipole modeling (see below), distributed source modeling does not require any assumptions about the number of sources. Cortical currents were calculated for the time points of maximal global field power (GFP) in the ERP difference waves of each condition in order to ensure optimal signal-to-noise ratio.
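In standard notation (our addition, not given in the text), the minimum-norm method selects, among all current distributions $\mathbf{j}$ that reproduce the measured scalp data $\mathbf{d}$ through the leadfield matrix $\mathbf{L}$, the one with the smallest L2 norm:

```latex
\hat{\mathbf{j}} \;=\; \arg\min_{\mathbf{j}} \|\mathbf{j}\|_{2}
\quad \text{subject to} \quad \mathbf{L}\,\mathbf{j} = \mathbf{d}
\;\;\Longrightarrow\;\;
\hat{\mathbf{j}} \;=\; \mathbf{L}^{\top}\!\left(\mathbf{L}\,\mathbf{L}^{\top}\right)^{-1}\mathbf{d}
```

This is why the solution contains no "silent" components: any current in the null space of $\mathbf{L}$ would leave the surface signal unchanged while only increasing the norm, so it is excluded by construction.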
In dipole modeling, the electrical current flow in a given brain area is modeled by dipole sources at a certain location and with a given orientation. We used regional sources in order to capture current flow in all 3 spatial orientations (Frishkoff and others 2004). In order to develop the model, we started with a pair of symmetrical regional sources. Further pairs of sources were added to the model until the reduction of residual variance (RV) was smaller than 0.5%. Likewise, a source of one pair was deleted if it did not reduce RV by more than 0.5%. Dipole source analysis was performed for the time interval with maximal GFP (greater than 95%) in each condition within the 300- to 650-ms time window.
Manipulation Check of Mood Induction
Mood ratings before the first (t0) and after the 4 mood-inducing films (t1…t4) were submitted to a repeated-measures ANOVA with time point of rating (t0…t4) as within-subject factor and induced mood state as between-subject factor. This analysis yielded significant main effects of time point (F4,36 = 6.042, mean square of errors [MSe] = 165, ϵ = 0.592, P < 0.01) and mood (F1,36 = 25.491, MSe = 1176, P < 0.0001) as well as a significant interaction between both factors (F4,36 = 15.284, MSe = 165, ϵ = 0.592, P < 0.0001). Newman–Keuls post hoc tests showed that within the group receiving the happy movies, ratings differed significantly from baseline at t3 (P < 0.01) and marginally (P = 0.08) at t4, whereas within the group receiving the negative movies, ratings differed significantly from baseline at all time points (all Ps < 0.01). Most importantly, mood ratings did not differ between groups at the beginning of the experiment (at t0, before the mood induction) but did differ at all subsequent time points (all Ps < 0.01). As expected, participants receiving the sad movies (bad mood condition) rated their mood as more depressed, whereas participants receiving the happy movies (good mood condition) exhibited more elevated mood ratings (see Fig. 2). Thus, there were no pre-experimental differences in mood states between the 2 experimental groups, and the mood induction procedure elicited changes of mood states in the expected direction.
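The nonsphericity correction factor ϵ reported above (Greenhouse–Geisser) can be computed from the eigenstructure of the double-centered covariance matrix of the repeated measures. A minimal sketch, using synthetic ratings rather than the study's data:

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a subjects x levels repeated-measures array.

    epsilon = tr(Sc)^2 / ((k - 1) * tr(Sc @ Sc)), where Sc is the
    double-centered covariance matrix of the k within-subject levels.
    """
    k = data.shape[1]
    S = np.cov(data, rowvar=False)           # k x k covariance across subjects
    C = np.eye(k) - np.ones((k, k)) / k      # centering matrix
    Sc = C @ S @ C
    return np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc))

# Synthetic mood ratings: 20 subjects x 5 time points (t0..t4), 0-100 mm scale
rng = np.random.default_rng(0)
ratings = 70 + 5 * rng.standard_normal((20, 5))
eps = gg_epsilon(ratings)
# epsilon is bounded: 1/(k-1) <= epsilon <= 1; values below 1 shrink the
# ANOVA degrees of freedom to compensate for violated sphericity.
print(round(eps, 3))
```

Multiplying both the numerator and denominator degrees of freedom by ϵ (as in the F4,36 tests above) yields the corrected critical values.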
Performance in the encoding tasks was very accurate and close to ceiling in all participants. Participants named on average 39.85 words correctly (out of 40) in the “read” condition and 38.75 words in the “generate” condition.
We predicted an asymmetrically stronger influence of mood on valence in the “generate” than in the “read” condition, for which standard ANOVA interaction tests are not very sensitive (Rosenthal and Rosnow 1985). Therefore, we computed a more refined recall index intended to tap the predicted pattern. As we had the a priori hypothesis that mood-congruent recall should be largest for good mood, particularly during elaborative encoding (“generate” condition), the recall index was defined as a weighted sum of the 4 recall measures, with the following weights: number of recalled positive words in the “generate” condition, weighted by +1.0; number of recalled negative words in the “generate” condition, weighted by −0.5; number of recalled positive words in the “read” condition, weighted by +0.5; number of recalled negative words in the “read” condition, weighted by −0.25.
A high score on this index reflects the memory performance that is theoretically expected of participants in good as opposed to bad mood because good mood should foster the recall of both mood-congruent and self-generated stimuli. Note that the weights reflect the expected pattern of facilitation and inhibition effects rather than a contrast vector, which would have to sum to zero. Indeed, a 1-sided t-test for independent samples on the resulting contrast scores yielded a significant effect in the expected direction (t(18) = 1.706, P < 0.05): Scores were higher for participants in good than in bad mood. This effect was mainly due to the fact that the recall rate was highest for generated, positive words in participants with positive mood compared with the other conditions (see Fig. 3).
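The index and the group comparison can be sketched as follows; the recall counts here are simulated for illustration and are not the study's data (see Fig. 3 for those):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-participant recall counts (simulated, not the study's data);
# columns: generate/positive, generate/negative, read/positive, read/negative
good = rng.poisson((12, 8, 9, 8), size=(10, 4)).astype(float)  # good-mood group
bad = rng.poisson((8, 9, 8, 8), size=(10, 4)).astype(float)    # bad-mood group

# Weighted recall index with the weights defined in the text
w = np.array([1.0, -0.5, 0.5, -0.25])
idx_good, idx_bad = good @ w, bad @ w

# One-sided independent-samples t-statistic (pooled variance, df = n1 + n2 - 2)
n1, n2 = len(idx_good), len(idx_bad)
sp2 = ((n1 - 1) * idx_good.var(ddof=1) + (n2 - 1) * idx_bad.var(ddof=1)) / (n1 + n2 - 2)
t = (idx_good.mean() - idx_bad.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
print(round(t, 3))
```

For example, a participant who recalls 10, 8, 8, and 8 words in the four cells receives an index of 10 − 4 + 4 − 2 = 8.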
After the experiment, subjects were questioned regarding the strategy they had employed during encoding. Reported encoding strategies were classified as elaborative (e.g., forming a story out of the presented words, associating the words with specific persons), nonelaborative (e.g., list rehearsal, rote memorizing), or a combination of both. A χ2-test revealed a significant association between strategy choice and mood state (χ2(2) = 9.812, P < 0.01). Participants in good mood most frequently reported elaborative strategies, whereas participants in bad mood most frequently reported nonelaborative strategies (see also Table 2).
Words to be encoded elicited P1 and N1 ERP components peaking at about 100 and 200 ms after stimulus onset, respectively, which are related to visual sensory processing. Thereafter, a negative deflection was observed at central and occipito-parietal electrodes, which peaked at about 400 ms. According to its polarity, latency, and topography, this potential was identified as the N400 ERP component (Kutas and Hillyard 1980). As can be seen in Figure 4, ERPs to positive and negative words diverged within the N400 time interval (between about 300 and 500 ms after stimulus onset) at occipito-parietal and central electrodes: Overall, encoding of negative words elicited a larger N400 (i.e., a relatively greater negative potential) than encoding of positive words. (The N400 ERP component partially overlapped with a positive wave (LPC) in some scalp regions. Therefore, it appears only as a negative deflection in the waveforms rather than as a negative potential. At these electrodes, a less positive potential in one experimental condition compared with another condition indexes a greater N400. However, for clarity, the term “more negative” is always used in this text to indicate a greater N400.) At fronto-temporal electrodes, a polarity-reversed voltage pattern was observed. These ERP effects of stimulus valence were modulated by participants' mood states. The interaction between mood and valence was most striking at fronto-temporal electrodes. Effects of valence were present only in participants with good mood states but were almost absent in bad mood.
These ERP effects were evaluated statistically in 3 time windows: An early time window (200–350 ms after stimulus onset) captured the onset of the N400 component. An intermediate time window (350–500 ms after stimulus onset) covered the peak of the N400, and a late time window (500–650 ms after stimulus onset) included a positive slow wave, the LPC, following the N400 (for further details see Method). For each time window, we first calculated a global 6-way ANOVA including the between-subject factor mood state (good vs. bad mood) and the within-subject factors encoding task (read vs. generate), valence (positive vs. negative), scalp region (occipito-parietal, central, and fronto-temporal), hemisphere (left vs. right), and electrode site. Significant interactions were further evaluated in separate subsidiary ANOVAs for each scalp region. In order to reduce the complexity of the results section, we focus on the interactions of mood and valence due to their theoretical importance.
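The dependent measure entering these ANOVAs, the mean amplitude per channel in each time window, can be extracted as sketched below. The sampling rate, epoch limits, and data are assumed for illustration and are not taken from the study's recording parameters:

```python
import numpy as np

# Synthetic ERP epoch: 64 channels, -100 to 800 ms at 500 Hz (450 samples)
fs, t0 = 500, -0.1                      # assumed sampling rate (Hz), epoch start (s)
erp = np.random.default_rng(3).standard_normal((64, 450))

def window_mean(erp, start_ms, end_ms):
    """Mean amplitude per channel within a post-stimulus time window."""
    i0 = int(round((start_ms / 1000 - t0) * fs))
    i1 = int(round((end_ms / 1000 - t0) * fs))
    return erp[:, i0:i1].mean(axis=1)

# The three analysis windows from the text (ms after stimulus onset)
windows = [(200, 350), (350, 500), (500, 650)]
amps = np.stack([window_mean(erp, a, b) for a, b in windows])
print(amps.shape)  # (3, 64): windows x channels
```

Averaging the resulting channel values within each scalp region and hemisphere yields the cell means that enter the mixed-design ANOVA.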
At 200–350 ms after Stimulus Onset
Valence-related ERP effects were significantly modulated by participants' mood state as shown by mood × scalp region × hemisphere (F2,72 = 3.556, MSe = 12.910, ϵ = 0.766, P < 0.05) and mood × valence × hemisphere interactions (F1,36 = 7.990, MSe = 1.25, P < 0.01). These interactions were further evaluated in separate ANOVAs for each scalp region. At occipito-parietal (F1,36 = 6.647, MSe = 0.50, P < 0.05) and central electrodes (F1,36 = 8.609, MSe = 0.63, P < 0.01), a mood × valence × hemisphere interaction was obtained. Planned contrasts showed that valence affected ERPs only in the good mood state over the left central scalp region: Positive words elicited a less negative potential than negative words (see also Fig. 5). In negative mood, valence-related effects were not significant. At fronto-temporal electrodes, valence-related effects were not significant in either group. The topography of the valence-related effect in good mood is shown in Figure 6.
At 350–500 ms after Stimulus Onset
Valence-related effects were again modulated by participants' mood state (mood × valence × hemisphere: F1,36 = 4.422, MSe = 0.42, P < 0.05; mood × valence × encoding task × scalp region: F2,72 = 3.741, MSe = 3.17, P < 0.05). These complex interactions were further assessed in separate ANOVAs for each scalp region. At occipito-parietal electrodes, only a main effect of valence was obtained (F1,36 = 20.701, MSe = 2.76, P < 0.0001): In both groups, positive words elicited a less negative potential than negative words.
Most importantly, at central electrodes, mood states differentially modulated ERPs as shown by a mood × valence × hemisphere interaction (F1,36 = 6.078, MSe = 0.64, P < 0.05). The latter interaction was further assessed with planned contrasts. In good mood, positive words elicited a less negative potential than negative words particularly over the left hemisphere, although mean differences were statistically reliable over both hemispheres. In bad mood, mean differences between valence conditions were not statistically significant (see Fig. 5).
At fronto-temporal electrodes, valence differentially affected ERPs depending on the mood state as shown by the triple interaction mood × valence × encoding task (F1,36 = 9.245, MSe = 2.24, P < 0.01). The latter interaction was further evaluated with planned contrasts. This analysis showed that valence affected ERPs only in good but not in bad mood state. In good mood, positive words elicited a more negative potential than negative words in both encoding tasks, but valence-related ERP effects were larger for the “read” than for the “generate” condition. In bad mood, valence-related ERP effects were not statistically significant. The topography of valence-related effects in positive and negative mood is shown in Figure 6.
At 500–650 ms after Stimulus Presentation
In the global ANOVA, the mood × valence × encoding task × scalp region × electrode site interaction did not reach significance (F4,144 = 1.834, MSe = 0.61, ϵ = 0.85, P = 0.137). As in the previous time window, separate ANOVAs for each scalp region were performed. At occipito-parietal (F1,36 = 15.291, MSe = 3.61, P < 0.001) and central electrodes (F1,36 = 7.215, MSe = 2.56, P < 0.05), valence modulated ERPs as indexed by a main effect.
At fronto-temporal electrodes, a mood × valence × encoding task × electrode site interaction (F2,72 = 3.948, MSe = 0.66, ϵ = 0.796, P < 0.05) suggested that valence-related ERP effects were modulated by the mood state. Subsequent planned contrasts showed that valence modulated ERPs only in subjects with good mood. As can be seen in Figure 5, in subjects with good mood, positive words elicited a more negative potential than negative words at the most anterior electrode site F9/10 in the “generate” condition. In bad mood, the effect of valence was not statistically significant (for the topography of the valence-related effects see Fig. 6).
Source analyses of the ERP effects of valence in the different mood conditions were performed on the difference waves (positive minus negative words) in the 350- to 500-ms and 500- to 650-ms intervals because ERP effects were largest and provided an optimal signal-to-noise ratio. In order to assess whether the method of ERP source analysis would affect the results, we performed both distributed source modeling (minimum norm source estimates) and dipole modeling (for further details see the methods section).
For participants in good mood, distributed source analysis yielded valence-related activity in inferior and ventral parts of the temporal lobe in both the “read” and “generate” conditions (see Fig. 7A). Pronounced right occipito-temporal activity was observed only in the “read” condition. In the “generate” condition, we obtained widespread prefrontal activity with foci in left inferior prefrontal areas as well as in an area close to the frontal midline. For participants in bad mood, valence-related activity was restricted to occipital areas with stronger activity in the right hemisphere. As can be seen from Figure 7(A), the major foci of cortical currents were comparable in both time windows.
Dipole source analysis was performed for the time interval with maximal GFP in each condition. The precise time intervals and the RV of the source models are shown in Figure 7(B). In all conditions, the RV of the source model was less than 10%. In good mood, this analysis revealed sources in the ventro-medial temporal lobe at similar locations in both the “read” and “generate” conditions. In the “read” condition, an additional source in right occipital areas was obtained, whereas in the “generate” condition additional sources emerged in left inferior prefrontal cortex and in an area close to the frontal midline. In bad mood, sources in occipital and temporo-parietal cortex were obtained at similar locations in both the “read” and “generate” conditions. In the “read” condition, an additional pair of sources was placed in superior prefrontal areas, whereas in the “generate” condition an additional source was placed in right inferior prefrontal areas. These analyses show that both methods of source modeling yielded comparable results. Most importantly, they converge with regard to the involvement of ventro-medial temporal lobe structures in participants with good mood.
Summary of the ERP Results Concerning the Effects of Mood and Valence
Between 200 and 650 ms, positive words elicited a less negative potential than negative words at both occipito-parietal and central electrodes. In good mood, ERP differences between positive and negative words emerged already in the interval between 200 and 350 ms. In bad mood, these valence-related ERP effects started later (350–500 ms) and were restricted to occipito-parietal electrodes in this time window. The most striking differences between mood states were obtained at fronto-temporal electrodes. In this scalp region, valence modulated ERPs only in good mood. For participants in good mood, source analyses revealed valence-related activity in ventro-medial temporal areas for both the “read” and “generate” conditions. In the “generate” condition, additional sources in left inferior prefrontal areas and in an area close to the frontal midline were observed. In bad mood, in contrast, valence-related activity spared ventro-medial temporal areas. Instead, in both the “read” and “generate” conditions, we obtained sources in occipital and parietal areas as well as in dorso-lateral prefrontal and right inferior prefrontal areas.
The present study aimed at clarifying the role of mood states during the encoding of emotionally congruent and incongruent material. In line with the predictions of the assimilation–accommodation approach, we found an asymmetric effect of mood congruency only for participants in good mood, starting at 200 ms after stimulus onset, on a centro-parietal negative potential, the N400 ERP component, as well as on fronto-temporal ERPs. Source analysis yielded valence-related activity in inferior and ventral temporal areas as well as in inferior prefrontal and frontal midline areas for good mood. For participants in bad mood, valence-related activity spared ventral temporal areas and was largest in occipital and parietal areas. In the recall data, recall accuracy was highest for generated positive words in good mood compared with the other conditions, confirming the assumption of mood-congruent memory mainly in good mood for generative tasks. As we were mainly concerned with brain activity related to mood-congruent encoding processes, we focus in the Discussion section on the ERP results.
The Role of Ventral Temporal Lobe Structures in Mood-Congruent Memory
According to the assimilation–accommodation approach (Fiedler 1991, 2001), good mood subserves assimilative processes. Assimilative processes involve the activation of stored semantic knowledge structures, which are applied to encode incoming stimuli into episodic memory. These activated knowledge structures are used to transform the new information. As a consequence, meanings of mood-congruent positive words are primed and integrated into the existing semantic context. In line with this assumption, positive words elicited a smaller centro-parietal N400 amplitude than negative words in participants with good mood. As outlined in the Introduction, modulation of the N400 ERP component indexes the activation of semantic knowledge structures. Source analysis revealed valence-related activity in inferior and ventral temporal cortex only for participants in good mood. The importance of ventro-medial temporal lobe structures (parahippocampal cortex, perirhinal cortex, fusiform gyrus) for semantic processing has been demonstrated in neuroimaging studies (Vandenberghe and others 1996; Moss and others 2005) and intracranial ERP recordings (Nobre and McCarthy 1995). It has been suggested that these areas are relevant for the binding and integration of semantic features (Phillips and others 2002; Moss and others 2005). These semantic brain areas also play an important role in episodic memory encoding. Erk and others (2003) observed activity in the ventro-medial temporal lobe during encoding (parahippocampal cortex, fusiform gyrus) that was predictive of subsequent recall performance (subsequent memory effect) when neutral words were encoded within a positive emotional context (see also Wagner and others 1998). Although the spatial resolution of ERPs is too low to identify precisely which parts of the ventro-medial region are activated, our results from source analyses are well in accord with these neuroimaging findings.
Unfortunately, unlike Erk and others, we could not determine the subsequent memory effect due to the limited number of trials per condition (40 trials). Therefore, we restricted our analyses to valence-related activity during the encoding of emotional words, independent of subsequent recall. It thus remains an open question how valence-related activity in the ventro-medial temporal lobe would depend on subsequent recall performance. However, as differences in recall rate for positive and negative words were small, particularly in the “read” condition, it is very unlikely that the observed valence-related activity reflects differential subsequent memory effects.
Positive and negative words differed slightly but significantly with regard to arousal (positive: 0.99; negative: 0.30). Although we cannot entirely rule out that arousal also contributed to the observed interaction between mood and valence, we do not think that this is very likely, for the following reasons: 1) Arousal ratings were generally very low (less than 1 on a scale from 0 to 5) and differences between word categories were small (0.69), in contrast to the valence ratings (±2 on a scale from −3 to +3). Thus, our words were generally low in arousal but showed strong differences in emotional valence. 2) In the behavioral pilot study without mood induction, recall rate was comparable for positive and negative words. If arousal had influenced the results, we would have expected a higher recall rate for the more arousing positive words. 3) In our main experiment, we observed an interaction between mood, valence, and encoding task. It is hard to explain, for any theory of emotion, why arousal effects should be larger in good than in bad mood in a generative encoding task. 4) A previous ERP study (Dillon and others 2006) indicated that arousal specifically affected a late frontal positive ERP component but not the centro-parietal N400, which was modulated in the present study. For all these reasons, our N400 effects for positive and negative words most likely reflect semantic processing of emotional valence, not arousal.
In conclusion, the observation of valence-related activity in the ventro-medial temporal lobe during emotional word encoding for participants in good mood supports the assumption that good mood promotes an assimilative encoding style. Assimilative encoding into episodic memory involves the elaboration of semantic word meaning on the basis of activated mood-congruent semantic knowledge structures, as indexed by ventro-medial temporal lobe activity and N400 effects. In fact, the majority of our participants in good mood reported having encoded the words in an elaborative way (e.g., evaluating whether a trait adjective and a known person match), whereas the majority of the participants in bad mood used a nonelaborative encoding strategy (rote rehearsal). Hence, bad mood supports accommodative processes, thereby conserving incoming stimulus information without transforming it according to activated stored semantic knowledge. Accordingly, we did not observe valence-related ventral temporal lobe activity in bad mood.
The mood induction procedure elicited larger mood changes from baseline for negative than for positive mood so that the observed interaction between mood and valence appears to be mainly driven by negative mood induction. However, this does not compromise our findings because at baseline all participants reported a very positive mood close to ceiling (82 of 100 mm on the visual analogue scale). As a consequence, the happy movies could not change the mood state in the positive direction to the same extent as the sad movies could change it in the negative direction. However, most central to the aim of our study, participants receiving the happy movies were in a much more elevated mood than the participants receiving the sad movies who rated their mood as being more depressed.
Our work focused on mood influences in the encoding phase, in contrast to earlier studies that were more concerned with effects during retrieval (e.g., Maratos and others 2001; Lewis and others 2005). However, we do not deny that mood states also influence recall performance later, during the retrieval stage. An earlier study showed that semantic processes during encoding, as indexed by N400 modulation, do not directly determine later successful recall (Neville and others 1986). This shows that other processes during memory encoding, consolidation, and retrieval also contribute to recall performance.
Our observation of an asymmetric mood-congruency ERP effect present only in good but not in bad mood contrasts with the findings of the earlier ERP study by Chung and others (1996), who found mood-congruent effects on the N400 to story outcomes in both good and bad mood states. However, there are fundamental differences between the studies, which may explain the discrepant findings. The stimuli of our study consisted of trait adjectives of positive and negative emotional valence, which can induce emotion-related processes (e.g., the repression effect described below), whereas Chung and others used emotionally neutral words, which received their emotional meaning only in the context of the story. Furthermore, in the Chung and others study, mood states were induced by a self-suggestive procedure in which participants were required to think of happy or sad life events. Participants were also instructed to generate an emotionally good or bad outcome to the story. Thus, participants were obviously aware of their mood states and were explicitly requested to use their current mood in order to form emotionally biased semantic expectations (see also the discussion in the Chung and others paper). In our study, in contrast, we induced mood states more indirectly by movies and used a cover story regarding the purpose of the experiment. These measures should prevent participants from deliberately using their mood states as memory encoding context. In our participants with bad mood, who predominantly used a nonelaborative encoding style, mood-congruent semantic knowledge structures might be less strongly activated than in the participants of the Chung and others study, who were requested to actively process the story on the basis of their mood. Together with the above-described differences in stimulus material and mood induction procedure, this may explain the asymmetric N400 mood-congruency effect in our study and the symmetric effect in the Chung and others study.
We cannot exclude that demand characteristics imposed by our mood induction procedure and our cover story regarding the purpose of the study might have played a role. Our participants knew that the movies were meant to induce mood states and also had explicit knowledge about the nature of their mood (more elated or depressed). For that reason, it cannot be excluded that participants consciously formed semantic expectancies during word encoding even when not explicitly instructed to do so. However, as our participants were led to believe that the purpose of the study was to investigate the effect of cognitive load on emotions, it is not very likely that they applied this strategy. Furthermore, in our postexperimental questioning, none of our participants reported having used such a strategy.
It could also be argued that the conceptual content of the films, rather than the induced mood states, could account for the observed effects on emotional word encoding. For instance, the sad films (e.g., a person sentenced to death) could convey more complex conceptual content than the happy films (e.g., Charlie Chaplin in a roller skate disco), which interfered with word encoding. Such a criticism concerns not only the present study but all studies on emotion and cognition that use a mood induction procedure (movies, self-suggestion, or hypnosis), because it is always debatable whether the induced mood states or other nonemotional aspects of the induction procedure caused the observed effects on behavior or brain activity. This criticism can never be entirely ruled out because all mood induction procedures are complex experimental manipulations and trigger a variety of cognitive and emotional processes. Nevertheless, it is unlikely that in the present study nonemotional aspects of the movies, such as the conceptual content, can account for the results. Firstly, there was always a minimum time interval of about 4 min between the end of film presentation and the beginning of the first encoding/recall phase (and even about 15 min for the second encoding/recall phase within one experimental block). For that reason, all participants most likely stopped processing the films cognitively and focused on the demanding encoding task. Secondly, it is difficult to explain how the conceptual content of a film—and not the induced emotional mood state, as we suggest—can account for the differential effects of word valence in the 2 mood state groups.
Occipital Activity and the Repression Effect
An unexpected finding was the smaller occipito-parietal N400 and larger LPC amplitude for positive compared with negative words in bad mood. Source analysis yielded in bad mood valence-related activity in occipital areas, but not in ventral temporal areas. Interestingly, similar occipital valence-related activity was also obtained in good mood particularly in the “read” condition.
We attribute this finding to the so-called “repression effect.” At a behavioral level, the repression effect refers to the phenomenon of a performance disadvantage for emotionally negative relative to positive material (Fiedler 1991). This effect fits nicely with our behavioral data from the pilot test, in which we found that positive word fragments were recognized faster than negative word fragments although subsequent recall did not differ. The repression effect has also been observed in lexical decision tasks (Weisbrod and others 1999; Huckauf and others 2003) and in free recall (Zeller 1950). This processing disadvantage for negative material is most likely due to an emotional preference for positive and pleasant over negative and unpleasant information. Positive stimuli signal reward and elicit an approach tendency, whereas negative stimuli signal punishment and are associated with avoidance (see Introduction). At a neural level, neuroimaging studies revealed, in response to aversive stimuli, increased activity in occipital areas and simultaneously decreased activity in the hippocampus and in left inferior prefrontal cortex (Wik and others 1993; Kosslyn and others 1996; Amir and Stewart 1999). This exactly resembles the pattern that we found in our participants in bad mood. We assume that the use of trait adjectives as encoding stimuli in the present experiment, which possess high self-relevance, could have produced a strong repression effect.
The Role of Left Prefrontal and Cingulate Cortex in Mood and Memory
The most striking interaction between mood, valence, and encoding task was observed over fronto-temporal regions in the 350- to 500-ms (N400) and 500- to 650-ms (LPC) time intervals: Fronto-temporal valence-related effects were found only in good mood states and were largest in the “generate” condition within the LPC interval. Due to the proximity of these electrode sites to the eyes, we were concerned that fronto-temporal ERPs could be contaminated by residual ocular activity. Suggestive of horizontal ocular activity, voltages at left and right frontal electrodes appear to simultaneously reverse polarity. In order to rule out this possibility, we took several measures. First of all, when we plotted voltages at the left and right eye channels (F9/F10) against each other, we found that voltages did not synchronously invert polarity but exhibited different time courses. As the polarity inversion is not phase-locked, the observed left/right asymmetry most likely reflects brain activity and is not caused by horizontal eye movements. Furthermore, we reanalyzed our data and rejected trials contaminated with ocular activity instead of correcting them. This analysis replicated the reported interactions between mood, valence, and encoding task. Hence, we can rule out that fronto-temporal effects were induced by residual ocular activity.
When we submitted ERPs to source analysis, we found activity in response to stimulus valence in left inferior prefrontal cortex as well as close to the frontal midline only for participants in good mood in the “generate” condition. Left inferior prefrontal activity has been observed in semantic tasks (e.g., Posner and others 1988; Kiefer and others 1998; Wagner and others 2001) as well as during episodic memory encoding (Wagner and others 1998; Erk and others 2003). Wagner and others (2001) have suggested that prefrontal cortex controls semantic retrieval, a function that is relevant during memory encoding, particularly when encoded words have to be actively generated (for the role of prefrontal cortex during semantic retrieval, see also Thompson-Schill and others 1999; Kiefer and others 2005). Source activity close to the frontal midline might correspond to activation of the dorsal parts of the anterior cingulate, a region known to be involved in executive control (Petersen and others 1988; Posner and Driver 1992; Carter and others 1998). The anterior cingulate is functionally connected with prefrontal cortex; frequently, both regions are conjointly activated in tasks requiring active manipulation of information (Braver and Cohen 2000; Duncan 2001). In line with this interpretation, only participants in good mood, who used elaborative encoding strategies, showed this particular frontal activation pattern. In bad mood, sources were found in other prefrontal regions. Source activity was obtained more superiorly in dorso-lateral prefrontal cortex (“read” condition), possibly reflecting working memory circuits during rote rehearsal. In the “generate” condition, source activity was obtained in right inferior prefrontal cortex, an area implicated in negative emotions (Davidson and Irwin 1999; Dolcos and others 2004; Nitschke and others 2004).
Mechanisms Mediating the Influences of Emotional Mood on Memory Encoding
Based on our data, we propose that mood states influence emotional word encoding through 2 interacting mechanisms. Firstly, mood states trigger different encoding styles (Ashby and others 1999; Fiedler 2001). Mood-dependent encoding strategies are a parsimonious and central explanation for the present results: Good mood promotes the active transformation of new information by applying existing semantic knowledge to incoming information in order to achieve a coherent memory structure (assimilation). Bad mood, in contrast, supports nonelaborative encoding such as rote rehearsal without the active application of semantic knowledge to the incoming information. Accordingly, in bad mood the new information is changed very little during encoding, so that the episodic memory structure has to be altered to fit the new information (accommodation). As a result, participants in a good mood are more likely to employ “deep” semantic encoding strategies (Craik and Lockhart 1972) than participants in a bad mood. Correspondingly, brain areas known to play an important role in semantic retrieval and integration were activated in good mood only (left prefrontal cortex, ventro-medial temporal lobe). Secondly, mood states activate stored mood-congruent semantic knowledge in the ventro-medial temporal lobe. As a result, mood-congruent stimuli are primed and more efficiently encoded into existing knowledge structures than mood-incongruent stimuli (Bower 1981; Lewis and others 2005). However, mood-congruent memory effects seem to depend on an interaction of both mechanisms: The integration of to-be-encoded stimuli into mood-congruent knowledge structures is facilitated by good mood because good mood promotes active manipulation of incoming information on the basis of stored semantic knowledge (i.e., assimilative processes).
Of course, our data do not address the question through which neural mechanisms mood states can trigger different encoding styles. As we have outlined in the Introduction, emotionally positive situations—subtle mood states as in the present study or salient emotional stimuli—signal reward by activating brain circuits such as orbitofrontal cortex and the dopaminergic neurons passing through the nucleus accumbens and projecting to prefrontal cortex and the anterior cingulate. Ashby and others (1999) propose that the creative and elaborative cognitive style in good mood is the result of dopaminergic neuromodulatory action on neurons in the anterior cingulate, which improves cognitive flexibility by facilitating executive attention. In accordance with this proposal, we found in good mood activity in a frontal midline area, presumably in the anterior cingulate. Future research is clearly needed to elucidate the entire processing chain from neuromodulators to behavior in more detail.
Nevertheless, the present demonstration of mood-dependent brain activity during emotional word encoding suggests that mood-congruent memory already originates in the encoding phase. Our findings rule out the possibility that mood-congruent recall reduces to strategic processes during retrieval.
Supported by grants from the University of Ulm Medical School (P.506, P.638) and the German Research Community (DFG Ki 804/1-3) to M.K. The authors thank Tatjana Zimmermann for her help during data acquisition and analysis as well as Susanne Erk and Klaus Hoenig for helpful comments on an earlier version of this manuscript. Conflict of interest: None declared.