The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of the activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effects analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than in the right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level–dependent mismatch responses to words than to pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest the involvement of left superior temporal areas in housing such word-processing neuronal circuits.
Mismatch negativity (MMN), a unique indicator of automatic cerebral processing of acoustic stimuli, has been increasingly utilized for investigating the neural processing of speech and language (Näätänen 2001; Pulvermüller and Shtyrov 2006). MMN, with its major source of activity in the superior temporal auditory cortex, is an evoked brain response elicited by rare (so-called deviant) acoustic stimuli occasionally presented in a sequence of frequent (standard) stimuli (Alho 1995; Näätänen 1995). Importantly, MMN can be elicited in the absence of the subject's attention to the auditory input (Tiitinen et al. 1994; Schröger 1996). It is therefore considered to reflect the brain's automatic reaction to any change in the auditory sensory input. More recently, increased MMN amplitudes in response to native-language phonemes and syllables, and the left-hemispheric lateralization of such activations, led to the conclusion that preexisting long-term memory traces for speech sounds may be activated in a passive oddball paradigm, producing this specific MMN increase (Dehaene-Lambertz 1997; Näätänen et al. 1997; Alho et al. 1998; Shtyrov et al. 2000). Strikingly, it was found that the MMN amplitude increases substantially in the course of training as subjects become more familiar with the experimental stimuli, thus developing memory traces for previously unknown items (Näätänen et al. 1993; Winkler et al. 1999). Further still, we found that the MMN in response to individual words was greater than that for comparable meaningless word-like stimuli (i.e., items obeying the phonological rules of the language) (Pulvermüller et al. 2001). In a series of studies on this topic, we presented subjects with sets of acoustically matched word and pseudoword stimuli and found an increased MMN response whenever the deviant stimulus was a meaningful word (Pulvermüller et al. 2001, 2004; Shtyrov and Pulvermüller 2002b; Shtyrov et al. 2005).
This enhancement, typically peaking at 100–200 ms, is best explained by the activation of cortical memory traces for words, realized as distributed, strongly connected populations of neurones (Pulvermüller and Shtyrov 2006). The lexical enhancement of the MMN, repeatedly confirmed by our group, has also been demonstrated by other groups using various stimulus setups and languages (Korpilahti et al. 2001; Kujala et al. 2002; Sittiprapaporn et al. 2003; Endrass et al. 2004; Pettigrew et al. 2004), leading to the conclusion that, all acoustic factors being equal, the lexical (and possibly semantic) status of the deviant may produce greater neural activation over and above the change-detection MMN. Wunderlich and Cone-Wesson (2001) did not find larger MMNs to consonant-vowel-consonant words (e.g., bad) than to consonant-vowel pseudowords (ba), but the physical stimulus properties were obviously not fully matched; that study focused on psychoacoustic issues instead. Our own studies indicated that the lexical status of the deviant stimulus was relevant for eliciting the MMN, whereas the lexical status of the standard stimulus did not significantly affect MMN amplitude (Shtyrov and Pulvermüller 2002b). However, others have reported that the event-related brain response evoked by the speech standard stimulus is also affected by its lexicality/familiarity (Diesch et al. 1998; Jacobsen et al. 2004, 2005); the most likely reasons for this discrepancy between studies are methodological.
As a result of this growing body of evidence, the pattern of the MMN response to individual word deviants has come to be considered a word's lexicosemantic “signature,” reflecting its memory trace activated in the brain, which offers a unique opportunity to investigate the neural processing of language in a nonattention-demanding, task-free fashion. Indeed, subsequent investigations of language function using the MMN were able to provide a great level of detail on the spatiotemporal patterns of activation involved in early automatic lexical access, semantic category-specific processing, and even syntactic processes (Shtyrov and Pulvermüller 2002a; Pulvermüller and Shtyrov 2003; Pulvermüller et al. 2003, 2004, 2005; Shtyrov et al. 2003, 2004; Menning et al. 2005). These studies have further established the usefulness of the MMN component for the neuroscience of language.
Until now, however, such word-specific MMN responses have been studied exclusively using fast neurophysiological imaging, that is, electroencephalography (EEG) and magnetoencephalography (MEG). These methods, which have provided substantial information on the time course of this phenomenon, are frequently considered to lack sufficient spatial resolution. The more spatially oriented (but substantially less sensitive in the time domain) functional magnetic resonance imaging (fMRI) has so far only rarely been used in MMN research. The few fMRI studies involving the MMN paradigm provided new insights into MMN generation mechanisms, but they used only nonlinguistic stimulus material (Opitz et al. 1999, 2002; Molholm et al. 2005; Rinne et al. 2005; Gomot et al. 2006). Thus, the exact properties of the MMN enhancement to words have not been approached from the hemodynamic imaging perspective. This challenge was addressed by the current study, in which we used fMRI to further investigate the nature of the lexical MMN enhancement to meaningful language elements.
Materials and Methods
Eleven healthy right-handed (handedness assessed according to Oldfield 1971) native monolingual English-speaking volunteers (age 20–31 years, 5 females) with normal hearing and no record of neurological diseases were presented with 4 sets of spoken English language stimuli in 4 separate experimental conditions (see Table 1 and Fig. 1).
|          | Condition 1a       | Condition 1b       | Condition 2a       | Condition 2b       |
| -------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Deviant  | [kIk] kick         | [fIk] (pseudoword) | [pIk] pick         | [fIk] (pseudoword) |
| Standard | [fIk] (pseudoword) | [kIk] kick         | [fIk] (pseudoword) | [pIk] pick         |
Note: All stimuli were maximally matched for their acoustic properties. The standard–deviant contrasts are identical in each pair of conditions, whereas the deviant stimuli in conditions 1a/2a are 2 words and in conditions 1b/2b, a meaningless pseudoword.
The experimental stimuli included 2 short English words: [kIk] (kick) and [pIk] (pick), which were, in 2 separate conditions, contrasted as deviants against a frequent standard pseudoword [fIk]. We recorded multiple repetitions of these stimuli, and additionally of one more word, [hIk], uttered by a female native speaker of English, and carefully selected a combination of the 4 items whose vowels matched in their fundamental frequency (F0) as well as in sound energy and overall duration. To further minimize acoustic differences between the stimuli, which is highly important in MMN experiments, we used an identical final phoneme [k] in each standard and deviant sound. Achieving this requires cross-splicing, that is, replacing the original sounds in the recordings with different ones. To avoid differential effects of early acoustic cues (which would arise should the final plosion be drawn from one of the actual stimuli and copied to the other 2), we chose to take it from a similar word not itself used in the experiment. The sound [k] was therefore cross-spliced from the recording of [hIk] onto the initial segments [kI], [pI], and [fI], which had been separated from their own terminal [k]-plosives. The pause of about 80 ms preceding the final stop consonants in this type of English monosyllabic stimuli (Fig. 1) makes such stops an ideal target for cross-splicing, and we were able to smoothly replace the phonemes, producing natural-sounding stimuli with precisely controlled acoustic features. The sounds were normalized to the same loudness by matching their root-mean-square (RMS) power; to further reduce acoustic differences between the stimuli, RMS power was normalized separately for the initial consonant–vowel segments and for the final stops. Such a meticulous stimulus-preparation procedure is essential for MMN experiments because the MMN is highly sensitive to acoustic differences between the standard and deviant stimuli.
All stimuli were 330 ms in duration. For the analysis and production of the stimuli, we used the Cool Edit 2000 program (Syntrillium Software Corp., Phoenix, AZ).
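The separate RMS normalization of the initial consonant–vowel segments and the final stops described above can be sketched as follows (a minimal Python illustration with synthetic stand-in waveforms; the target RMS value and segment lengths are arbitrary assumptions, not values from the study):

```python
import numpy as np

def rms_normalize(segment, target_rms=0.1):
    """Scale an audio segment so that its root-mean-square power equals target_rms."""
    rms = np.sqrt(np.mean(segment ** 2))
    return segment * (target_rms / rms)

# Synthetic stand-ins for the recorded sounds (not the actual stimuli):
# a consonant-vowel segment (e.g., [kI]) and a final [k] plosive.
rng = np.random.default_rng(1)
cv_segment = np.sin(2 * np.pi * 200 * np.linspace(0, 0.25, 11025))
plosive = 0.3 * rng.standard_normal(3307)

# Normalize the two parts separately (as in the text), then concatenate
# the shared, cross-spliced plosive onto the initial segment.
cv_norm = rms_normalize(cv_segment)
plosive_norm = rms_normalize(plosive)
stimulus = np.concatenate([cv_norm, plosive_norm])
```

Normalizing the two parts separately keeps both the vowel portions and the identical final plosive at the same loudness across all stimuli, rather than merely equating whole-stimulus power.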
In condition 1a (see Table 1 for the complete stimulus design), we used the word “kick” as the deviant stimulus and the pseudoword [fIk] as the standard. In condition 2a, the word “pick” served as the deviant stimulus against the same pseudoword standard. In conditions 1b and 2b, the reversed design was used, that is, the deviant stimulus was the pseudoword, whereas the words were presented as standard stimuli. These additional conditions were run in order to obtain responses to the same acoustic contrasts with the reversed lexical relationship between the standard and deviant stimuli, which allowed us to disentangle purely acoustic effects (which should arise from basic phonetic differences between the stimuli) from lexicality effects. This design is based on our previous research, which suggested that the deviant stimuli are important for activating word memory traces, whereas the nature of the standards is of smaller significance (Shtyrov and Pulvermüller 2002b). Thus, the standard–deviant acoustic–phonetic contrast, the critical variable determining the acoustic MMN (Näätänen and Alho 1997), was identical in each pair of conditions, whereas the deviant stimuli were either words (1a/2a) or a pseudoword (1b/2b). All 4 experimental conditions were performed with every subject, their order counterbalanced across the subject group.
In each condition, the stimuli were presented in 90 trains of 7 sounds each, with an 800-ms interstimulus interval within trains (onset to onset, see Fig. 2). The interval between trains was 3.4 s. Fifty percent of the trains consisted of standard stimuli only; the remaining 50% contained 6 standard stimuli and one deviant, which was placed in positions 4–7 from the train onset, that is, commencing at 2.8, 2.0, 1.2, or 0.4 s before image acquisition (in pseudorandom fashion, with equal probability for each position). This was done to position the acoustic deviance within a few seconds of image acquisition, with some variability, in order to maximize the chances of registering its hemodynamic response (which has been suggested to have a delay in the range of 2–3 s for the auditory cortices, see e.g., Belin et al. 1999). The echo planar images (EPIs) were obtained after each stimulus train to ensure that the stimuli were played in silence and not masked by the fMRI scanner noise. This way, the probability of a deviant's occurrence in an allowed position was 12.5%, and the overall proportion of deviant stimuli was one in 14 (i.e., ∼7.1%), falling within the usual range for MMN experiments (Sinkkonen and Tervaniemi 2000). The stimuli were presented binaurally at 70 dB SPL via headphones connected to a PC running a scanner-driven stimulation program (Medical Research Council Institute for Hearing Research, Nottingham, UK), which was triggered by the image acquisition pulse.
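The train structure and the resulting deviant probabilities can be checked with a short sketch (Python; the 'S'/'D' coding and this particular randomization scheme are illustrative assumptions rather than the actual stimulation code):

```python
import random

def build_trains(n_trains=90, train_len=7, deviant_positions=(4, 5, 6, 7)):
    """Build an oddball stimulus sequence: half of the trains contain standards
    ('S') only; the other half carry a single deviant ('D') in a pseudorandom
    position 4-7, so positions 1-3 after the scanner pulse are always standards."""
    trains = []
    for i in range(n_trains):
        train = ['S'] * train_len
        if i % 2 == 1:  # 50% of trains carry one deviant
            train[random.choice(deviant_positions) - 1] = 'D'
        trains.append(train)
    random.shuffle(trains)
    return trains

trains = build_trains()
n_deviants = sum(t.count('D') for t in trains)
n_stimuli = sum(len(t) for t in trains)

# 45 deviants in 90 x 4 = 360 allowed slots -> 12.5%;
# 45 deviants among 630 stimuli -> 1 in 14, i.e., ~7.1%.
prob_in_allowed = n_deviants / (len(trains) * 4)
overall_rate = n_deviants / n_stimuli
```

The two rates reproduce the figures quoted in the text: a 12.5% chance of a deviant in any allowed slot and an overall deviant proportion of one in 14.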
Subjects were positioned in a 3-T Bruker MR system and instructed to watch a silent video film of their own choice (without subtitles, to avoid additional language-related activity and extraneous eye movements) and to ignore any auditory signals while EPIs were obtained using a head coil. The images were obtained in the 3.4-s gaps between stimulus trains to avoid acoustic interference of the scanner noise with the stimuli (so-called “sparse imaging,” Hall et al. 1999); in addition to the 90 test trains, 6 scans were obtained following silent periods equal in length to the trains (null events). EPI sequence parameters were repetition time 8.6 s, echo time 115 ms, and flip angle 90 degrees. The functional images consisted of 21 slices covering the whole brain (slice thickness 4 mm, interslice distance 1 mm, and in-plane resolution 1.6 × 1.6 mm).
Imaging data were processed using SPM2000 software (Wellcome Department of Cognitive Neurology, London, UK). The magnetic resonance imaging (MRI) scanner used here acquires “interleaved” EPIs to reduce the effect adjacent slices have on each other: first the odd slices are collected, then the even ones. Images were corrected off-line for this interleaved slice order. In addition, the different time delays between each slice and the stimulation/scanning onset constitute the so-called slice-timing problem, which was addressed here by realigning all scans to the first image using full sinc interpolation (Henson et al. 1999). The advantage of temporal interpolation is that inferences concerning differences in hemodynamic response function (HRF) height between slices are less likely to be confounded by different slice acquisition times. Phase maps were used to correct for inaccuracies resulting from inhomogeneities in the magnetic field (Jezzard and Balaban 1995; Cusack et al. 2003). The EPI images were coregistered to structural T1 images using a mutual information coregistration procedure. This procedure uses a basic concept from information theory, mutual information (relative entropy), as a matching criterion, measuring the statistical dependence or information redundancy between the image intensities of corresponding voxels in the matched images, which is assumed to be maximal when the images are geometrically aligned. Such maximization is considered a very general and powerful criterion because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved (Maes et al. 1997). The structural MRI was normalized to the 152-subject T1 template of the Montreal Neurological Institute. The resulting transformation parameters were applied to the coregistered EPI images. During spatial normalization, images were resampled at a spatial resolution of 2 × 2 × 2 mm.
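The mutual information criterion used for the coregistration can be illustrated with a minimal joint-histogram estimate (Python/NumPy; a toy sketch on synthetic images, not the actual SPM implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between the intensities of two images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI peaks when the images are geometrically aligned: an image shares far more
# intensity information with itself than with a spatially scrambled copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
scrambled = rng.permutation(img.ravel()).reshape(64, 64)
```

Because no assumption is made about the form of the intensity dependence, the same criterion applies across modalities, for example when matching EPI to T1 images.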
Finally, all normalized images were spatially smoothed with a 12-mm full-width half-maximum Gaussian kernel, and single-subject statistical contrasts were computed using the general linear model (Friston et al. 1998). Responses to the deviant stimuli were contrasted against those to the standard stimuli from the same experimental condition (i.e., word deviants vs. pseudoword standards and vice versa). Group data were analyzed with a random-effects analysis in SPM2000; only effects surviving false discovery rate (FDR) correction are reported as statistically significant.
A brain locus was considered activated in a particular condition if its voxels passed the threshold of P = 0.05 (FDR-corrected). For region-of-interest (ROI) analysis of areas differentially activated by specific stimulus categories, we computed the average parameter estimates over voxels for each individual subject. This was done using the Marsbar software utility (http://marsbar.sourceforge.net), an SPM-integrated toolbox that provides an extensive set of routines for ROI analysis (Brett et al. 2002). The blood oxygenation level–dependent (BOLD) activation values thus extracted from contrasts between deviant and standard blocks were subjected to an analysis of variance (ANOVA) with the factors stimulus type (word vs. pseudoword deviant) and hemisphere (left vs. right).
All subjects gave their written consent to participate in the experiments and were paid for their participation. The experiments were performed in accordance with the Helsinki Declaration. Ethical permission for the experiments was issued by the Local Research Ethics Committee, Cambridge, UK.
Results

The first random-effects analysis pooled all auditory stimuli and contrasted them against silence. This revealed strong auditory activation in the superior temporal (most likely primary and secondary auditory) cortices of both hemispheres (P ≤ 0.001, FDR-corrected, see Fig. 3), indicating a robust auditory response even in a passive condition under strong attentional withdrawal and concurrent visual stimulation.
In the next analysis step, we contrasted deviant and standard blocks, following the standard approach to MMN analysis in event-related potential (ERP) experiments. Because none of the deviant trains with the deviant sound in position 7 (i.e., 400 ms before the EPI scan onset) produced significant results (most likely owing to insufficient time for the hemodynamic response to develop), these were excluded from the analysis, and the remaining deviant trains, with deviants in positions 4–6, were contrasted against the standards. This analysis showed that the activation patterns were clearly distinct between the word and pseudoword blocks (Fig. 4). Whereas the word-elicited BOLD mismatch response in the temporal lobes (encompassing the superior temporal gyrus and possibly extending into the superior temporal sulcus and middle temporal gyrus) was significant even after the correction for search volume (P = 0.025, FDR-corrected, see also Table 2) in both hemispheres, the pseudoword-elicited response did not survive the correction procedure (corrected P > 0.13). This advantage of words over pseudowords in producing hemodynamic mismatch responses appeared more pronounced in the left than in the right hemisphere.
| P (FDR-corrected) | T | Z | P (uncorrected) | x (mm) | y (mm) | z (mm) |
To analyze this laterality further, we directly compared activity in the left and right temporal areas for the different stimulus types. To this end, from the deviant–standard contrasts we extracted average parameter estimates over voxels within a 15-mm sphere around the center of Heschl's gyrus in each subject's left and right hemispheres (based on Morosan et al. 2001) and submitted these to an ANOVA for statistical verification. This volume included the primary auditory areas and adjacent cortex attributable to the auditory belt and parabelt areas (Rauschecker 1998; cf. also Gutschalk et al. 2002; Patterson et al. 2002). The analysis confirmed differential activation, producing a significant hemisphere-by-stimulus type interaction (F1,10 = 5.97, P < 0.035) that indicated stronger BOLD mismatch responses to words than to pseudowords in the left but not in the right hemisphere’s auditory cortex (Fig. 5).
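A spherical ROI of this kind can be sketched as a voxel mask (Python/NumPy; the volume shape, voxel size, and center coordinate below are illustrative assumptions, not the actual Heschl's gyrus coordinates):

```python
import numpy as np

def sphere_mask(shape, center_mm, radius_mm, voxel_size_mm=2.0):
    """Boolean mask of voxels whose centers lie within radius_mm of center_mm.
    Assumes isotropic voxels and both coordinates given in the same mm space."""
    coords = np.indices(shape).reshape(3, -1).T * voxel_size_mm
    dist = np.linalg.norm(coords - np.asarray(center_mm), axis=1)
    return (dist <= radius_mm).reshape(shape)

# Toy volume with a 15-mm sphere, matching the 2-mm resampled voxel size;
# per subject, the ROI average would then be parameter_estimates[mask].mean().
mask = sphere_mask((40, 40, 40), center_mm=(40.0, 40.0, 40.0), radius_mm=15.0)
n_voxels = int(mask.sum())
```

Averaging the deviant–standard parameter estimates over such a mask, per subject and hemisphere, yields the values entered into the hemisphere-by-stimulus type ANOVA.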
To assess possible lexicality effects on the standard stimuli (see Introduction), we also directly contrasted the activation following the standard blocks of different types. This, however, did not produce any significant results.
Discussion

In the present study, we used fMRI to register the hemodynamic counterpart of auditory MMN responses elicited by words presented among pseudowords, or by pseudowords presented among words, in a passive oddball paradigm. To minimize acoustically related effects, phonetic stimulus properties and acoustic contrasts were carefully matched across conditions. In the passive oddball paradigm, the subjects were instructed to ignore the auditory stimuli and were additionally distracted from the stimulation by watching a self-selected video. The stimulation was thus similar to that usually employed in MMN studies using EEG/MEG, with the exception that it was periodically interrupted by the fMRI scanner noise. Sparse imaging was applied to ensure that the stimuli were clearly audible and that none of their features were masked by the scanner noise. To minimize possible effects of the resulting pause and to reinstate the status of the standard stimulus, which serves as the basis of deviance detection by determining the context of the sound sequence (Sussman et al. 2003), the first few stimuli after the scanner pulse were always standards.
In spite of the strong concurrent visual stimulation and the nonattend design, all stimuli produced clear activation in the vicinity of the auditory cortex in both temporal lobes. This indicates that, as in EEG and MEG experiments, passive auditory presentation combined with a visual distraction task can be used to study auditory processing with fMRI.
Furthermore, by contrasting the standard and deviant blocks, we were able to register a difference between the BOLD activations for deviants and standards, that is, a putative hemodynamic analogue of the MMN. Such an interpretation is supported by earlier studies that succeeded in recording MMN-like differences in fMRI using nonspeech stimuli (Opitz et al. 1999, 2002; Molholm et al. 2005; Gomot et al. 2006). (Note that the term “MMN” is applied to fMRI results only to highlight this indirect putative link between the neural processes and the BOLD activations; the response polarity implied by the term has no counterpart in the fMRI domain.) Admittedly, an immediate correspondence between hemodynamic and EEG/MEG data cannot be unambiguously established at present, and it is conceivable that the usually registered electrophysiological MMNs and MMN-like hemodynamic phenomena are not directly related. The use of a similar stimulation paradigm and the similarity in the result patterns suggest that the current fMRI data may be related to the same neural processes that underlie electric/magnetic MMNs to linguistic material; such analogies should, however, be treated with caution. To gain stronger evidence for this correspondence, multimodal studies using combined multichannel EEG/MEG and fMRI recordings and correlation analyses will be necessary.
An important difference between this and the earlier hemodynamic MMN studies is the more subtle nature of the contrasts between the standard and deviant stimuli: previously, event-related designs similar to electrophysiological ERP studies were used, estimating hemodynamic responses after each individual stimulus. Here, with the goal of minimizing acoustic disturbance from scanner noise, we obtained EPIs after entire stimulus trains. The only difference between the standard and deviant trains, each consisting of 7 sounds, was a single deviant auditory event; the remaining stimuli were identical (i.e., all standards) in both train types. The differences were thus minimal. Furthermore, any difference-related BOLD response was most likely smeared further by the variable position of the individual deviant events, an inherent feature of the MMN design, which requires random, unexpected changes for response elicitation. Still, in spite of such minute, brief, and variable acoustic differences, the deviant–standard contrasts produced hemodynamic mismatch responses. Again, this happened under distraction conditions, during concurrent visual input, in the confined and uncomfortable environment of the fMRI scanner. This implies a certain degree of robustness of the BOLD mismatch response.
This BOLD correlate of the MMN activation was, however, distinct for the subblocks involving word and pseudoword deviant stimuli. Most notably, only the word deviants produced mismatch responses that survived the FDR correction for search volume. This indicates that the word-elicited MMNs are associated with more robust hemodynamic activation than those evoked by nonword deviants. This is remarkably similar to the phenomenon of lexical MMN enhancement known from previous EEG/MEG studies, which demonstrated that the MMN in response to individual words is greater than that for comparable meaningless pseudowords when the acoustic contrasts are kept the same (Korpilahti et al. 2001; Pulvermüller et al. 2001, 2004; Shtyrov and Pulvermüller 2002b; Sittiprapaporn et al. 2003; Endrass et al. 2004; Pettigrew et al. 2004). The standard–deviant acoustic contrast, the critical variable determining the mismatch response (Näätänen and Alho 1997), was identical between the conditions; it is therefore unlikely that the difference in brain responses was due to acoustic factors. Instead, it appears likely that the lexicosemantic difference between the deviant stimuli (meaningful words vs. a senseless pseudoword) contributed to this dissociation. This is in line with previous research demonstrating that oddball brain responses may reflect lexicosemantic features of the stimuli (Pulvermüller and Shtyrov 2006). Activation loci in superior and middle temporal areas have also previously been reported to contribute to lexical and semantic processing (Price 2000; Salmelin et al. 2000; Scott and Johnsrude 2003; Hickok and Poeppel 2004). It therefore appears that the present activation pattern may be related to lexicosemantic processing of the stimuli rather than to the classical change-detection MMN mechanisms, although the exact contributions of the 2 processes to linguistic oddball responses remain to be investigated.
To assess possible lexicality effects on the standard stimuli, we also directly contrasted responses to the standard word and pseudoword sequences but found no significant effects. This indicated that the differences reported here are due to additional MMN-like BOLD activation to rare deviants rather than to standard responses. Although in our view there is no compelling reason to refute lexical enhancement for the standard stimuli, which serve as the basis of deviance detection by determining the context of sound sequence (Sussman et al. 2003), such an enhancement is rarely seen (Jacobsen et al. 2004, 2005). The exact reason for the absence of such effects in standard responses, which may be of methodological nature, cannot be confirmed at this stage and remains to be investigated.
Finally, the ROI analysis indicated that the lexical mismatch response enhancement is specific to the left hemisphere: comparison of activations in the left and right temporal lobes showed that the word-evoked hemodynamic MMNs were significantly more lateralized to the left than the pseudoword ones. This supports the linguistic nature of the BOLD mismatch responses recorded here. The left-hemispheric dominance of language processing has been repeatedly confirmed by a number of imaging and clinical studies (for review, see e.g., Tervaniemi and Hugdahl 2003). Importantly, however, the current data support the view that this asymmetry reflects the left hemisphere's specific ability to store memory traces for familiar language elements regardless of their exact acoustic/phonological makeup. The data confirm our earlier MEG results which, using different stimuli and a different language, showed a clear lateralization only for MMNs elicited by meaningful words as opposed to a phonologically similar but meaningless pseudoword (Shtyrov et al. 2005). Our current result thus also advocates the view that language laterality is not merely bound to acoustic or phonological features of speech. Instead, only known, previously learnt language items seem to exhibit such lateralized activation, suggesting that this laterality is linked to the processes of learning and memory trace formation for such items rather than to their physical or phonological properties.
We found this mismatch-elicited activation to be most significant in the superior temporal gyrus, possibly extending into the superior temporal sulcus (STS) and middle temporal gyrus. Thus, the extended area involved in generating the observed responses likely includes the primary auditory, belt, and parabelt areas, which is consistent with earlier findings on the brain basis of both phonological and lexicosemantic processing (e.g., Poeppel and Hickok 2004; Pulvermüller and Shtyrov 2006; Uppenkamp et al. 2006), all of which used different stimulation protocols. The activation of supratemporal and middle temporal areas has been linked to the processing of meaningful language by a range of studies exploiting different imaging modalities (Price 2000; Salmelin et al. 2000; Newman and Twieg 2001; Rissman et al. 2003; Scott and Johnsrude 2003; Poeppel et al. 2004), even though there is still debate about the exact loci involved. Our data thus provide further support for lexicosemantic processing in the superior temporal and middle temporal cortices. Even though STS activation has been seen in response to phonemes per se (e.g., Uppenkamp et al. 2006), the STS is not known to become activated in oddball paradigms or to be sensitive to differences between phonemes, which makes the lexicosemantic interpretation of its activation here more likely than one based on phonological or change-detection mechanisms.
Unlike some previous studies (including our own), we did not find prefrontal/inferior frontal generators of the MMN here (Alho 1995; Näätänen and Alho 1995; Opitz et al. 2002; Shtyrov and Pulvermüller 2002a; Pulvermüller and Shtyrov 2003; Pulvermüller et al. 2003, 2005; Rinne et al. 2005). As nearly all previous studies indicated that the frontal MMN sources may be more variable and exhibit a smaller scale of activation, the subtle acoustic contrasts employed here may have been insufficient to register any putative frontal activity. Another potential explanation is that the HRF for auditory stimuli may have a substantially longer delay in the frontal areas than the particularly early HRF maximum in temporal areas (Belin et al. 1999) and thus could not be registered with the stimulus timing used here. These tentative explanations cannot, however, be unequivocally supported or refuted by the current data and remain to be addressed in future research.
Acknowledgments
We wish to thank Olaf Hauk, Ingrid Johnsrude, Gary Chandler, Christine Schmalz, Adrian Owen, Debbie Davis, Ruth Bisbrown-Chippendale, Gloria Stocks-Gee, Vicky Lupson, and William Marslen-Wilson for their contribution at different stages of this work. We are also very grateful to 2 anonymous referees for their helpful comments and constructive critique. This work was supported by the Medical Research Council, UK. Conflict of Interest: None declared.
- electrophysiological studies
- word processing
- temporal lobe
- memory, long-term
- functional magnetic resonance imaging
- right cerebral hemisphere
- blood oxygen level dependent
- mismatch negativity
- auditory processing