We report evidence for a context- and not stimulus-dependent functional asymmetry in the left and right human auditory midbrain, thalamus, and cortex in response to monaural sounds. Neural activity elicited by left- and right-ear stimulation was measured simultaneously in the cochlear nuclei, inferior colliculi (ICs), medial geniculate bodies (MGBs), and auditory cortices (ACs) in 2 functional magnetic resonance imaging experiments. In experiment 1, pulsed noise was presented monaurally to either ear, or binaurally, simulating a moving sound source. In experiment 2, only monaural sounds were presented. The results show a modulation of the neural responses to monaural sounds by the presence of binaural sounds at a time scale of tens of seconds: In the absence of binaural stimulation, the left and right ICs, MGBs, and ACs responded more strongly to stimulation of the contralateral ear. When blocks of binaural stimuli were interspersed in the sound sequence, the contralateral preference vanished in those structures in the right hemisphere. The resulting hemispheric asymmetry was similar to the asymmetry demonstrated for spatial sound processing. Taken together, the data demonstrate that functional asymmetries in auditory processing are modulated by context. The observed long time constant suggests that this effect results from a “top–down” mechanism.
Complex natural stimuli appear to be processed preferentially in either the left or the right human auditory cortex (AC). For instance, fast temporal modulations, as found in speech, seem to be preferentially processed in the left AC, whereas fine-grained frequency analysis has been found to engage the right AC more strongly than the left (Zatorre and Belin 2001; Zatorre and others 2002; Tervaniemi and Hugdahl 2003). Converging evidence from lesion and functional magnetic resonance imaging (FMRI) studies suggests a hemispheric asymmetry in auditory spatial processing as well: Zatorre and Penhune (2001) demonstrated that damage to the right AC differentially impairs sound localization performance. Consistent with that suggestion, a recent FMRI study showed that the right AC responds to perceived sound movement in both acoustic hemifields, whereas the left AC responds predominantly to movement in the contralateral hemifield (Krumbholz, Schonwiesner, von Cramon, and others 2005). This asymmetry in human auditory spatial information processing mirrors the known asymmetry in human visual spatial information processing (for a review, see Marshall and Fink 2001; Halligan and others 2003).
The role of the subcortical auditory pathway in the formation of lateralized processing is unknown. Previous studies suggest that response asymmetries in subcortical auditory structures exist and may contribute to cerebral lateralization. Studies of brain stem auditory-evoked potentials (Levine and McGaffigan 1983; Levine and others 1988) reported a rightward asymmetry during monaural click-train stimulation: Stimulation of the right ear elicited larger brain stem responses than stimulation of the left ear, suggesting an increased number of active neurons or increased firing synchrony in the brain stem structures along the afferent auditory path from the right ear to the left AC. The authors of those studies related the rightward asymmetry in the brain stem responses to the left hemisphere dominance for speech processing. Other studies reported asymmetries in peripheral auditory structures. In particular, the magnitude of active cochlear amplification, assessed by click-train–evoked otoacoustic emissions, appears to be greater in the right than in the left cochlea (Khalfa and Collet 1996; Khalfa and others 1998). Again, the authors linked this peripheral asymmetry to speech-related asymmetries in the cerebral hemispheres. However, to support this hypothesis, it would have been necessary to demonstrate asymmetrical cortical responses to the acoustically simple stimuli employed. Indeed, asymmetrical responses of the AC to simple sounds (amplitude-modulated sinusoids) were found with FMRI (Devlin and others 2003). In that study, sinusoids presented monaurally to the left or right ear yielded a greater blood oxygen level–dependent (BOLD) response in the left than right primary AC, irrespective of which ear was stimulated. The authors suggested that this asymmetrical response may indicate a generic left hemisphere dominance for auditory processing.
Whether auditory processing in humans is by default lateralized to the speech-dominant side of the brain is a crucial question in understanding auditory function. An immediately related question is whether functional lateralization is confined to the cerebral cortex or can be detected in subcortical auditory structures as well. The present study addresses both questions by simultaneously measuring cortical and subcortical responses to simple stimuli in the context of auditory spatial processing, which has been shown to produce a cortical activation asymmetry that is independent and different from the lateralization of speech processing.
Materials and Methods
A total of 20 subjects (8 males) between 23 and 32 years of age, with normal, age-appropriate audiograms and no history of hearing disorder or neurological disease, participated in the 2 experiments after having given informed consent. All subjects were right handed and attained a score of 100% in the Edinburgh Handedness Questionnaire (Oldfield 1971). The experimental procedures were approved by the local ethics committee. Four subjects participated in both experiments; thus, 12 subjects participated in each experiment.
Stimuli and Experimental Protocol
The acoustic stimuli were trains of noise bursts; the same stimuli elicited a reliable activation asymmetry in a previous experiment on auditory spatial processing (Krumbholz, Schonwiesner, von Cramon, and others 2005). The noise bursts had a duration of 50 ms each and were presented at a rate of 10/s. They were continuously generated afresh and filtered between 0.2 and 3.2 kHz (Tucker Davis Technologies [Alachua, FL], System 3). The first experiment comprised 2 monaural, 2 binaural, and a silent condition (Fig. 1).
In the monaural conditions, trains of noise bursts were played separately to the left or right ear. This yielded the conditions monaural right stimulation (mon right) and monaural left stimulation (mon left), which were used to assess functional asymmetries in auditory structures. The binaural conditions were included to generate the context of auditory spatial processing. In one binaural condition (bin central), identical noise bursts were played to both ears simultaneously, generating the perception of a stationary sound source in the center of the head. In the other binaural condition (bin move), the signals to the left and right ear were dynamically time shifted relative to each other (in the range of ±1.2 ms), a manipulation that is perceived as a sound moving back and forth between the ears. This manipulation differs from a classical binaural beat stimulus: we used pulsed noise instead of sine tones as the carrier. The AC is known to respond asymmetrically to sound source motion in the left and right acoustic hemifield (Baumgart and others 1999; Warren and others 2002; Krumbholz, Schonwiesner, von Cramon, and others 2005). In the present study, the binaural sound conditions merely provided the context of auditory spatial processing for the listeners; the brain responses during the binaural conditions are not analyzed here. They were used to assess binaural interaction in the auditory system, and those results are presented elsewhere (Krumbholz, Schonwiesner, Rubsamen, and others 2005). The second experiment was identical to the first, with the exception that all binaural conditions were replaced by silent baseline conditions (when the sequence called for two or more consecutive silent blocks, only one was presented). To minimize eye movements in the direction of the sounds and ensure a constant state of alertness throughout the experiment, subjects performed a visual task.
They fixated a cross at the midpoint of the visual axis and were instructed to press a button upon each occurrence of the letter “Z” in 2 random sequences of one-digit numbers, presented to the left and right of the fixation cross. The timing of the visual task was unrelated to the sound stimulation. The numbers were presented once every 2 s for 50 ms. This is a relatively easy task (all subjects attained >90% correct responses), but it requires constant attention and monitoring of the fixation point.
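The bin move manipulation described above can be illustrated with a minimal sketch. The sampling rate, the sinusoidal sweep of the interaural delay, and the interpolation scheme are our assumptions for illustration (the text specifies only pulsed noise and time shifts of up to ±1.2 ms), and the 0.2–3.2 kHz bandpass filtering step is omitted:

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed; not specified in the text)

def noise_burst_train(duration_s, burst_ms=50, rate_hz=10, fs=FS, seed=0):
    """Train of 50-ms noise bursts presented at 10/s, generated afresh.
    (The 0.2-3.2 kHz bandpass filtering step is omitted here.)"""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    out = np.zeros(n)
    burst_len = int(burst_ms / 1000 * fs)
    period = int(fs / rate_hz)
    for start in range(0, n - burst_len, period):
        out[start:start + burst_len] = rng.standard_normal(burst_len)
    return out

def moving_binaural(mono, max_itd_ms=1.2, sweep_hz=0.5, fs=FS):
    """Dynamically time-shift the right channel relative to the left
    (up to +/-1.2 ms), heard as a sound moving between the ears.
    The sinusoidal sweep rate is an assumption for illustration."""
    t = np.arange(len(mono)) / fs
    itd_s = max_itd_ms / 1000 * np.sin(2 * np.pi * sweep_hz * t)
    right = np.interp(t - itd_s, t, mono)  # fractional delay by interpolation
    return np.stack([mono, right])  # shape (2, n): left and right channels

stereo = moving_binaural(noise_burst_train(2.0))
```

A time shift of ±1.2 ms exceeds the physiological interaural-delay range, which is why the percept traverses the full span between the two ears.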
The auditory stimuli were presented through magnetic resonance–compatible electrostatic headphones (Sennheiser model HE 60), which were fitted into industrial ear protectors that passively shielded the subjects from the scanner noise (Bilsom model 2452). The system was built according to the guidelines of the MRC Institute of Hearing Research sound system (Palmer and others 1998). The stimuli were presented at 85-dB sound pressure level (relative to 20 μPa), but in 4 subjects, the intensity was reduced to 75-dB sound pressure level because of discomfort due to the high subjective loudness. “Sparse imaging” (Edmister and others 1999; Hall and others 1999) was used to minimize the effects of the scanner noise on measured brain responses to the experimental stimuli. This technique takes advantage of the lag of the hemodynamic response by introducing a gap between consecutive image acquisitions that allows the response to the scanner noise to decay to almost resting level before the next image is acquired. The stimuli are presented during the gap between image acquisitions, so that each scan records primarily stimulus-related hemodynamic responses (indicated in Fig. 1, lower panel right; diagrams of measured hemodynamic responses can be found in Hall and others 1999). We used a gap of about 8.4 s. Including 2.1 s image acquisition time at the beginning of every repetition, the average repetition time (TR) was 10.5 s. Cardiac gating (Guimaraes and others 1998) was used to minimize motion artifacts in the brain stem signal resulting from pulsation of the basilar artery. The functional images were triggered 300 ms after the R-wave in the electrocardiogram, when the cardiac cycle is in its diastolic phase. The experimental conditions were presented in blocks of 5 trials. In experiment 1, 4 sound blocks containing the 4 sound conditions in pseudorandom order were alternated with a single silence block.
In experiment 2, sound and silence blocks were presented in alternation. A total of 250 images (corresponding to 50 epochs) were acquired per subject in each experiment.
FMRI Data Acquisition
BOLD contrast images were acquired with a 3-T Bruker Medspec whole-body scanner using gradient echo planar imaging (average TR = 10.5 s, echo time [TE] = 30 ms, flip angle = 90°, acquisition bandwidth = 100 kHz). The functional images consisted of 28 ascending slices with an in-plane resolution of 3 × 3 mm, a slice thickness of 3 mm, and an interslice gap of 1 mm. Slices were oriented along the line connecting the anterior and posterior commissures and positioned so that the lowest slices covered the cochlear nucleus (CN) just below the pons. The slices were acquired in direct temporal succession in 2.1 s. A high-resolution structural image was acquired from each subject using a 3-dimensional modified driven equilibrium Fourier transform sequence (Ugurbil and others 1993) with 128 slices of 1.5-mm each (field of view = 25 × 25 × 19.2 cm, data matrix 256 × 256, TR = 1.3 s, TE = 10 ms). To assist registration, a set of T1-weighted echo planar imaging slices was acquired using the same parameters as for the functional images (inversion time = 1200 ms, TR = 45 s, 4 averages).
The data were analyzed with the software package Leipzig Image Processing and Statistical Inference Algorithms (Lohmann and others 2001). The raw functional images of each listener were carefully checked for baseline artifacts and excess head motion. The following preprocessing steps were performed: baseline correction (0.0019-Hz high-pass filter), correction for remaining head motion (<2 mm), and transformation into Talairach space. The normalized functional images were then spatially smoothed with Gaussian kernels of 3- and 10-mm full-width half-maximum to optimize for the signals from the brain stem and the cortex, respectively. The extent of auditory structures in the brain stem is only a few millimeters, and their location with respect to macroanatomical landmarks varies little across individuals. The detection of auditory activity in the brain stem can thus be optimized with a small smoothing kernel. In contrast, auditory cortical regions are comparatively large, and their boundaries exhibit a considerable interindividual variability with respect to macroanatomy (Rademacher and others 2001), warranting a larger smoothing kernel. Importantly, if the smoothing kernel matches the size of the activation, then an activity measure at a local maximum contains activity information from the surrounding activated area and is, theoretically, the best characterization of activity in that area. The image time series of the group of 12 subjects in each experiment, comprising a total of 3000 volumes, was analyzed with a fixed-effects general linear model. Experimental conditions were modeled as boxcar functions convolved with a generic hemodynamic response function including a response delay of 6 s. The height threshold for activation on statistical parameter maps (SPMs) was z = 3.1 (P ≤ 0.001 uncorrected).
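The dual-kernel smoothing step can be sketched as follows. This is an illustration only: the original analysis used the LIPSIA package, whereas the sketch uses scipy, a synthetic volume, and an assumed isotropic 3-mm voxel size. The key arithmetic is the conversion from full-width half-maximum to the Gaussian standard deviation in voxel units:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

VOXEL_MM = 3.0  # voxel size; assumed isotropic for this sketch

def smooth_fwhm(volume, fwhm_mm, voxel_mm=VOXEL_MM):
    """Gaussian spatial smoothing specified by full-width half-maximum.
    FWHM = sigma * 2*sqrt(2*ln 2), so sigma (in voxels) is derived first."""
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    return gaussian_filter(volume, sigma=sigma_vox)

# Synthetic volume standing in for a normalized functional image.
vol = np.random.default_rng(1).standard_normal((64, 64, 28))
brainstem_opt = smooth_fwhm(vol, 3.0)   # small kernel: subcortical signals
cortex_opt = smooth_fwhm(vol, 10.0)     # large kernel: cortical signals
```

The larger kernel pools signal over a wider neighborhood, which suits the extended and anatomically variable cortical regions; the small kernel preserves the few-millimeter brain stem structures.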
Second level random-effects analyses were implemented by extracting the BOLD signal change in response to monaural sound stimulation from individual regions of interest (ROIs). The magnitude of lateralization in the cortical and subcortical ROIs was assessed by computing lateralization indices (LIs) of brain activation, that is, dividing the difference of the BOLD response to the right and left monaural sounds by their sum and expressing the result in percent: LI = 100 × [BOLD (right) − BOLD (left)]/[BOLD (right) + BOLD (left)]. Thus, positive LIs signify a stronger response to right than to left monaural sounds, whereas negative LIs signify a stronger response to left than to right monaural sounds. Taking individual LIs as random effects, the significance of lateralization across the group was assessed with Student's t-tests (Gosset 1908). We did not apply the random-effects model directly to the SPM because, although established for cortical brain responses, it is not well suited for the analysis of small subcortical activations. The limited extent of these activations converts small location differences into a large interindividual variance. Subcortical ROIs were defined in 2 steps: First, a sphere of 8-mm radius was drawn around the location of the respective structure on the individual anatomical scans with reference to standard human brain atlases. Second, the local z-score maximum in individual SPMs that fell into the sphere and its 8-voxel neighborhood formed the final ROI. This combined anatomy- and function-based definition is necessary for structures, like the medial geniculate bodies (MGBs), that are not clearly discernible on the anatomical images. It also compensates for the nonlinear displacement of anatomical loci between structural and functional scans.
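The LI computation and the random-effects test described above amount to a few lines of code. The per-subject effect sizes below are made-up numbers for illustration, not data from the study:

```python
import numpy as np
from scipy import stats

def lateralization_index(bold_right, bold_left):
    """LI = 100 * (R - L) / (R + L); positive values signify a stronger
    response to right-ear than to left-ear monaural stimulation."""
    r = np.asarray(bold_right, dtype=float)
    l = np.asarray(bold_left, dtype=float)
    return 100.0 * (r - l) / (r + l)

# Hypothetical per-subject effect sizes for one ROI (illustrative only):
# BOLD responses to right- and left-ear monaural sounds.
bold_r = np.array([1.8, 2.1, 1.9, 2.4, 2.0, 2.2])
bold_l = np.array([1.1, 1.3, 1.0, 1.5, 1.2, 1.4])
li = lateralization_index(bold_r, bold_l)

# Treat the individual LIs as random effects and test against zero.
t_stat, p_val = stats.ttest_1samp(li, 0.0)
```

Because the LI is a ratio, it is independent of each subject's overall response amplitude, which is what makes it suitable as a random effect across subjects.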
The activation produced by the sum of both monaural sound conditions was compared with the silent baseline condition to highlight cortical and subcortical regions sensitive to noise stimuli. The activated regions included the CNs, the inferior colliculi (ICs), the MGBs, and the ACs on both sides of the brain. The Talairach coordinates of the most significant voxel in each structure in this contrast were as follows (x, y, z)—left CN: −14, −42, −30; right CN: 10, −42, −30; left IC: −8, −36, −3; right IC: 4, −36, −3; left MGB: −17, −30, 0; right MGB: 13, −30, −3; left AC: −47, −27, 12; and right AC: 40, −35, 25. Subsequently, the monaural conditions were individually compared with the silence condition as well as with each other in order to reveal asymmetries in the cortical and subcortical activation during monaural stimulation.
Activation during Monaural Left- and Right-Ear Stimulation
The activation during monaural stimulation of the left and right ear was compared with the activation during the silent baseline condition in 2 separate contrasts. For experiment 1, SPMs of these 2 contrasts are shown in Figure 2.
Left-ear stimulation caused a significant activation of the left CN (see first and third row in Fig. 2). Activation of the right CN just reached the significance criterion (P < 0.001). In subsequent auditory structures, activation shifted to the contralateral side. The right IC and MGB responded more strongly than their left-side counterparts, which barely reached the significance criterion. The AC was activated bilaterally, with the activation being more pronounced in the right hemisphere. At the level of the CN, the response to right-ear stimulation was essentially mirror symmetric to the response to left-ear stimulation, that is, the right monaural sounds elicited a strong activation in the right CN and a weak activation in the left CN. Beyond the CN, however, the activation pattern to right-ear stimulation differed markedly from being a mirror-symmetric equivalent of the pattern during left-ear stimulation (second and fourth row in Fig. 2): ICs, MGBs, and ACs on the left and right sides responded with more or less equal strength to the right-ear stimulation.
This difference in the responses to left- and right-ear stimulation was further analyzed by extracting a measure of the activation strength from the most significant voxel in each auditory structure (Fig. 3). Activation strength refers to contrast-weighted parameter estimates from the general linear model (effect sizes). In experiment 1, we found a hemispheric asymmetry in the IC, MGB, and AC responses: The right-side structures responded about equally strongly to sound stimulation from the left and right ear (no significant differences in effect sizes), whereas the left-side structures responded predominantly to the right-ear stimulation (t-test—IC: P < 0.0001, t = 7.35; MGB: P < 0.001, t = 3.84; and AC: P < 0.0001, t = 9.8; Fig. 3A). Experiment 2 tested whether this functional asymmetry was related to the presence of binaural sounds in experiment 1 by replacing them with silent intervals. In experiment 2, ICs, MGBs, and ACs on both sides responded more strongly to contralateral stimulation (Fig. 3B). No hemispheric asymmetry was detected. In both experiments, monaural stimulation elicited stronger activation of the ipsilateral CN than the contralateral CN.
Responses across Subjects
To test whether the results were consistent across individuals, effect sizes were extracted from ROIs for all auditory structures in individual subjects, and LIs were computed as a measure of the magnitude of lateralization of the brain responses. From the results of the group analysis, we expected ipsilateral LIs in the left and right CNs in both experiments, contralateral LIs in all auditory structures above the CN in experiment 2 and in the left-side structures above the CN in experiment 1, but LIs close to zero in the respective right-side structures in experiment 1.
Figure 4 shows the LIs of all subjects in the left- (left column) and right-side ROIs (right column). Most subjects had clearly identifiable activation maxima in all structures. In both experiments, the majority of CN responses were stronger to ipsilateral than to contralateral stimulation (31 out of 48); only 7 out of 48 showed lateralization to the contralateral side (remaining responses: 1 nonlateralized, 9 not detected). The left IC, MGB, and AC exhibited clear contralateralization in both experiments (significant across subjects in almost all cases, trend in the left MGB in experiment 1; of 72 measurements, 59 [82%] were contralateral, 6 [8%] ipsilateral, 4 [6%] nonlateralized, 4 not detected). In the right IC, MGB, and AC, the pattern of lateralization differed between the 2 experiments: lateralization in experiment 1 was weaker and less consistent in direction (15 out of 36 [42%] contralateral, 10 [28%] ipsilateral, 5 [14%] nonlateralized, 6 [17%] not detected) than in experiment 2 (29 out of 36 [81%] contralateral, 4 [11%] ipsilateral, 1 [3%] nonlateralized, 2 [6%] not detected). The individual data thus confirmed the results of the group analysis. The weak group-level lateralization in the right-side structures in experiment 1 appeared to result from 2 factors: a greater proportion of the right-side than the left-side structures showed an LI close to zero, and among those right-side structures that exhibited a sizable LI, there was no consistency in direction of the lateralization.
Our results demonstrate a context- and not stimulus-dependent functional asymmetry in the human IC, MGB, and AC in response to left and right monaural noise-burst stimulation. The lateralization pattern observed is similar to the one reported for auditory spatial processing and different from the left hemisphere dominance for speech processing. Importantly, this asymmetry is susceptible to experimental manipulation and thus does not solely reflect fixed anatomical asymmetries in the auditory pathway.
Responses to Left- and Right-Ear Stimulation
Our experimental paradigm was sensitive enough to assess asymmetrical responses in subcortical structures, as indicated by the interindividual consistency of results and the observed activation shift from ipsi- to contralateral between the CN and the subsequent auditory pathway. The contralateral CN responses might have been caused by bone conduction of sound presented to the ipsilateral ear. The effect of bone conduction is considerable at high sound intensities; cortical activation has been detected during 95-dB monaural stimulation of the deaf ear in unilaterally deaf subjects (Scheffler and others 1998). The observed activation shift is in agreement with the contralaterality established in the brain stem between CN and IC (“acoustic chiasm,” Glendenning and Masterton 1983). In humans, this is supported by neuroimaging studies demonstrating that monaural sound stimulation elicits stronger responses in the hemisphere contralateral to the stimulated ear (Loveless and others 1994; Hirano and others 1997; Scheffler and others 1998; Jäncke and others 2002; Suzuki and others 2002). In experiments 1 and 2, we found stronger responses to contralateral than ipsilateral stimulation in the left IC, MGB, and AC. The ratio of contra- to ipsilateral response magnitude is consistent with previous studies (Scheffler and others 1998; Jäncke and others 2002).
In contrast to previous studies, the right IC, MGB, and AC responded about equally strongly to sound stimulation from the left and right ear in experiment 1. The asymmetry is clearest in AC and IC. On the left side, both structures exhibited a clear contralateral stimulus preference: responses to right monaural stimulation were significantly larger than responses to left monaural stimulation. In contrast, no consistent side preference was detected in the right-side IC or AC in experiment 1; these structures responded on average about equally well to stimuli from both ears. The MGB showed a trend for the same lateralization pattern, although less consistent across individuals. The variability of the MGB responses may have partly been caused by the ascending slice acquisition, which reduces the benefit of cardiac triggering in superior slices.
It is unlikely that the results of experiment 1 can be explained by a spillover of the BOLD responses in the binaural conditions. Auditory BOLD responses decay to baseline within about 8 s after stimulus offset (Belin and others 1999; Backes and van Dijk 2002), which is short in comparison with the block length of >50 s in our experiments.
There is a subthreshold trend in our participant sample for the left AC and MGB to respond more strongly to monaural sounds than the same structures on the right side. This tendency in the AC, but not in the MGB, is in accord with the observation by Devlin and others (2003) of stronger responses to monaural tones in the left than right AC. That study used a frequency discrimination task, which might have contributed to the stronger left-side activations. Brechmann and Scheich (2005) demonstrated that discriminating pitch direction may induce such a response laterality. We used a non–sound-related distraction task in the present experiment.
Conditions Under Which Functional Laterality May Occur
There is an ongoing debate about whether physical or semantic aspects of sounds are the main determinant of hemispheric lateralization. Some studies suggest that the presence of certain acoustic features (such as complex spectrotemporal modulations [Zatorre and Belin 2001; Schönwiesner and others 2005] or sound movement [Krumbholz, Schonwiesner, von Cramon, and others 2005]) may be sufficient to induce lateralized activation. Others, however, find changes in lateralization when the same stimuli are presented in different listening contexts (such as pitch classification task vs. no task [Brechmann and Scheich 2005] or speech vs. nonspeech context [Shtyrov and others 2005]). Such “bottom–up” versus “top–down” accounts for differential lateralization of neural activity are not exclusive options; the same neuronal populations may be activated by top–down and bottom–up projections, such that their activity depends on both physical and semantic stimulus aspects. Here, we demonstrate that the long-term (>25 s) stimulus context of auditory spatial processing modulates the laterality of responses to identical stimuli even when attention is directed toward the visual modality. This result goes beyond previous reports of contextual effects. The changes in lateralization observed by Brechmann and Scheich (2005) are caused by changes in attention with different task instructions. Shtyrov and others (2005) found changes in lateralization of responses to a short noise burst when it was embedded in a word, pseudoword, or noise. They did not modify task instructions (as in our study, the subjects' attention was guided toward the visual modality), but their effects operate on the much shorter time scale of milliseconds, compared with the time scale of tens of seconds in our experiment.
Devlin and others (2003) reported that both left and right monaural stimulation with amplitude-modulated pure tones activated the left primary AC more strongly than the right. The authors suggested that this result points to a general left hemisphere dominance for auditory processing that might have contributed to the evolution of the speech-dominant left hemisphere. Although our demonstration of a different lateralization pattern does not exclude that such a left hemisphere bias may exist in many listening situations, it shows that this bias cannot account for all situations. The asymmetry observed here (experiment 1) relates to auditory spatial rather than speech processing. We explicitly included an auditory motion condition, and the resulting lateralization pattern resembles the hemisphere specialization for the processing of acoustic spatial information found in a lesion study (Zatorre and Penhune 2001) and an FMRI study (Krumbholz, Schonwiesner, von Cramon, and others 2005). If the binaural motion stimuli are replaced by silent intervals, the hemispheric asymmetry vanishes and a contralateral activation predominance emerges. This finding suggests that the hemispheric distribution of auditory information throughout the auditory pathway is not fixed but rather depends on the acoustical context.
Top–Down or Bottom–Up?
The response asymmetry observed along the auditory pathway may either originate in the brain stem and then be reflected at the level of higher processing structures (bottom–up mechanism) or, alternatively, be generated in the cortex and projected down to subcortical structures (top–down mechanism). It may also result from a combination of both. The bottom–up hypothesis would imply a modulation of neuronal responses in subcortical structures by the binaural motion stimuli. This modulation would have to extend over at least half a block length of the current experiment (>25 s). A stimulus-specific adaptation over tens of seconds has recently been found in AC neurons but, importantly, not in MGB neurons (Ulanovsky and others 2004). Furthermore, IC and MGB do not specifically respond to binaural motion stimuli; such responses are mainly found in the planum temporale (Krumbholz, Schonwiesner, Rubsamen, and others 2005). This finding argues strongly for a top–down mechanism evoking the observed functional asymmetry.
A lateralized response of the AC could be projected down to subcortical levels via efferent auditory projections—the corticofugal system. Monosynaptic projections descend from AC to MGB (FitzPatrick and Imig 1978; Winer and others 2001), IC (FitzPatrick and Imig 1978), and CN (Jacomme and others 2003). The main recipient is the MGB. Remarkably, efferent tracts from AC to MGB are considerably larger than afferent projections from MGB to AC (Winer and others 2001). Winer and others (1998, 2001) also demonstrated a significant projection from every AC field to the IC in various mammals. These projections are mainly ipsilateral (Saldana and others 1996; Druga and others 1997). Focal electrical activation of the AC elicits changes in frequency tuning, sensitivity, and temporal response pattern in IC neurons (Yan and Suga 1996, 1998; Zhang and others 1997; Zhang and Suga 2000; Yan and Ehret 2001, 2002). Hence, the corticofugal system seems capable of modulating the activation balance between left and right ICs and MGBs. There are direct demonstrations of strong efferent modulation of activity even in the lower brain stem down to the auditory periphery in cats (Hernandez-Peon and others 1961) and humans (Perrot and others 2005).
Top–down projections can denote either recurrent corticocortical connections (those might convey cognitive effects from, for instance, prefrontal cortex to sensory cortices) or projections from sensory cortices to subcortical structures, as outlined above. In the present experiments, we recorded activity from the AC and subcortical structures. There was no significant differential activation outside the superior temporal plane in any of the experimental conditions. We can therefore only draw conclusions about the latter kind of efferent projection. In addition, most of our participants reported that they were not aware of the sound stimulation while concentrating on the visual task for most of the experimental time, suggesting that the asymmetry observed in experiment 1 relies on interactions between low-level AC and subcortical structures.
In conclusion, the present study reports evidence for a functional hemispheric asymmetry in the left- and right-side AC and subcortical structures in response to monaural noise-burst stimulation. Importantly, this asymmetry depends on the acoustic context in which the stimuli are presented. Infrequent binaural stimulation shifts the monaural responses in the AC and in subcortical structures to resemble the hemispheric asymmetry for auditory spatial processing. At present, the dependence on acoustic context and its long time constant argue for a top–down mechanism underlying the observed laterality effect. Such a mechanism would be consistent with previous studies of the functional anatomy underlying cerebral specialization in the visual system, which have demonstrated in the context of lexical versus visuospatial decision making that top–down mechanisms are important determinants of hemispheric specialization (Stephan and others 2003). This suggests that such a context- rather than stimulus-dependent functional lateralization of neural activations constitutes a more general mechanism of cerebral specialization. The current study extends prior studies by showing that these mechanisms also apply to auditory processing and, furthermore, can also be observed at the level of subcortical processing.
This study was supported by the Max Planck Society, Helmholtz Association, German National Academic Foundation (MS), German Research Foundation (DFG-KFO-112, GRF), and Bundesministerium für Bildung und Forschung (BMBF, BICW). We thank one anonymous referee and Lutz Jancke for helpful comments. Conflict of Interest: None declared.