Optic-flow fields can induce the conscious illusion of self-motion in a stationary observer. Here we used functional magnetic resonance imaging to reveal the differential processing of self- and object-motion in the human brain. Subjects were presented with a constantly expanding optic-flow stimulus, composed of disparate red–blue dots, viewed through red–blue glasses to generate a vivid percept of three-dimensional motion. We compared the activity obtained during periods of illusory self-motion with periods of object-motion perception. We found that the right MT+, precuneus, as well as areas located bilaterally along the dorsal part of the intraparietal sulcus and along the left posterior intraparietal sulcus were more active during self-motion perception than during object-motion. Additional signal increases were located in the depth of the left superior frontal sulcus, over the ventral part of the left anterior cingulate, in the depth of the right central sulcus and in the caudate nucleus/putamen. We found no significant deactivations associated with self-motion perception. Our results suggest that the illusory percept of self-motion is correlated with the activation of a network of areas, ranging from motion-specific areas to regions involved in visuo-vestibular integration, visual imagery, decision making, and introspection.
When we move through a given environment, proprioceptive, vestibular, acoustic, and visual information signal our position, orientation, displacement, and acceleration. By integrating information over the different sensory modalities, we construct a coherent perception of self-motion. However, under certain circumstances the illusion of self-motion may also emerge in a stationary observer. Systematic transformations of the optical array that resemble the visual stimulation of a moving observer, that is, the optic flow (Gibson 1950), sometimes result in the false impression of self-motion (Dichgans and Brandt 1972; Lee 1980). This purely visually induced percept of self-motion is called “vection” and has been studied intensively over the past decades (Koenderink 1986; Lappe et al. 1999).
Recently, the neural correlates of vection have also been studied by functional brain imaging methods. Several studies used coherently moving flow fields, likely to induce vection, and compared the obtained signals with displays where the dots were stationary or moving randomly. Most of these studies used random-dot patterns rotating coherently, simulating self-rotation (Brandt et al. 1998; Previc et al. 2000; Beer et al. 2002; Deutschländer et al. 2004), whereas others used expanding/contracting dot patterns, simulating forward movement in a three-dimensional (3D) environment (de Jong et al. 1994; Rutschmann et al. 2000; Beer et al. 2002; Deutschländer et al. 2004).
Although these studies discovered a large network of areas activated or deactivated during optic flow, some concerns hinder the interpretation of these results with respect to the neural representation of self-motion. First, some earlier studies did not test whether the visual stimulation had indeed induced self-motion. de Jong et al. (1994) and Previc et al. (2000) simply assumed that the applied coherently moving stimulus results in vection. Because both the duration and the subjectively estimated strength of vection show large interindividual differences (Kennedy et al. 1996), it is difficult to relate their results to self-motion per se. Second, most previous studies compared brain activity observed during coherent movement with either stationary (Deutschländer et al. 2004; Slobounov et al. 2006) or incoherently moving patterns (de Jong et al. 1994; Brandt et al. 1998; Previc et al. 2000; Beer et al. 2002). Such differential activity does not necessarily reflect the perceptual difference between self- and object-motion, but can be due to the physical stimulus differences themselves. Thus, the above-mentioned studies have not provided direct information regarding the generation of the self-motion illusion, or vection.
The only exception to the above limitations is the study of Kleinschmidt et al. (2002), who generated periods of circular vection with a rotating wind-mill pattern and compared the resulting blood oxygen level–dependent (BOLD) responses as subjects alternated between the perceptual states of object-motion and vection. Their major finding was that during vection the vestibular parieto-insular cortex was deactivated and early motion-sensitive areas were less active than during object-motion. However, the location and extent of cortical areas activated by rotating or expanding/contracting patterns (Beer et al. 2002; Deutschländer et al. 2004), as well as by moving structured shapes (i.e., a wind-mill) and random-dot fields (Sunaert et al. 1999), vary considerably. Here we used functional magnetic resonance imaging (fMRI) to identify the neural correlates of visually induced self-motion in depth. To create the vivid percept of 3D motion in depth we used an expanding optic-flow stimulus, composed of disparate red–blue spheres and viewed through red–blue anaglyphic glasses. BOLD signals obtained during periods of vection were compared with those obtained during periods that led to the percept of object-motion.
Materials and Methods
Twelve naïve, healthy volunteers (6 females) with normal or corrected-to-normal vision participated in the experiments (mean age: 22 ± 2 years). All subjects gave written consent and were screened for MRI compatibility. The procedures were approved by the Ethical Committee of the University of Regensburg.
Display, Stimuli, and Procedures
Visual stimuli were presented with Presentation 9.9 (Neurobehavioral Systems Inc., Albany, CA) on a standard PC, equipped with a standard 3D graphics card, and back-projected via an LCD video projector (JVC, DLA-G20, Yokohama, Japan) onto a translucent circular screen (approximately 30° diameter) placed inside the scanner bore at 63 cm from the observer. The projector ran at 72 Hz with a resolution of 800 × 600 and a color resolution of 3 × 8 bit (RGB). For animations, the images were updated every 3 frames, resulting in 24 frames per second.
Figure 1 depicts our experimental and control conditions (Fig. 1a) as well as a schematic illustration of the simulated 3D space (Fig. 1b). Stimuli consisted of 800 animated spheres. In the coherent condition (COH), the spheres moved coherently with an average speed of 6 deg/s and increased in size gradually from 0.2° to 3.5° radius, simulating the effect of the observer moving forward along the line of sight; in the other conditions, they moved randomly (INCOH) in the x–y plane with the same speed or remained stationary (STAT). To prevent the spheres from occluding the fixation spot, we defined a circular region along the center, with a radius of 1.1° in the x–y plane, in which no objects were present. To make the self-motion illusion in the COH condition more vivid, we added a slight sinusoidal translation in the x-direction (0.15 Hz; range: 5°). A static sphere (radius = 0.4°) presented in the center of the screen served as a fixation dot.
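The kind of dot-field update described above can be sketched in a few lines. This is a minimal illustration rather than the actual Presentation script: the depth range, forward speed, and dot-recycling rule are assumptions made for the example.

```python
import numpy as np

def update_flow_field(xyz, dt, t, v_forward=1.0,
                      sway_hz=0.15, sway_amp_deg=2.5):
    """Advance a 3D dot field one step to simulate forward self-motion.

    xyz : (N, 3) dot positions in observer coordinates (z = depth).
    Moving the observer forward along the line of sight is equivalent
    to decreasing every dot's depth; dots that pass the observer are
    recycled to the far plane. A slow sinusoidal x-offset adds the
    lateral sway used to make the illusion more vivid.
    """
    xyz = xyz.copy()
    xyz[:, 2] -= v_forward * dt              # observer translates along z
    xyz[xyz[:, 2] <= 0.1, 2] += 10.0         # recycle dots behind the eye
    # perspective projection to visual angles (degrees)
    x_deg = np.degrees(np.arctan2(xyz[:, 0], xyz[:, 2]))
    y_deg = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 2]))
    x_deg += sway_amp_deg * np.sin(2.0 * np.pi * sway_hz * t)
    return xyz, np.column_stack([x_deg, y_deg])
```

Under this projection, near dots sweep outward faster than far dots, producing the expanding-flow percept and the gradual growth in angular size described above.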
Our pilot psychophysical experiments (n = 9) suggested that viewing this stimulus without additional stereoscopic cues results in the conscious illusion of self-motion for only 14 ± 5% of the time. To increase the vividness of the illusion and the proportion of time spent in vection, we converted our images for anaglyphic red/blue viewing using NVIDIA stereo drivers (Santa Clara, CA) at 2.3° disparity levels. The perspectively correct images were rendered using Direct3D (Microsoft, Redmond, WA). Subjects viewed the stimuli binocularly through monochromatic red and blue standard anaglyphic filters and a mirror located above their eyes. The average luminance of the anaglyph-filtered stimulus was 12 cd/m2 for the red and 1.2 cd/m2 for the blue spheres. The subjects' task was to fixate a sphere located in the center of the 3D space. Every 9 s (i.e., 6 times during COH and once during INCOH and STAT), the average luminance of the screen was reduced to approximately 44% of its original value for 300 ms and then increased back to the original level. At the occurrence of this dimming cue, subjects were instructed to signal by a button-press whether during the preceding period they felt predominantly self-motion or object-motion (Fig. 1a). Misses, responses exceeding a latency of 2 s, and periods where subjects could not reliably decide between the illusory self-motion and the object-motion state were collected and analyzed separately (undecided). Sixty seconds of such COH displays were followed by 15 s of INCOH and 15 s of STAT. During 1 scan session, this stimulus sequence was repeated 30 times, resulting in a total scan time of 45 min. Subjects were familiarized with the anaglyphic stimulus presentation and the task by practicing for at least 30 min immediately prior to scanning.
To test whether the eye-movement patterns of our subjects differed between self-motion and object-motion periods, we recorded the eye movements of 4 of our subjects after the scanning session, using the same stimulus parameters and paradigm as in the BOLD measurements (Skalar Iris, Skalar Medical BV; 250 Hz). We compared the frequency and amplitude of saccades during self-reported periods of vection with those observed during periods where subjects reported object-motion only.
Imaging was performed using a 3-Tesla MR head scanner (Siemens Allegra, Erlangen, Germany). For the functional series we continuously acquired 1802 images with 24 interleaved axial slices using a standard T2*-weighted echo-planar imaging sequence (time repetition [TR] = 1500 ms; time echo [TE] = 30 ms; flip angle = 90°; 64 × 64 matrix; in-plane resolution: 3 × 3 mm; slice thickness: 3 mm). To cover most of the areas involved in optic-flow processing, the slice distance factor was adjusted between 0% and 10% until the slices completely covered the occipital lobe (sometimes partly excluding V1), posterior parietal areas (dorsal boundary), the superior and middle temporal lobe, and the anterior insula (ventral boundary). After the functional scans, high-resolution sagittal T1-weighted images were acquired using a magnetization-prepared rapid gradient echo sequence (MP–RAGE; TR = 2250 ms; TE = 2.6 ms; 1 mm isotropic voxel size) to obtain a 3D structural scan. The parameters of the MP–RAGE sequence were adapted from the Alzheimer's Disease Neuroimaging Initiative project (http://www.loni.ucla.edu/ADNI/). This sequence has been optimized to differentiate between white and gray matter.
Preprocessing and statistical analysis of the data were performed using SPM5 (Wellcome Department of Imaging Neuroscience, London, UK), running under Matlab 7.0 (Mathworks, Natick, MA). The functional images were corrected for acquisition delay and realigned to spatially match the first image. The structural image was realigned to a mean image computed from the functional series. All images were then normalized to the MNI-152 space. The realigned and normalized functional series was resampled to 2 × 2 × 2-mm resolution and spatially smoothed with a Gaussian kernel of 8-mm FWHM.
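As a sketch of the final smoothing step: an 8-mm FWHM kernel translates into a Gaussian standard deviation via sigma = FWHM / (2·sqrt(2·ln 2)), expressed in voxels of the resampled 2-mm grid. The function and parameter names below are illustrative, not SPM5 internals.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_bold(volume, fwhm_mm=8.0, voxel_mm=2.0):
    """Gaussian-smooth a BOLD volume with a kernel specified in mm FWHM."""
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~3.40 mm
    return gaussian_filter(volume, sigma=sigma_mm / voxel_mm)
```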
The convolution of a reference hemodynamic response function with box cars, representing the onsets and durations of the experimental conditions, was used to define the regressors for a general linear model analysis of the data. For each of the experimental conditions we modeled a period of 9 s, beginning 7 s before and ending 2 s after the onset of the response cue. The remaining periods were assigned to different regressors depending on the subjects' ratings of perceived self-motion, object-motion, or undecided passages. Low-frequency components were excluded from the model using a high-pass filter with a 128-s cut-off. Variance that could be explained by previous scans was removed using an autoregressive AR(1) model, and movement-related variance was accounted for by the spatial parameters resulting from the realignment procedure. The resulting regressors were fitted to the observed functional time series. Individual t-contrasts were calculated between COH, INCOH, and STAT, as well as between all object- and self-motion rated periods within COH. For the random-effects analysis, the contrast estimates entered a simple t-test at the second level. The resulting t-maps were thresholded at Puncorrected < 0.001 (T = 4.3). Clusters surpassing a threshold of Pcorrected < 0.05 were considered significantly activated. For visualization, the thresholded t-images were superimposed onto an average structural image of our subjects. For reporting, stereotaxic Montreal Neurological Institute (MNI) coordinates were converted into Talairach (TAL) space using the Wake Forest University Pickatlas (Version 1.02; Maldjian et al. 2003).
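The regressor construction described above can be sketched as a boxcar convolved with a canonical double-gamma hemodynamic response function. The shape parameters below follow commonly used SPM-style defaults and are assumptions for illustration, not values taken from the study.

```python
import math
import numpy as np

def canonical_hrf(tr, duration=30.0):
    """Canonical double-gamma HRF sampled at the TR (SPM-style shape)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / math.gamma(6)          # response, peaks ~5 s
    undershoot = t ** 15 * np.exp(-t) / math.gamma(16)  # undershoot, ~15 s
    h = peak - undershoot / 6.0
    return h / h.sum()

def make_regressor(onsets_s, durations_s, n_scans, tr):
    """Boxcar over the condition periods, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset, dur in zip(onsets_s, durations_s):
        box[int(onset / tr):int((onset + dur) / tr)] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]
```

The resulting regressors, one per perceptual state, would then be fitted to each voxel's time series, e.g. by ordinary least squares.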
Region of interest (ROI) analysis was conducted using the MARSBAR 0.38 toolbox for SPM (Brett et al. 2002). Motion-sensitive areas were determined as areas responding more strongly to COH than to STAT (Puncorrected < 0.01; T = 2.8), whereas areas selectively responding to coherent motion were determined by the COH versus INCOH contrast (Puncorrected < 0.01; T = 2.8). Following the study by Sunaert et al. (1999) we bilaterally selected 3 motion-sensitive regions located in the dorsal intraparietal sulcus lateral/medial (DIPSL/M), at the boundary between the parieto-occipital and the intraparietal sulci (POIPS), and area MT+.
Additionally, a single control ROI was defined at the peak voxel falling in early visual cortex (8-mm radius) responding to both COH and INCOH versus STAT condition.
For visualization, the results of both differential contrasts (COH vs. STAT and COH vs. INCOH) were superimposed at the selected threshold (Puncorrected < 0.01; T = 2.8) onto the population average landmark and surface–based (PALS-B12) standard brain (van Essen 2005) using Caret 5.5 (van Essen et al. 2001; http://brainmap.wustl.edu/caret). The ROIs were selected individually at the single-subject level from thresholded T-maps (Puncorrected < 0.001; T = 3.1). Areas matching our anatomical criteria and lying closest to the corresponding reference cluster (resulting from the random-effects analysis for the differential contrasts) were considered their appropriate equivalents at the single-subject level. A time series of the mean voxel value within a sphere (radius = 8 mm) around the local peak of each area of interest was calculated and extracted. General linear model construction and estimation were performed identically to the main analysis, except that it was not possible to control for slow signal drifts (high-pass filtering) and autocorrelative effects. Percent signal changes for the ROIs were calculated for the periods of object- and self-motion during COH. The resulting values for object- and self-motion periods were compared by paired t-tests (Puncorrected < 0.05; t > 1.8, n = 10).
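The ROI extraction and percent-signal-change computation amount to averaging voxels within an 8-mm sphere around a peak and normalizing condition means by the overall mean signal. A minimal sketch, in which the array layout and helper names are assumptions:

```python
import numpy as np

def sphere_roi_mean(data4d, peak_vox, radius_mm=8.0, voxel_mm=2.0):
    """Mean time course over voxels within radius_mm of a peak voxel."""
    grid = np.indices(data4d.shape[:3])
    offset = np.asarray(peak_vox)[:, None, None, None]
    dist_mm = np.sqrt(((grid - offset) ** 2).sum(axis=0)) * voxel_mm
    return data4d[dist_mm <= radius_mm].mean(axis=0)

def percent_signal_change(ts, condition_mask):
    """Signal during a condition relative to the whole-run mean, in %."""
    baseline = ts.mean()
    return 100.0 * (ts[condition_mask].mean() - baseline) / baseline
```

Computed per subject for the self- and object-motion masks, such values are what the paired t-tests above would compare.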
All of our subjects reported the illusion of self-motion during optic-flow (COH) stimulation in the scanner (Fig. 2). Furthermore, the proportions of self-motion and object-motion periods did not differ across our subjects (chi-squared test; χ2 = 2.77; n.s.). This suggests that, on average, our 3D optic-flow stimulus induced the illusion of self-motion and the perception of object-motion to a comparable extent. Furthermore, the illusion of self-motion was reported significantly more often during the last 18 s than during the first 18 s of the 60-s COH stimulation periods (t-test, t = 1.98; P = 0.05). Because vection is known to build up over time (Johansson 1977; Bonato and Bubka 2006; Wright et al. 2006), this result supports our claim that our subjects reliably reported the occurrence of the subjective illusion.
For 2 subjects, the frequency of intercue periods judged as “object-motion” was below 10% of the total number of periods. For this reason, the fMRI data of these subjects were not included in the analysis. The results of the oculomotor control experiment, in which we determined the effect of self- and object-motion perception on the frequency and amplitude of saccades, indicated that our subjects fixated well and that their gaze distributions did not differ significantly between periods of self- and object-motion perception. We found no significant difference across trial types in the mean saccade amplitude (T = −0.21, P = 0.83; and T = 1.62, P = 0.10; for x and y positions, respectively) or in the number of saccades (T = −0.73, P = 0.46 and T = −1.33, P = 0.20 for x and y positions, respectively). This result suggests that it is unlikely that our BOLD results were confounded by different eye-movement patterns during self- and object-motion periods.
The coordinates of the control regions in early visual cortex are presented in Table 1. When we extracted the imaging data from an 8-mm radius spherical ROI centered on the peak voxel falling in early visual cortex, the neural responses were similar for self-motion and object-motion periods (t = 0.5, P = 0.62). Figure 3 presents the percent signal changes in self-motion and object-motion periods.
| COH versus STAT | COH versus INCOH | COH+INCOH versus STAT |
Note: Early visual cortex (V1/V2) was identified by the COH + INCOH > STAT contrast. V1/V2, early visual areas; V5/MT+, human area V5.
Our optic-flow stimulus, when compared against a static control, activated several posterior visual areas known to be involved in motion processing. Figure 3 presents the approximate locations of the motion-related regions that showed differential activation between self- and object-motion periods; for each region, an ROI was defined and the mean activation was averaged across all voxels of the ROI and across all subjects. A major difference was observed over the right middle temporal gyrus, corresponding to the MT+ complex (Table 1). Localizing MT+ with our COH versus STAT contrast (10 out of 10 subjects) yielded coordinates that are in close agreement with previous studies (Huk et al. 2002; Smith et al. 2006). The magnitude of MT+ activation was significantly larger (t = 3.1, P < 0.01) during periods of self-motion than during object-motion (Fig. 3). In addition, 3 areas along the intraparietal sulcus, corresponding to DIPSL/DIPSM (17 out of 20 hemispheres; Fig. 3), and another area located ventro-posterior to these on the left side, along the boundary of the POIPS (7 out of 10 subjects, corresponding to the POIPS of Sunaert et al. 1999; Fig. 3), showed significantly larger signal changes during self-motion than during object-motion (left DIPSL: t = 2.7, P < 0.02; right DIPSL: t = 4.0, P < 0.004; left POIPS: t = 2.3, P < 0.05).
To assess specifically the role of optic-flow–related areas in the production of the illusion of self-motion, we identified the brain regions that were more robustly activated by the coherent motion condition (COH) when measured against the condition with incoherent random motion (INCOH). This comparison yielded a cluster of activated voxels in POIPS (Table 1) of the left hemisphere (10 out of 10 subjects). We found a marginally significant difference (t = 2.1, P < 0.06) indicating a larger signal change during periods of self-motion than during object-motion for this region. Another cluster, located bilaterally between the inferior temporal and middle occipital gyri, reached our individual-level criteria in only 20% of the hemispheres and thus was not analyzed further.
Comparing Self- with Object-Motion States
To assess the role of brain areas that are not selectively activated by our motion stimuli, we compared the activations during periods of perceived self-motion with periods of perceived object-motion (Fig. 4) using the global GLM. The largest activation was found in the left parietal cortex (Table 2), corresponding to the precuneus. Additional significant BOLD signal increases were located in the depth of the left superior frontal sulcus, over the ventral part of the left anterior cingulate, as well as in the depth of the right central sulcus, at the level of the middle frontal gyrus. Subcortically, we also found activations in the basal ganglia (caudate nucleus/putamen). We found no significant deactivations during self-motion, even when the threshold of the t-maps in the random-effects analysis was relaxed to the less conservative value of Puncorrected < 0.01. This suggests that no cortical area was more active during object- than during self-motion in our experiments.
| Region | Side | x | y | z | Cluster size | P value |
| Superior frontal sulcus | L | −18 | 58 | 10 | 112 | 0.01 |
Our results show that there are several regions in the human brain that respond differentially to coherent optic flow and that, during optic-flow stimulation, several areas respond more robustly during the self-reported state of vection. In addition to visual areas in the dorsal pathway, these include parietal, frontal, cingulate, and subcortical regions.
The most well-studied area of the lateral occipital motion pathway is area MT+. Here we report that MT+ activations to coherently moving optic-flow stimuli are significantly higher during periods of self-motion illusion than during object-motion perception. Because the coherent motion stimulation was identical during self- and object-motion periods, it is plausible to suggest that the resulting differential BOLD signal reflects the different perceptual states. Although many previous studies related activations in MT+ to motion processing, its role in self- versus object-motion perception is rather unclear. On the one hand, Deutschländer et al. (2004) and Previc et al. (2000), using positron emission tomography (PET) imaging, found that MT+ was activated bilaterally during self-motion perception. On the other hand, early (de Jong et al. 1994) and more recent (Beer et al. 2002) PET studies found no significant differential activation in MT+/V5 when activations evoked by optic-flow stimuli (likely to result in self-motion illusion) were compared with those evoked by incoherent dot motion. In fact, many studies suggested that MT+ is driven more strongly by incoherent motion than by coherently moving stimuli (de Jong et al. 1994; McKeefry et al. 1997; Brandt et al. 1998; Previc et al. 2000; but see Rees et al. 2000; Morrone et al. 2000 for opposite results with stimuli frequently reversing their motion direction). We know that long-term or repeated exposure to the same stimulus reduces neural activity for various stimuli, including visual motion (for reviews see Grill-Spector et al. 2006; Krekelberg et al. 2006). This adaptation effect has been proposed to reduce the neural responses until they become too weak to distinguish from the responses to well-matched incoherent noise stimuli (Morrone et al. 2000), which would lead to similar activations for coherent and randomly moving stimuli.
An interesting hypothesis regarding the larger activation in MT+ during self-motion versus object-motion is that the illusion of vection may abolish this adaptation effect in the COH condition, thus leading to relatively higher activations than during the object-motion periods.
Earlier studies that compared coherently expanding or rotating patterns with static or incoherently moving patterns (see above) report differential activity that might simply be due to the physical differences of the stimuli and does not necessarily reflect the illusion of self-motion per se. The only human imaging study comparing self- versus object-motion under identical stimulation conditions (viewing a rotating wind-mill pattern) was conducted by Kleinschmidt et al. (2002). They found that right MT was more activated during object-motion, whereas the more anterior area MST (the medial superior temporal area) showed similar activations independently of the subjects' perceptual state but was activated transiently when the percept changed from one interpretation to the other. Because of the differences between the applied stimuli (high-contrast wind-mill pattern versus 3D expanding optic-flow field), it is difficult to compare our results with those of the Kleinschmidt et al. (2002) study. We can only propose that our optic-flow stimulus resulted in a stronger and more vivid illusion of self-motion, which would explain the higher activity of MT+ found in our present study during self-motion compared with episodes interpreted as object-motion.
The conclusion that MT+ indeed plays an active role in the creation of the self-motion illusion is supported by previous macaque single-cell recordings from MSTd, a possible homolog of the part of the human MT+ region that responds well to optic-flow stimuli. These studies show that a portion of MSTd neurons also receive extraretinal inputs (vestibular: Duffy 1998; Gu et al. 2006; smooth-pursuit eye movements: Ono and Mustari 2006; command-related corollary discharges: Sommer and Wurtz 2004). Furthermore, the fact that the heading preference of MSTd neurons is dominated by the visual input during combined visual/vestibular stimulation (Gu et al. 2006) implies that this region can play a significant role in the creation of self-motion illusions under circumstances in which visual stimulation implies self-motion whereas vestibular inputs suggest object-motion.
Human MT+ invariably includes area MT as well as the more anterior MST (Dukelow et al. 2001; Huk et al. 2002; Goossens et al. 2006). In the macaque, MT encodes the basic, local elements of motion, whereas the nonretinotopic area MST responds to higher-order motion stimuli such as optic flow and has been suggested to play a role in object- and self-motion perception. Unfortunately, our current analysis methods and the applied wide-field visual stimulus do not enable us to differentiate between MT and MST. However, the average y coordinates of our peak voxels in MT+ (−64 ± 2.5 mm) and the reported MST coordinates of a recent study using a 3D simulated optic-flow stimulus (Goossens et al. 2006; −63 ± 7 mm) are very similar. Thus, the fact that MST has greater specificity than MT for global expansion of dot patterns (Smith et al. 2006) and the fact that MST is activated by nonvisual smooth-pursuit eye movements (Dukelow et al. 2001) suggest that the anterior parts of MT+, presumably corresponding to MST, play a significant role in optic-flow–based self-motion perception in humans. These results emphasize the role of extraretinal inputs to MT+; however, further studies separating MT and MST are necessary to disentangle the roles of these 2 areas of the human cortex.
We found 4 loci showing significantly higher activations during self- than during object-motion in the parietal cortex. The bilaterally observed superior parietal activations along the intraparietal sulcus correspond to the DIPSM/DIPSL regions of previous studies (Goebel et al. 1998; Sunaert et al. 1999; Peuskens et al. 2001), and these areas are activated by optic flow. DIPSM/DIPSL plays a role in many spatial tasks, including heading estimation, motion in depth, and control of visuo-spatial attention (see Greenlee 2000 for a review). The coordinates of this area fit well with those of an area that showed bilateral activations (TAL: −22/32, −70/−60, 64/60) in the conjunction analysis of Deutschländer et al. (2004), in which linearly moving and rotating pattern-induced self-motion was compared with static patterns. This result and our present fMRI data, showing differential activation of this posterior parietal area during self- and object-motion, are in line with nonhuman primate studies as well: DIPSM/DIPSL is a possible homolog of macaque area 7a (Peuskens et al. 2001), an area known to be involved in the computation of self- and object-motion (Read and Siegel 1997; Phinney and Siegel 2000).
Another area, ventro-posterior to the left DIPSM/DIPSL, probably corresponding to the POIPS of Peuskens et al. (2001), was localized functionally by our COH versus STAT and (although only marginally significantly) by our COH versus INCOH contrasts. This area also exhibited significantly larger activations during self- compared with object-motion (Fig. 3). Previous studies found activations related to optic flow at coordinates corresponding well to this site (de Jong et al. 1994; Peuskens et al. 2001). The area also corresponds to the parieto-occipital area of Deutschländer et al. (2004; TAL: −22, −88, 24), activated during both linear and circular vection, and to that of Paradis et al. (2000; TAL: −33, −78, 39), obtained during observation of a 3D shape-from-motion display compared against random motion.
The largest cluster of the present study was situated in the precuneus, showing significantly more robust activation during self-motion periods than during object-motion. The precuneus is known to be involved in the execution (Simon et al. 2002) and coordination of movement (Wenderoth et al. 2005), in visuo-spatial imagery (Stephan et al. 1995; Ogiso et al. 2000; Hanakawa et al. 2003), and in processing information regarding one's own identity and past personal experiences (for a review see Cavanna and Trimble 2006), and it is activated by optic flow as well (de Jong et al. 1994; Deutschländer et al. 2004). The precuneus has several connections with vestibular processing regions such as the parieto-insular vestibular cortex (Cavanna and Trimble 2006). Thus, it is not surprising that this area is also involved in vection. The role of the precuneus in the generation of the self-motion illusion is also supported by the neurological study of Wiest et al. (2004), which describes a young epileptic patient, with a circumscribed ependymoma in the right paramedian precuneus, who experienced recurrent episodes of linear self-motion perception. This clinical observation, in line with the present data, supports the idea that the precuneus plays a significant role in the integration of vestibular and visual information regarding self-motion.
Recently, a novel retinotopic visual area, V6, with coordinates similar to our precuneus activation (x = −11 ± 7 mm; y = −72 ± 4 mm; z = 46 ± 13 mm), has been described. This area responds preferentially to extremely wide-field (up to 100°) stimulation (Pitzalis et al. 2006) and, in line with our present results, it was selectively activated by coherent motion of random-dot fields (Pitzalis et al. 2005). Because the macaque homolog of V6 is retinotopically organized with contralateral representation and very large receptive fields (Galletti et al. 1999), it is plausible that the activation in the precuneus represents the human homolog of area V6.
One possible explanation for the observed higher parietal activations during periods of self-motion compared with object-motion is that they simply reflect a heightened state of visual attention during self-motion periods. Indeed, such attentional modulation of occipital and parietal areas has already been found for moving surfaces (Saenz et al. 2002) as well as for illusory percepts (motion aftereffect; Huk et al. 2001). However, such effects were usually found throughout the early retinotopic visual areas as well. The fact that our control region, corresponding to the peak activations within V1/V2, showed similar activations for self- and object-motion periods suggests that our results are not confounded by different attentional states during self- and object-motion periods.
We found a differential activation in the depth of the right central sulcus, at the level of the middle frontal gyrus. This area is usually activated during imagined movements of the upper extremity (Stephan et al. 1995) and might correspond to area 3a (Guldin and Grüsser 1998; Lobel et al. 1998), an area receiving vestibular inputs in nonhuman primates (Lewis and Van Essen 2000). A very similar location (TAL: 44, −10, 44) was also found to be activated after galvanic (DC) stimulation of the vestibular organs (Lobel et al. 1998). Thus, it appears that the central sulcus receives vestibular and proprioceptive information in both human and nonhuman primates. Here we showed that this region is also activated during purely visually induced illusory states of self-motion, making it an ideal candidate for a role as an integrative relay station for various sensory information during self-motion.
The other frontal lobe activation was located in the depth of the rostral part of the left superior frontal sulcus. This area, as part of the anterior prefrontal cortex, has been proposed to play a role in the introspective evaluation of one's own mental states (for a review see Ramnani and Owen 2004). During scanning, our subjects were asked, at regularly occurring cues, whether they had experienced self- or object-motion during the previous period. This evaluation of one's own sense of self-motion versus a stationary state might explain the activity obtained in this area during the self-motion illusion.
Limbic Cortex and Subcortical Activation
The left anterior cingulate activation was close to the site recently found by Calhoun et al. (2002; TAL: −9, 38, 3) to be more active during simulated driving conditions than during passive viewing of the same scenes. Because the anterior part of the cingulate cortex (Brodmann area [BA] 24/32) is known to be involved in premotor functions and movement execution (Devinsky et al. 1995), the greater activity during self-motion may reflect the cognitive processing and response selection related to the illusion.
The only subcortical differential activation for self- versus object-motion was observed in the caudate nucleus, which has also been reported in previous studies of perceived self-motion (Previc et al. 2000; Deutschländer et al. 2004). The caudate nucleus is also activated during vestibular stimulation accompanied by visual stimulation, suggesting a role in the analysis of gravitational motion (Indovina et al. 2005).
Generation of the Self-Motion Illusion
In summary, our results suggest that the illusory percept of self-motion correlates with the activity of a large network of cortical areas. These areas include MT+, which is responsive to moving visual stimuli, as well as areas along the intraparietal sulcus and the precuneus, which respond to optic-flow stimuli and are involved in the estimation of one's own heading and in motor imagery. Self-motion illusions also activate an area in the central sulcus, which might play a key role in the integration of vestibular and visual information, and finally the frontal and limbic cortex as well as part of the basal ganglia, involved in the introspective and decision-related mechanisms of our task, leading to higher activations during self- compared with object-motion.
Previously, it has been proposed that a parieto-insular area identified as the possible "vestibular cortex" of the human brain (Guldin and Grüsser 1998) shows deactivation in response to rotating self-motion (roll vection) induced by rotating random-dot patterns (Brandt et al. 1998) or by a rotating windmill (Kleinschmidt et al. 2002; Deutschländer et al. 2004). The lack of such deactivations in our study suggests that rotational and translational self-motion (linear vection) are processed differentially in the vestibular cortex. Indeed, previous studies have suggested that roll vection results in stronger deactivations of this area than linear vection (Deutschländer et al. 2004). Furthermore, a recent study (Indovina et al. 2005), using visual motion congruent with the effects of gravity on falling objects, found an increased BOLD signal in the same area. Thus, it seems that various moving stimuli, suggesting certain types of self-motion, are processed differentially in the human central nervous system. Here we suggest that the conscious illusion of one's own progress in 3D space, induced by optic flow, results in the activation of a wide network of areas, ranging from motion-specific parietal areas to frontal areas involved in decision making and introspection.
European Union grant (EU IST Cognitive Systems, project 027198 “Decisions in Motion”); Bavarian Research Foundation (570/03); and Siemens Medical Solutions.
We thank Melanie Wulff for her assistance in carrying out the experiment and data analysis. Conflict of Interest: None declared.