During daily life, we reach and grasp objects located in a variety of positions in our visual-field. Where is the information regarding the visual (position) and motor (acting-hand) aspects integrated in the brain? To address this question, a functional magnetic resonance imaging experiment was conducted, in which 10 right-handed subjects used their right or left hand to grasp 3-dimensional tools, located to the right or left of a central fixation point. The posterior part of the intraparietal sulcus (IPS), the putative human homolog of the caudal-IPS, was found to be primarily involved in representing the visual location of the tools, whereas more anterior regions, the human homologs of the medial intraparietal and anterior intraparietal areas, primarily encoded the identity of the contralateral acting-hand. Quantitative analysis revealed 2 opposite visual and motor gradients along the posterior–anterior axis within the IPS: whereas the importance of the visual-field gradually diminished, the weight of the acting-hand became increasingly greater. Moreover, direct evidence for visuomotor interaction was found in all 3 IPS subregions, but not in occipital or frontal regions. These findings support the hypothesis that the human IPS comprises subregions with different properties, and that it is engaged in the visuomotor transformations necessary for visually guided prehension.
Think of yourself sitting in a restaurant, eating and chatting with a friend. While looking at your companion, you pick up the spoon placed to the right of your soup bowl or the fork that is to the left of your salad plate, to bring some food to your mouth. This example demonstrates that although we usually tend to gaze directly at the objects of our future actions, we often act on objects even when they are located in the peripheral visual-field. Moreover, we act with either of our hands. Yet, such a behavior is based on a complex process that involves several stages. First, the object (e.g., the spoon) is represented in the visual system, in retinotopic coordinates, according to its position in the visual space. In order to act on what we see, the representation of the object in space needs to be transformed into a framework suitable for motor action with either one of the hands. Finally, a movement can be performed.
How and where is the information about object position and hand identity integrated in the brain? It is well known that the occipital lobe is involved in the visual representation and processing of objects, whereas the frontal lobe is involved in the planning and execution of motor actions. It is generally assumed that the visual–dorsal pathway, leading from the occipital to the parietal lobe, is involved in the visual guidance of actions, determining “how” to interact with objects (Goodale and Milner 1992). Evidence from monkey studies suggests that the posterior parietal cortex (PPC), anatomically located between the occipital and frontal lobes, is involved in combining visual information regarding the shape and the position of the target object with motor information, such as the limb position of the acting effector (Sakata et al. 1999; Buneo et al. 2002; Cohen and Andersen 2002).
The involvement of the parietal cortex in visuomotor integration was also revealed by various impairments resulting from lesions in different regions of the human PPC (for reviews see Battaglia-Mayer et al. 2006; Culham et al. 2006). For example, damage to the PPC, particularly in the parieto-occipital (PO) junction, leads to a difficulty in reaching to visual targets, especially when located in the peripheral visual-field, in the absence of specific visual or motor deficits (as in optic ataxia; Rondot et al. 1977; Perenin and Vighetto 1988; Prado et al. 2005). Lesions involving the anterior lateral bank of the intraparietal sulcus lead to deficits in object grasping, whereas reaching is much less disturbed (Binkofski et al. 1998). Unilateral neglect, usually following right hemisphere damage, often including posterior parietal regions, is characterized by deficits in visual attention or difficulties in performing visuomotor tasks, even though the primary sensory pathways are still intact (for review see Driver and Vuilleumier 2001).
Separate anatomical regions within the monkey PPC, and particularly within the intraparietal sulcus (IPS), are involved in the control of movements of different body-parts (for reviews see Colby and Goldberg 1999; Andersen and Buneo 2002; Cohen and Andersen 2002). For example, the lateral intraparietal area (LIP) is closely linked to the generation of saccadic eye-movements (Barash et al. 1991); the anterior intraparietal area (AIP) is involved in the control of grasping movements (Sakata et al. 1995); and the medial intraparietal area (MIP) and area V6a (Galletti et al. 1999), which together constitute the functionally defined parietal reach region (PRR; Snyder et al. 1997), represent arm reaching movements. Homologous areas have been suggested in the human brain (for reviews see Grefkes and Fink 2005; Culham and Valyear 2006; Culham et al. 2006). Recently, a gradual transition from saccade preference to reach preference was found along the occipito-parieto-frontal axis (Levy et al. 2007).
Previous studies have tried to assess the coordinate system used in visuomotor tasks. In most parietal areas, both the location of the target and the action are represented in the same reference frame, which in many cases is eye-centered (monkeys: Duhamel et al. 1992; Batista et al. 1999; Buneo et al. 2002; Cohen and Andersen 2002; humans: DeSouza et al. 2000; Sereno et al. 2001; Medendorp et al. 2005). Simple stimuli and tasks were typically used in parietal studies of upper limb movements, such as pointing toward light spots on a screen (e.g., monkeys: Cohen and Andersen 2002; Battaglia-Mayer et al. 2005; humans: Kertzman et al. 1997; DeSouza et al. 2000; Medendorp et al. 2005; Beurze et al. 2007; Fernandez-Ruiz et al. 2007). A few other human studies included more complex paradigms, such as reaching toward pictures of objects (Chao and Martin 2000; James et al. 2002; Valyear et al. 2006; Filimon et al. 2007) or grasping simplified 3-dimensional (3D) objects (Culham et al. 2003; Frey et al. 2005; Cavina-Pratesi et al. 2007). Studies focusing on visual (and/or spatial)-effector interactions during reaching manipulated both the position of a light target (relative to the fixation point and/or on the screen) and the acting-hand (Kertzman et al. 1997; DeSouza et al. 2000; Medendorp et al. 2005; Beurze et al. 2007). Topographical regions in the IPS were found to be modulated by the identity of the acting-hand, being more strongly activated by the contralateral hand (Medendorp et al. 2005; Beurze et al. 2007), whereas the activity in a parietal pointing area was modulated by either the retinotopic or spatial positions of the target (DeSouza et al. 2000). 
However, the activation patterns that are associated with visual-to-motor transformation within the various parietal subregions are obscure, and the relative weight of the visual and motor aspects in the representation within each subregion is not yet known, as previous studies of visuomotor transformations did not address these issues directly. Here, we aimed to characterize and quantitatively evaluate regions of visuomotor integration, during more complicated movements involving natural visually guided prehension (reaching and grasping) of 3D tools. By manipulating the visual-field position of the tools and the hand acting on them, we were able to characterize the representation of the visual and motor elements in different areas along the IPS.
Ten male volunteers with no history of neurological, psychiatric, or visual deficits (aged 21–31 years) participated in the present experiment. All subjects were unambiguously right handed according to the Edinburgh Handedness Inventory (Oldfield 1971) (0.7–1, on a [−1 to 1] scale), and all were taller than 1.80 m, thus able to reach the tools positioned at the scanner opening (see below). Written informed consent was obtained from each subject. The Tel-Aviv Sourasky Medical Center Ethics Committee approved the experimental procedure.
We aimed to mimic natural situations in which people reach and grasp tools, not merely point at them. Because reaching movements may lead to undesired head movements in the magnet, a special radiotherapy mask (Uni-frame thermoplastic masks, Medtec, Orange City, IA) was individually prepared for each subject, according to his head and face structure. The mask was anchored to a custom-made head-apparatus secured to the head coil. This procedure indeed minimized head movements, and only scans with head movements <3 mm (as measured during data preprocessing) were analyzed. Using the head-apparatus, subjects’ heads were tilted to allow direct gaze at the fixation point and peripheral tools without seeing their resting hands (Fig. 1B). This procedure, which avoided the use of a mirror, has the obvious advantage of eliminating the need for additional visuomotor transformations (Binkofski et al. 2003; Culham et al. 2003; Culham et al. 2006; Cavina-Pratesi et al. 2007; Filimon et al. 2007). The disadvantage of the special head-mask and head-apparatus, combined with the unusual gaze angle, is that they made eye-movement recording during the scan practically impossible. We therefore addressed the potential confounding effect of unwanted eye-movements on the results by using a theoretical post hoc model (Supplementary Data), in addition to the subjects’ extensive training to maintain fixation.
Experimental Setup and Conditions
The experiment included 4 conditions in a 2-by-2 design, manipulating the acting-hand and the visual-field in which the objects were placed. During separate blocks, subjects were required to use either their right or left hand in order to reach and grasp the handles of metal-less 3D tools located in the right or left periphery, while maintaining central fixation (Fig. 1A).
During the experiment, subjects lay in the magnet in a supine position with their head fixed and their hands positioned on a table above their thighs, ∼75 cm distant from their eyes (Fig. 1C). A custom-made “roller” apparatus was placed on the table (somewhat resembling the “grasparatus” of Culham et al. 2003; Cavina-Pratesi et al. 2007; Fig. 1C,D). In each trial, the roller was rotated by 1 step, thus exposing 1 novel object for the subject to act upon. An opaque screen blocked the subject's view of the magnet room, preventing him from seeing the experimenter attaching and removing tools from the unseen panels of the roller. A window cut in the screen allowed the subject to view only 1 panel of the roller at a time (Fig. 1D). During the rest periods, the subjects’ hands were placed on the table, in front of the roller, each above the thigh of the same side, and out of the subjects’ sight.
Subjects were extensively trained outside the scanner to grasp tool handles using a precision grip (between thumb and index finger), without disconnecting the tools from the roller. They were instructed to grasp the tools according to the tools’ orientation, using their right hand when the handle pointed right and their left hand when the handle was oriented to the left, regardless of the relative location of the tools and the fixation point. In conditions in which the hand crossed the body midline, subjects were trained to perform movements such that the acting-hand would not pass through the observed part of the “tool-less” visual-field.
Experimental Paradigm and Stimuli
The experiments were carried out using a block design (12 s block, 9 s rest, last rest 15 s). The 4 conditions were interleaved, and each was repeated 6 times with different tools in a counterbalanced manner; thus, subjects did not know which block and which specific tool would come next. A dummy block, lasting 12 s, was inserted at the beginning of the experiment and removed during the analysis. Each block consisted of 4 trials, with a different tool in each trial (∼3 s per tool). Tools’ orientation and location were maintained within a block, so the same visual-field was stimulated and the same hand was used throughout a block. Twenty-four different mock tools were used as stimuli (length 8–13 cm, width 3–6 cm, thickness 0.2–2 cm), presented such that their center was 6–7° into the visual periphery. Tools were selected such that the handle and the functional part of the tool could be easily distinguished (e.g., toothbrush, comb, plastic spoon, or plastic wrench). Each tool was presented 4 times, once in each of the 4 conditions. During the rest periods, subjects placed both arms on the table and maintained central fixation. Auditory instructions, delivered by earphones, indicated to 1 experimenter when to roll the roller, while another experimenter placed the tools for the next block, according to the order of blocks.
Magnetic Resonance Imaging Acquisition
The blood oxygenation level–dependent (BOLD) functional magnetic resonance imaging (fMRI) measurements were performed in a whole-body 1.5-T General Electric scanner (Signa Horizon, LX8.25). The fMRI protocols were based on multislice gradient echo-planar imaging and a standard head coil. The functional data were obtained under optimal timing parameters: time repetition (TR) = 3 s, time echo (TE) = 55 ms, flip angle = 90°, imaging matrix = 80 × 80, field of view (FOV) = 24 cm. The 27 slices (slice thickness 4 mm, 1 mm gap) were oriented in an oblique axis in order to cover the whole brain and cerebellum (excluding the anterior pole of the temporal lobe in some subjects). The functional voxels were thus 3 × 3 × 5 mm3. Immediately after each functional run, 27 T1 images were acquired in the same orientation and location as the functional images. After the functional runs, 128 high-resolution (voxel size = 1 × 1 × 1 mm3) T1-weighted spoiled gradient-recalled images were recorded.
Data analysis was performed using the BrainVoyager 4.96 and BrainVoyager QX software packages (Brain Innovation, Maastricht, The Netherlands, 2000). The functional images were superimposed on 2-dimensional (2D) anatomical images and incorporated into the 3D data sets through trilinear interpolation. Prior to statistical analysis, raw data were examined for motion and signal artifacts. Head motion correction and high-pass temporal filtering in the frequency domain (3 cycles/total scan time) were applied in order to remove drifts and to improve the signal-to-noise ratio. The complete data set was transformed into Talairach space (Talairach and Tournoux 1988), Z-normalized and spatially smoothed by a Gaussian kernel of 3 mm. The cortical surface of each subject was reconstructed from the 3D Talairach normalized brain. The procedure included segmentation of the white matter using a grow-region function, the smooth covering of a sphere around the segmented region, and the expansion of the reconstructed white matter into the gray matter.
Regions of Interest
Regions of interest (ROIs) were defined using anatomical markers of sulci and gyri on the 3D inflated brain of each subject. Five ROIs were defined for each subject in each hemisphere along the occipito-parieto-frontal cortex, corresponding to areas expected to be involved in the visuomotor pathway (see Table 1). Although the hand region within primary motor cortex is well defined in the human cortex, regions along the IPS are not clearly delineated in the human brain. To avoid errors, we will use throughout the manuscript the nomenclature of Culham et al. (2006) adding an “h” prefix to denote the putative human functional equivalent areas to those of the macaque regions in the parietal lobe. The ROIs, from anterior to posterior, are (see Fig. 2):
1. M1: The hand area of the primary motor cortex (M1) was defined along the central sulcus, from the middle of the central sulcus to the posterior part of the precentral gyrus (Weinrich and Wise 1982), including the “omega-shaped” hand-area in humans (Moore et al. 2000). The center of mass of M1, as anatomically defined here, corresponds well with that reported in many previous imaging studies (e.g., Medendorp et al. 2005; for review see Picard and Strick 2001; and for a meta-analysis of the human motor cortex see Mayka et al. 2006).
2. hAIP: The human homolog of AIP was defined as the region extending ∼1 cm posterior from the junction between the postcentral sulcus and the anterior edge of the intraparietal sulcus (or the projection of the IPS) including both banks of the IPS. This ROI locus is based on accumulating imaging studies in humans, suggesting that this area is involved in visually guided grasping (for review, see Culham et al. 2006). The Talairach coordinates of hAIP in this study are similar to those described by others (e.g., Culham et al. 2003; Frey et al. 2005; Shmuelof and Zohary 2005, 2006; Cavina-Pratesi et al. 2007; Filimon et al. 2007).
3. hMIP: Due to the technical challenges (such as head movements) that accompany reaching movements in human neuroimaging studies, reaching has been studied much less than saccades or grasping, and was sometimes not differentiated from pointing movements (Grefkes and Fink 2005; Culham and Valyear 2006; Culham et al. 2006). We defined hMIP from the fundus of the IPS, expanding 1–2 cm medially into the superior parietal lobule (SPL), and extending from the posterior edge of the hAIP ROI ∼2 cm along the posterior axis. Using this definition, the center of mass of the current hMIP is in good concordance with that described by others (Grefkes et al. 2004; Prado et al. 2005; Culham et al. 2006).
4. h cIPS: As with MIP, relatively few experiments have studied the putative human caudal-IPS (cIPS), and the precise definition of its anatomical position varies slightly between studies (Shikata et al. 2001; James et al. 2002; Shikata et al. 2003; Valyear et al. 2006). In the present study, h cIPS was defined as the posterior third of the IPS, including its banks, small divisions, and the neighboring gyri. This locus of the h cIPS extends 2–3 cm posterior from the projection of the PO sulcus on the IPS. The Talairach coordinates of this locus are similar to those previously described by others (Culham et al. 2003; Valyear et al. 2006).
5. SOG: A visual–dorsal ROI in the superior-occipital gyrus (SOG) within the occipital lobe, was defined from ∼1 cm posterior to the PO, and from the medial wall laterally, roughly corresponding to the location of V3A/V7 (Tootell et al. 1997, 1998).
| ROI | Hemisphere | Center of mass (x, y, z) | SD (x, y, z) | Anatomical ROI size (mm³) | Functional cluster size (mm³) |
|---|---|---|---|---|---|
| SOG | L | 13, −86, 17 | 5, 5, 5 | 700 | 347 |
| SOG | R | −4, −82, 18 | 3, 4, 8 | 1077 | 520 |
| h cIPS | L | 26, −75, 24 | 4, 5, 4 | 1465 | 1000 |
| h cIPS | R | −25, −75, 23 | 2, 1, 2 | 1368 | 280 |
| hMIP | L | 31, −55, 47 | 5, 7, 4 | 1480 | 685 |
| hMIP | R | −22, −57, 54 | 2, 1, 1 | 1550 | 182 |
| hAIP | L | 36, −41, 46 | 3, 4, 7 | 945 | 338 |
| hAIP | R | −29, −43, 50 | 2, 2, 2 | 600 | 94 |
| M1 | L | 34, −27, 54 | 3, 4, 3 | 1149 | 473 |
| M1 | R | −31, −26, 55 | 3, 3, 2 | 1671 | 337 |
Note: The center of mass of Talairach coordinates (mm) and cluster size (mm3) of the ROIs are listed for each area and hemisphere. The clusters within each anatomical ROI were based on contiguous voxels that showed significantly greater activation in all conditions relative to rest.
Activation and Time Courses
A general linear model was used to generate statistical parametric maps. The hemodynamic response function was modeled using standard parameters (Boynton et al. 1996). Significance levels were calculated for each subject and contrast, taking into account the probability of a false detection for any given cluster (cluster size correction) (Forman et al. 1995), based on Monte Carlo simulation incorporated in BV QX.
For each subject, we chose clusters within the anatomically defined ROIs that were significantly more active during all conditions compared with rest (P < 0.005, Z > 2.85, cluster size corrected; see Table 1). By comparing all 4 conditions to rest, no a priori preference was given to one condition over the others. The group average center of mass of these individually based functional-and-anatomical ROIs differed from the group mean anatomical one by no more than 2 mm in each direction.
The time course of activation (i.e., the change from baseline activation) was calculated for each condition. Baseline activation was defined as the average signal at time points just before (TR −1) and at block onset (TR 0). Data from individual subjects for whom the tested contrast yielded clusters that were smaller than 100 active (1 × 1 × 1 mm3) voxels within the anatomically defined ROIs were excluded from the group average. Thus, N = 10 in each ROI, except for R-SOG (N = 9) and L-SOG (N = 8).
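The baseline-normalization step described above can be sketched as follows. This is a minimal illustration only: the function name and the toy numbers are invented for the example, not taken from the study's data; only the baseline definition (the mean of the samples at TR −1 and TR 0) comes from the text.

```python
import numpy as np

def percent_signal_change(bold, onset_tr):
    """Convert a raw BOLD series to percent signal change relative to a
    pre-block baseline: the mean of the samples just before (TR -1) and
    at block onset (TR 0), as described in the text.

    bold     : 1-D array of raw BOLD values, one sample per TR
    onset_tr : index of the block-onset sample (TR 0)
    """
    baseline = np.mean(bold[onset_tr - 1 : onset_tr + 1])
    return 100.0 * (bold - baseline) / baseline

# Toy series: a flat baseline of 1000 followed by a small response.
series = np.array([1000.0, 1000.0, 1010.0, 1020.0, 1015.0])
psc = percent_signal_change(series, onset_tr=1)
# The two baseline samples map to 0% by construction; the peak sample
# (1020) corresponds to a 2% signal change.
```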
Average Percent Signal Change and Multiple Comparisons
For each ROI and subject, the percent signal change was averaged over 6–15 s after condition onset, separately for each condition, while taking into account the hemodynamic delay. All statistical analyses (1- and 2-way repeated measures ANOVAs and post hoc multiple comparisons) and indices calculations were based on these average percent signal change measures. When a 1-way ANOVA showed a significant difference between the 4 conditions, a multiple comparisons post hoc test was carried out (Tukey's least significant difference procedure), to determine whether a given condition significantly differed from each of the 3 other conditions.
Individual Activation Maps
For each of the 10 subjects, a set of four “2-set relative contribution” statistical parameter maps (BV QX) was calculated, separately for each hemisphere and set of 2 conditions, illustrating the change in 1 of the experimental elements (e.g., hand identity) while holding the other element (e.g., visual-field) constant (e.g., the 2-set relative contribution map of the CHCVF and IHCVF conditions; Fig. 4). Highlighted voxels are those for which the variance explained by the 2 predictors was significant, whereas the color denotes the relative weight between the 2 conditions. Although direct contrasts show only voxels in which one condition generates significantly greater activation than the other, the 2-set relative contribution maps show all voxels in which the 2 conditions explain a significant portion of the total variance, even when there is no difference between their relative contribution. Thus, this method is suitable for a quantitative visualization of gradients.
Visual-Field and Motor Laterality Indices
Laterality indices were calculated by subtracting the responses to ipsilateral conditions from the responses to contralateral ones, and dividing this difference by the sum of all conditions. Hence, the indices range from −1 (purely ipsilateral representation) to 1 (purely contralateral representation). Two indices were calculated for each bilateral ROI: a Motor Index [MI = (CH − IH)/(CH + IH)] and a Visual Index [VI = (CVF − IVF)/(CVF + IVF)], where C = contralateral, I = ipsilateral, VF = visual-field, and H = hand. Notice that each component in the formulas is shorthand for the summation of 2 different conditions. For example, CH includes the 2 conditions in which the acting-hand was contralateral to the ROI, that is, CHCVF and CHIVF. Therefore, the denominator is the same in both indices and includes all 4 conditions, though named differently according to the numerator. Indices were calculated for each subject and hemisphere separately, and averaged over corresponding ROIs in the 2 hemispheres (N = 17 values for SOG, N = 20 values for each other ROI). Note that an index value, such as MI (or VI), is monotonically related to the laterality ratio: C/I = (1 + MI)/(1 − MI). For example, when the contralateral and ipsilateral representations are identical, MI equals 0 and the laterality ratio C/I is 1; when the magnitude of contralateral activation ≫ ipsilateral activation, MI ∼ 1 and C/I tends to infinity (Fig. 5, right side ordinate).
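The index computation and its relation to the laterality ratio can be sketched in a few lines. Only the MI/VI formulas and the C/I relation come from the text above; the function names and the numeric values are illustrative.

```python
def laterality_indices(ch_cvf, ch_ivf, ih_cvf, ih_ivf):
    """Motor and Visual laterality indices for one ROI, following the
    formulas in the text. Inputs are average percent-signal-change
    values for the 4 conditions (CH/IH = contra-/ipsilateral hand,
    CVF/IVF = contra-/ipsilateral visual-field)."""
    ch = ch_cvf + ch_ivf            # both contralateral-hand conditions
    ih = ih_cvf + ih_ivf            # both ipsilateral-hand conditions
    cvf = ch_cvf + ih_cvf           # both contralateral-field conditions
    ivf = ch_ivf + ih_ivf
    mi = (ch - ih) / (ch + ih)      # Motor Index
    vi = (cvf - ivf) / (cvf + ivf)  # Visual Index (same denominator)
    return mi, vi

def ratio_from_index(idx):
    """Laterality ratio C/I, monotonically related to the index."""
    return (1 + idx) / (1 - idx)

# Illustrative values: ch = 2.2, ih = 0.8, so MI = 1.4/3.0;
# cvf = 1.7, ivf = 1.3, so VI = 0.4/3.0.
mi, vi = laterality_indices(1.2, 1.0, 0.5, 0.3)
```

Note that `ratio_from_index(mi)` recovers ch/ih directly (2.2/0.8 = 2.75 for the values above), which is why the index and the laterality ratio carry the same information.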
For illustration purposes, we calculated a threshold index value above which the (visual or motor) index significantly differs from 0 (according to the standard deviation of each index, its sample size, and the t-statistic corresponding to alpha = 0.05). Because the thresholds were essentially the same for all indices (mean, 0.1; SD, 0.03), data across regions were pooled to obtain an average threshold (dotted line in Fig. 5).
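The threshold construction can be sketched as a one-sample t-test criterion. This is a sketch under the assumption of a two-sided test; the exact test details and the numbers used below are not specified in the text.

```python
from math import sqrt
from scipy.stats import t

def index_threshold(sd, n, alpha=0.05):
    """Smallest mean index that differs significantly from 0 in a
    (assumed two-sided) one-sample t-test: t_crit * SD / sqrt(N),
    mirroring the threshold described in the text."""
    t_crit = t.ppf(1 - alpha / 2, df=n - 1)
    return t_crit * sd / sqrt(n)

# Illustrative call: for an index with SD = 0.2 over N = 20 values,
# the threshold is roughly 0.09, in the ballpark of the reported ~0.1.
th = index_threshold(sd=0.2, n=20)
```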
Motor and Visual Aspects of Representation along the Visuomotor Pathway
The experimental task included 4 conditions in which reaching and grasping with either the right or left hand was performed to mock 3D tools located either to the right or to the left of a central fixation point (Fig. 1A). To study the relative contribution of the visual and motor aspects in various regions along the visuomotor pathways, we applied a ROI analysis method, selecting voxels in anatomically restricted regions along the visuomotor pathway which had significant fMRI activation in all conditions relative to the rest period (i.e., fixation period). This approach assured that no condition was favored over the others. Anatomical localization of the various ROIs was according to the individual structure of sulci and gyri of each subject (see Methods and Table 1 for further details).
Comparing all conditions to rest led to widespread activation over the cortex, as illustrated in an example from a typical subject shown in Figure 2 (inflated brain, P < 0.005; cluster size corrected). The accompanying activation time courses (Fig. 2) represent the group average hemodynamic response in each of the ROIs (see Methods). We will first describe the results qualitatively.
As expected, in both hemispheres, the strongest activation in the most anterior ROIs (i.e., M1) was for the 2 conditions in which the contralateral hand acted, color coded in green (dark and light green lines in the hemodynamic responses in Fig. 2). For instance, the highest activation in L-M1 was for the 2 conditions in which subjects used their contralateral (right) hand to reach and grasp tools, regardless of the tools’ location (in the right or left visual-field). The equivalent pattern was seen in R-M1.
The complementary phenomenon was found in the posterior ROIs located in the dorsal visual pathway in the occipital cortex (i.e., SOG). These areas showed a stronger BOLD response to the 2 conditions in which tools were located in the contralateral visual-field, regardless of the hand used (see hemodynamic responses; dark blue and dark green in Fig. 2).
The picture was more complicated in the IPS, where in some of the regions it was not easy to identify contralateral visual or motor preferences, presumably due to additional visuomotor interactions, as shown below. Moreover, close inspection of Figure 2 shows that the condition that was contralateral in both the motor and visual components of the task, namely the Contra-Hand Contra-Visual-Field condition (CHCVF; dark green lines), elicited the strongest response in all ROIs.
To better visualize the effects and assess their statistical significance, we averaged in each subject the percent signal change over 6–15 s after epoch onset to obtain the average response to each condition in each ROI (taking the hemodynamic delay into account). No significant difference was found between the BOLD response for any pair of analogous conditions in corresponding ROIs between hemispheres (paired t-tests, P > 0.1 for all comparisons). Thus, all subsequent analyses were performed on bilaterally pooled data.
Figure 3 depicts bilateral group averages, averaged across all subjects, and their 95% confidence limits. In all ROIs the strength of the responses was significantly different between conditions (1-way ANOVA, P < 0.01), but the factors contributing to the variance in the response changed from region to region. As expected, M1 displayed a strict “motor-effect,” representing movements of the contralateral hand, irrespective of the visual-field location of the target (2-way ANOVA, motor effect, P ≪ 0.001, Table 2). Note that the 2 movements performed by the contralateral hand (i.e., CHCVF vs. CHIVF) did not significantly differ in M1 (t-test, P > 0.1). In contrast, SOG strictly showed the complementary “visual-field effect” (2-way ANOVA, visual effect, P ≪ 0.001, Table 2). Similarly, the fMRI signal in SOG was not affected by the grasping-hand identity when placed in the contralateral visual-field (t-test between CHCVF and IHCVF; P > 0.05).
Note: P values of 2-way ANOVA tests performed on bilaterally pooled data (N = 17 in SOG and N = 20 in each of the other ROIs). Significant values are marked in bold.
The contralateral preference seen in the most extreme ROIs tested along the visuomotor pathway was evident also in the regions within the IPS, although in a less obvious manner, most likely due to additional visuomotor interactions. The contralateral hand activation was significantly stronger than the ipsilateral one not only in M1, but also in the anterior parts of the IPS: hAIP and hMIP (2-way ANOVA, motor effect, P < 0.01, Table 2). This acting-hand dependence was not significant in the more posterior ROIs (i.e., h cIPS and SOG). The contralateral visual-field position elicited significantly stronger fMRI signals than the ipsilateral one in the h cIPS (2-way ANOVA, visual effect, P < 0.005, Table 2) but not in the more anterior ROIs. Therefore, there was a gradual change from a representation of the contralateral visual location of the tool to be grasped in the posterior ROIs (SOG and h cIPS), to a representation of the contralateral acting-hand in the anterior ROIs (hMIP, hAIP, and M1). In other words, along the IPS, mapping was based on contralateral preference (as in SOG and M1), with a varying contribution of the contralateral visual-field or the contralateral acting-hand among the regions.
Moreover, a significant visuomotor interaction was found in all 3 bilateral ROIs in the IPS: h cIPS, hMIP, and hAIP (2-way ANOVA, interaction effect, P < 0.05, Table 2), but not in the anatomically extreme ROIs, M1, and SOG (P > 0.1, Table 2). This suggests that each of the ROIs within the IPS does not only represent visual or motor information, but is rather involved in the integration of both types of information.
It is also clear from Figure 3 that the response to CHCVF is larger than that to each of the other conditions. This effect is statistically significant only in h cIPS and hMIP, as indicated by the 3 other conditions falling outside the 95% confidence limit of the response to CHCVF in these regions (marked by the vertical dotted lines; 1-way ANOVA, corrected for multiple comparisons, P < 0.01).
Two Opposite Visual and Motor Gradients
To visualize the anatomical distribution of motor and visual representations along the posterior–anterior axis on a subject-by-subject basis, we generated for each subject maps that measure the relative contralateral dominance of the acting-hand (Fig. 4A) or the visual-field (Fig. 4B), while holding the other element constant (see Methods). Although there is considerable intersubject variability, a gradual shift from visual-to-motor representations is evident along the posterior–anterior axis. Moreover, both visual and motor elements are represented in the parietal regions of most subjects.
To quantify the shift of the visual and motor effect sizes along the visuomotor pathway, in the face of the intersubject spatial variance in the location of sulci and the position of the activation foci relative to them, we computed a motor laterality index (MI) and a visual laterality index (VI) for the discrete bilateral ROIs (see Methods). Indices range from −1 (a purely ipsilateral representation) to 1 (a purely contralateral representation), with a value of 0 indicating no contra- or ipsilateral preference (i.e., that action with either of the 2 hands generated an equal response, or that there was no preference for one visual hemifield over the other).
Figure 5 shows the motor and visual indices calculated for each of the ROIs. Notice that all indices were positive, indicating a general trend for contralateral representations. However, the indices were not significantly different from 0 in all cases, as indicated by the dotted line (corresponding to a significance level of alpha = 0.05, see Methods). High MI values were found in the anterior ROIs: M1, hAIP, and hMIP, whereas values near 0 were found in the posterior ROIs. This is consistent with the 2-way ANOVA results, showing a significant motor effect only in these anterior ROIs. Moreover, the quantitative nature of the indices revealed a specific motor gradient within the anterior ROIs: the highest MI was obtained in M1 (0.72), with progressively smaller MI values in hAIP and hMIP. The opposite pattern was found for the VI, with the highest values in SOG and a smaller VI in hcIPS. Insignificant VI values were found in the more anterior areas (hMIP, hAIP, and M1).
To summarize, 2 opposite gradients were found along the occipito-parieto-frontal cortex: a descending visual gradient, from the posterior to the anterior ROIs; and an opposite motor gradient, ascending from the posterior to the anterior ROIs (Fig. 5).
The goal of this study was to assess the relative contribution of the visual and motor elements in visuomotor processing. To this end, we designed a visually guided reaching and grasping task and compared conditions in which either the contralateral or ipsilateral acting-hand was used to reach and grasp objects presented in either the contralateral or ipsilateral visual-field. As expected, occipital and frontal regions were involved in the exclusive representation of visual or motor task elements, respectively. In contrast, both visual and motor elements affected the response in the parietal cortex, but the relative weight of the 2 components differed among the regions within the IPS. This change in relative weights was much more evident in the ROI analysis than in the individual subject maps, probably due to intersubject spatial variance in the location of the ROIs. Although previous studies have suggested the existence of a visuomotor gradient (for example, Ellermann et al. 1998; Battaglia-Mayer et al. 2006), this issue was not tested directly. We found evidence for 2 opposite gradients along the posterior to anterior axis in the IPS: the weight of the acting-hand gradually increased from the hcIPS via the hMIP to the hAIP. In contrast, the importance of the visual-field position of the target objects gradually decreased along the same axis.
Moreover, we found an acting-hand by visual-field interaction term in all 3 IPS regions (hcIPS, hMIP, and hAIP), and only in these regions. This indicates that the parietal cortex integrates both visual and motor information, rather than representing either of the single sources, suggesting parietal involvement in visuomotor transformations. Generally, an interaction term is interpreted as a nonlinear combination of the individual factors. In this study, movements of the contralateral acting-hand to the contralateral visual-field (condition CHCVF) elicited a larger BOLD response than any of the other conditions in the IPS (in hcIPS and hMIP). This response could not be explained by the additive effect of the 2 factors. Thus, we found evidence that a specific combination of the object's position in space and the hand used to manipulate it affects parietal activity. This may be due to the overrepresentation of movements to the contralateral space by the contralateral limb, as found in the monkey PPC (Battaglia-Mayer et al. 2005). Such brain activation may also be related to the tendency to grasp objects located in peripersonal space with the hand that is closer to the object (e.g., using the right hand for grasping objects on the right side of the body, and vice versa; Scherberger et al. 2003). In humans, such movements are initiated more quickly, completed more rapidly, and performed more accurately (Fisk and Goodale 1985). Anecdotally, this behavior has its parallel in Hebrew: the phrase “leyad,” meaning “next to,” originated from the same phonological root as the word “yad,” which means “hand,” and literally means “to (the) hand.”
Taken together, our results show that unlike the occipital or frontal cortex, both visual and motor aspects of visually guided reaching and grasping are represented in the human IPS, supporting its role as a network engaged in visuomotor transformations. However, within the IPS, the representation changes from a visually dominant to a motor-driven one. The gradual change in the relative weights of the visual and motor attributes along the IPS suggests that areas within this network encode different aspects of visuomotor transformations, as described below.
Possible Confounding Factors
We first address the possibility that factors other than the visual-field and acting-hand may have influenced the results. For example, it may be argued that conditions in which one hand grasped tools in the right visual-field differed from conditions in which the same hand grasped tools placed in the left visual-field, both in the (highlighted) visual aspects and in the (ignored) motor aspect of movement direction. It could therefore be argued that the ascribed visual effect actually results from motor aspects. However, we note that given the known characteristics of the representation of limb movement in the cortex, this is highly unlikely. Neurons in the motor cortex of the monkey typically have a preferred direction of movement (Georgopoulos et al. 1988), with the representation of preferred directions being uniform over the population (Schwartz et al. 1988). A similar uniform distribution of representation of movements of the contralateral hand was found in areas within the parietal cortex (PRR: Quian Quiroga et al. 2006; SPL: Battaglia-Mayer et al. 2006; V6A: Fattori et al. 2005). Furthermore, although the preferred directions of neighboring neurons are somewhat more similar than expected by chance (Ben-Shaul et al. 2003), such nonrandom mapping cannot be detected at the resolution of standard fMRI (millimeters, at which a single voxel contains millions of neurons). Moreover, movements in which the acting-hand crossed the midline are of somewhat larger amplitude, and it was previously shown that movements of larger amplitude elicit stronger fMRI responses (Waldvogel et al. 1999). However, we did not observe any differences in the BOLD response in primary motor cortex during movements of the contralateral acting-hand to the right or left visual-fields. We therefore rule out the possibility that visual-field effects are due to differences in movement direction or amplitude.
Another potentially troubling issue is that unintended eye-movements may have affected the results. After all, grasping a tool placed in a peripheral location may be accompanied by a saccadic eye-movement toward that location (Land et al. 1999; Prado et al. 2005). The visual-field effect may therefore be due to saccade specificity or gaze position specificity (as has been extensively reported by Andersen et al.; for example, see Batista et al. 1999; Cohen and Andersen 2002). We cannot entirely rule out these confounding factors because we did not record eye-movements in the scanner, as it was practically impossible to do so in the presence of the special head-mask and head-apparatus, combined with the unusual gaze angle. However, we note that unintended eye-movements probably did not play a major role in our experiment. First, prior to the scan, all our subjects were extensively trained in maintaining central fixation while reaching and grasping the tools, and thus saccades were expected to be rare events. Indeed, the subjects reported that they managed to maintain fixation throughout the experiment. But because it may still be possible that they were not aware of their saccadic behavior, we also designed a simple statistical model taking into account the measured fMRI noise levels. Using this model, we tested whether the visuomotor interaction effects found along the IPS would remain significant even if saccades occurred in a given proportion of the trials (see details in Supplementary Data). The results of this analysis indicate that saccades would have had to occur in about 45%, 5%, and 20% of the trials to explain the interaction effects found in hcIPS, hMIP, and hAIP, respectively. We therefore suggest that the observed interaction effects are unlikely to result from saccadic eye-movements.
Second, had subjects made saccades toward the peripheral tools, we would not have been able to see a clear preference in the BOLD response to tools presented in the contralateral side in retinotopic visual areas (see Supplementary Fig.).
On a related note, in conditions in which the acting-hand crossed the midline, the acting-hand could potentially have passed through the observed part of the “tool-less” visual-field along its path. To prevent this, subjects were trained to perform their tool-grasping movements without entering this visual-field. The clear preference for tools presented in the contralateral side in retinotopic visual areas (Supplementary Fig.) suggests that this was not a serious problem. Such movements, in which the hand crossed the midline, may require extra “cognitive supervision,” thereby generating greater fMRI activation. However, we found that the summed activation during the noncrossing conditions (i.e., CHCVF + IHIVF) is in fact greater than during midline crossing (i.e., CHIVF + IHCVF). This reverse effect, corresponding to a positive interaction term, is statistically significant in the ROIs along the IPS, indicating that the interaction does not emerge from differences in cognitive load.
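This control amounts to a sign check on the interaction term: a cognitive-load account predicts crossing > noncrossing, whereas a positive term means the reverse. With illustrative (not measured) condition means:

```python
# Hypothetical mean BOLD responses (arbitrary units) per condition:
chcvf, chivf, ihcvf, ihivf = 1.4, 0.9, 0.8, 0.7

noncrossing = chcvf + ihivf   # hand stays on its own side of the midline
crossing = chivf + ihcvf      # hand crosses the midline
interaction = noncrossing - crossing

# A positive interaction (noncrossing > crossing) is the opposite of what
# an extra-cognitive-load account of midline crossing would predict.
assert interaction > 0
```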
Another potential concern is that because the orientation of a tool's handle determined the acting-hand in the current experiment, the reported acting-hand effects could potentially result from visual differences in the tool's appearance. However, such orientation specificity can only be detected when using nonstandard fMRI techniques such as fMRI adaptation (Tootell et al. 1998; James et al. 2002; Valyear et al. 2006).
Finally, throughout the manuscript, we referred to the visual-field in which the tools were positioned. Because the fixation point was centrally located, the relevant coordinate system of the visual position (e.g., retinal-, body-, or head-based) cannot be determined. However, this does not affect the result showing a change in the relative weight of the visual versus motor aspects of visuomotor function along the parietal cortex.
To summarize, we believe that none of the factors that could potentially have confounded our results is of material significance. We next discuss the observed pattern of activation in each of the regions within the IPS in light of previous knowledge of their function.
hcIPS Maps Visual Elements and Visuomotor Transformations
Neuronal activity in the monkey cIPS depends on the shape and orientation of the visual stimuli (Sakata et al. 1999; Tsutsui et al. 2003). In humans, the putative homolog of monkey cIPS is functionally defined as the area within the posterior part of the IPS that is selective to the visual orientation of textures (Shikata et al. 2001, 2003), irrespective of whether action (mimicking the same orientation) is required or not. Voxels in cIPS show clear fMRI adaptation to repeated presentation of pictures of 3D tools, as long as the orientation of the tool's handle is maintained (regardless of whether the tool's identity is maintained or not; James et al. 2002; Valyear et al. 2006). The representation in regions corresponding anatomically to hcIPS is based on retinotopic coordinates, mapping the contralateral visual-field (Silver et al. 2005).
Using a visuomotor task, we found a predominant representation of the contralateral visual-field in hcIPS, corresponding to the previously reported retinotopic mapping. In addition, we found an interaction between the visual-field and the acting-hand (though the acting-hand per se did not affect the fMRI signal), emphasizing that the nature of the motor action affects the responses in hcIPS. These findings suggest that the representation in this area is still in retinotopic coordinates rather than motor based.
hMIP Maps Motor Elements and Visuomotor Transformations
MIP neurons are typically direction selective, with their firing rates usually modulated by the movement direction of the arm, although some MIP neurons also show visual selectivity to the same direction of motion (Colby and Goldberg 1999; Eskandar and Assad 2002). The PRR in the monkey, consisting of areas MIP and V6A, is involved in the planning and execution of reach movements with the contralateral limb, more than in saccadic eye-movements to the same target (Snyder et al. 1997). The putative human PRR-homolog (Connolly et al. 2003) has similar characteristics. Using optical left/right reversing prisms, activity in the human PRR was found to be selective to the visual direction of movements and not to the movement direction of the contralateral reaching hand (Fernandez-Ruiz et al. 2007). Furthermore, human MIP is active during movements of the contralateral hand, irrespective of the retinal position of the reaching target (central or peripheral; Prado et al. 2005). Additionally, human MIP shows a greater BOLD response when subjects manipulate a joystick that moves a visual stimulus than when the stimulus moves independently (Grefkes et al. 2004). A mirror neuron area in the human parietal cortex was revealed by fMRI, as it was found to be active during the execution of reaching movements as well as during observed and imagined reaching (though with a weaker activation; Filimon et al. 2007).
The functionally and anatomically defined hMIP in the current study exhibited a preference for the contralateral hand, as may be expected from a human PRR-homolog (although, to our knowledge, hand-specificity in MIP has not been directly tested in monkeys). We found no preference for a specific visual-field, suggesting that both fields are similarly represented in hMIP. Nonetheless, the visual position of the object to be reached affected the fMRI signal through its interaction with the acting-hand. Thus, we propose that although hcIPS can be referred to as a visual area whose activity is modulated by the nature of the motor action, hMIP is a motor area, coding the identity of the reaching hand, whose activity is affected by the visual position of the target.
Visuomotor Transformations and Stronger Representation of Motor Elements in hAIP
Activity in the monkey AIP has been related to grasping movements (Sakata et al. 1995). Similarly, neuroimaging studies have shown activation in the putative human homolog of AIP during grasping with (Culham et al. 2003; Frey et al. 2005) or without visual guidance, with a preference for the contralateral hand (Culham et al. 2006). Moreover, when no movement was required, AIP was active in response to visual stimuli only when these had high affordance for grasping (Chao and Martin 2000; Shikata et al. 2001, 2003; Shmuelof and Zohary 2005; Culham et al. 2006). However, human AIP activation was stronger during movements than during movement imagery, which in turn elicited a stronger response than visual discrimination per se (Shikata et al. 2003). Although AIP shows stronger activation during grasping than reaching, it does not show differential activation when the size of 3D objects is computed for pure perceptual purposes, during pattern discrimination, or during passive viewing of the objects (Cavina-Pratesi et al. 2007).
Consistent with this, we found a strong hand-effect in hAIP. As in hMIP, the visual-field contributed to hAIP activation via the interaction effect, despite the lack of a main visual-field effect. However, the contralateral hand preference was somewhat more pronounced in hAIP than in hMIP. Because grasping requires delicate planning of hand configuration, using more joints than needed for reaching, it seems reasonable that motor aspects would be relatively more dominant in hAIP than in hMIP.
This work extends previous studies which used a similar 2-by-2 paradigm manipulating target visual-field and acting-hand (Kertzman et al. 1997; Medendorp et al. 2005; Beurze et al. 2007). However, there are several methodological differences between the past studies and our study. Previously, 2D dot targets were presented on a screen, whereas we presented subjects with mock familiar 3D tools. Moreover, in previous studies subjects performed pointing (Medendorp et al. 2005) or reaching and touching movements toward the targets (Kertzman et al. 1997), whereas our subjects performed prehension movements, which involve both reaching toward tools and grasping them.
We found that movements of the contralateral hand to the contralateral visual-field elicit a stronger response than movements of the ipsilateral hand to the same contralateral visual-field. This was also seen in the study by Medendorp et al., within the anterior occipital cortex (aOC) and retinotopic IPS (retIPS) regions, which anatomically correspond to our hcIPS and hMIP, respectively (Medendorp et al. 2005). Both aOC and retIPS additionally showed a preference for the contralateral visual-field and significant modulations related to both the acting-hand and target visual location. Our results differentiated between the 2 corresponding regions, demonstrating a contralateral hand-effect in the more anterior region of hMIP (their retIPS), as opposed to the visual-field effect found in the more posterior region of hcIPS (their aOC). One way of interpreting this difference is that our reaching and grasping task required a much more extensive motor plan, and therefore a greater differential signal between the hands, than the mere finger pointing used by Medendorp et al. The additional evidence for visuomotor interactions found in these 2 regions suggests that despite the differential preferences of these regions in the visuomotor process, both regions are involved in integrating visual and motor information.
Visuomotor interaction effects in the IPS were also found by Beurze et al. (2007), with a stronger preference for the contralateral hand (over the ipsilateral hand) than for the contralateral visual-field in which the target was positioned (over the ipsilateral visual-field). The coordinates of the peak activation in their IPS are similar to those of our hMIP.
Stronger involvement of the left hemisphere was previously found in the majority of apraxia/optic ataxia patient studies (Perenin and Vighetto 1988; Koski et al. 2002; Wheaton and Hallett 2007), in which objects were held or the function of the tool had to be appreciated. In our study, although subjects were presented with tools, they were instructed to reach and grasp the tools' handles with a precision grip, without actually picking up the tools or using them. As all mock tools were of similar size and weight, stereotypical grasping was suitable. This may explain the lack of activation asymmetry between hemispheres in our study, in contrast with the abovementioned patient studies. Moreover, as the main focus of our study was on activity within anatomically defined ROIs, differences in the size of purely functionally defined areas between the 2 hemispheres could not be detected. Therefore, although our study supports the hypothesis that both parietal cortices are engaged in visuomotor transformations necessary for visually guided prehension, one cannot conclude that the 2 hemispheres play similar roles in visuomotor transformations.
The visuomotor interactions found in the IPS between the visual-field position and the acting-hand reflect, in our opinion, the process of visuomotor transformations. An open question is which coordinate frame(s) are used to code the information along the IPS. Manipulating the starting position of the acting-hand and/or the location of the fixation point, similar to the experimental paradigms used by others (monkeys: Batista et al. 1999; Buneo et al. 2002; Scherberger et al. 2003; humans: DeSouza et al. 2000), may offer a direct way to examine the coordinate frames used and perhaps reveal additional differences between brain areas involved in natural prehension of 3D tools.
Israel Science Foundation of the Israel Academy of Sciences (grant #8009).
We thank Eran Stark for insightful comments and for help in developing the saccade-induced interaction model, Ya'acov Ritov for help with the statistical modeling, and Lior Shmuelof for insightful comments. We are grateful to Yitshak Simhayoff for constructing the fMRI roller apparatus.
Conflict of Interest: None declared.