Abstract

During daily life, we reach and grasp objects located in a variety of positions in our visual-field. Where is the information regarding the visual (position) and motor (acting-hand) aspects integrated in the brain? To address this question, a functional magnetic resonance imaging experiment was conducted, in which 10 right-handed subjects used their right or left hand to grasp 3-dimensional tools, located to the right or left of a central fixation point. The posterior part of the intraparietal sulcus (IPS), the putative human homolog of caudal-IPS, was found to be primarily involved in representing the visual location of the tools, whereas more anterior regions, the human homologs of the medial intraparietal and anterior intraparietal areas, primarily encoded the identity of the contralateral acting-hand. Quantitative analysis revealed 2 opposite visual and motor gradients along the posterior–anterior axis within the IPS: whereas the importance of the visual-field gradually diminished, the weight of the acting-hand became increasingly greater. Moreover, direct evidence for visuomotor interaction was found in all 3 IPS subregions, but not in occipital or frontal regions. These findings support the hypothesis that the human IPS comprises subregions with different properties, and that it is engaged in visuomotor transformations necessary for visually guided prehension.

Introduction

Think of yourself sitting in a restaurant, eating and chatting with a friend. While looking at your companion, you pick up the spoon placed to the right of your soup bowl or the fork that is to the left of your salad plate, to bring some food to your mouth. This example demonstrates that although we usually tend to gaze directly at the objects of our future actions, we often act on objects even when they are located in the peripheral visual-field. Moreover, we act with either of our hands. Yet, such a behavior is based on a complex process that involves several stages. First, the object (e.g., the spoon) is represented in the visual system, in retinotopic coordinates, according to its position in the visual space. In order to act on what we see, the representation of the object in space needs to be transformed into a framework suitable for motor action with either one of the hands. Finally, a movement can be performed.

How and where is the information about object position and hand identity integrated in the brain? It is well known that the occipital lobe is involved in the visual representation and processing of objects, whereas the frontal lobe is involved in the planning and execution of motor actions. It is generally assumed that the visual–dorsal pathway, leading from the occipital to the parietal lobe, is involved in the visual guidance of actions, determining “how” to interact with objects (Goodale and Milner 1992). Evidence from monkey studies suggests that the posterior parietal cortex (PPC), anatomically located between the occipital and frontal lobes, is involved in combining visual information regarding the shape and the position of the target object with motor information about the acting effector, such as limb position (Sakata et al. 1999; Buneo et al. 2002; Cohen and Andersen 2002).

The involvement of the parietal cortex in visuomotor integration was also revealed by various impairments resulting from lesions in different regions of the human PPC (for reviews see Battaglia-Mayer et al. 2006; Culham et al. 2006). For example, damage to the PPC, particularly in the parieto-occipital (PO) junction, leads to a difficulty in reaching to visual targets, especially when located in the peripheral visual-field, in the absence of specific visual or motor deficits (as in optic ataxia; Rondot et al. 1977; Perenin and Vighetto 1988; Prado et al. 2005). Lesions involving the anterior lateral bank of the intraparietal sulcus lead to deficits in object grasping, whereas reaching is much less disturbed (Binkofski et al. 1998). Unilateral neglect, usually following right hemisphere damage, often including posterior parietal regions, is characterized by deficits in visual attention or difficulties in performing visuomotor tasks, even though the primary sensory pathways are still intact (for review see Driver and Vuilleumier 2001).

Separate anatomical regions within the monkey PPC, and particularly the intraparietal sulcus (IPS) within the PPC, are involved in the control of movements of different body-parts (for reviews see Colby and Goldberg 1999; Andersen and Buneo 2002; Cohen and Andersen 2002). For example, the lateral IPS (LIP) is closely linked to the generation of saccadic eye-movements (Barash et al. 1991); the anterior intraparietal area (AIP) is involved in the control of grasping movements (Sakata et al. 1995); and the medial intraparietal area (MIP) and area V6a (Galletti et al. 1999), which together constitute the functionally defined parietal reach region (PRR; Snyder et al. 1997), represent arm reaching movements. Homologous areas have been suggested in the human brain (for reviews see Grefkes and Fink 2005; Culham and Valyear 2006; Culham et al. 2006). Recently, a gradual transition from saccade preference to reach preference was found on the occipito-parieto-frontal axis (Levy et al. 2007).

Previous studies have tried to assess the coordinate system used in visuomotor tasks. In most parietal areas, both the location of the target and the action are represented in the same reference frame, which in many cases is eye-centered (monkeys: Duhamel et al. 1992; Batista et al. 1999; Buneo et al. 2002; Cohen and Andersen 2002; humans: DeSouza et al. 2000; Sereno et al. 2001; Medendorp et al. 2005). Simple stimuli and tasks were typically used in parietal studies of upper limb movements, such as pointing toward light spots on a screen (e.g., monkeys: Cohen and Andersen 2002; Battaglia-Mayer et al. 2005; humans: Kertzman et al. 1997; DeSouza et al. 2000; Medendorp et al. 2005; Beurze et al. 2007; Fernandez-Ruiz et al. 2007). A few other human studies included more complex paradigms, such as reaching toward pictures of objects (Chao and Martin 2000; James et al. 2002; Valyear et al. 2006; Filimon et al. 2007) or grasping simplified 3-dimensional (3D) objects (Culham et al. 2003; Frey et al. 2005; Cavina-Pratesi et al. 2007). Studies focusing on visual (and/or spatial)-effector interactions during reaching manipulated both the position of a light target (relative to the fixation point and/or on the screen) and the acting-hand (Kertzman et al. 1997; DeSouza et al. 2000; Medendorp et al. 2005; Beurze et al. 2007). Topographical regions in the IPS were found to be modulated by the identity of the acting-hand, being more strongly activated by the contralateral hand (Medendorp et al. 2005; Beurze et al. 2007), whereas the activity in a parietal pointing area was modulated by either the retinotopic or spatial positions of the target (DeSouza et al. 2000). However, the activation patterns that are associated with visual-to-motor transformation within the various parietal subregions are obscure, and the relative weight of the visual and motor aspects in the representation within each subregion is not yet known, as previous studies of visuomotor transformations did not address these issues directly. Here, we aimed to characterize and quantitatively evaluate regions of visuomotor integration, during more complicated movements involving natural visually guided prehension (reaching and grasping) of 3D tools. By manipulating the visual-field position of the tools and the hand acting on them, we were able to characterize the representation of the visual and motor elements in different areas along the IPS.

Methods

Subjects

Ten male volunteers without histories of neurological, psychiatric, or visual deficits (aged 21–31) participated in the present experiment. All subjects were unambiguously right handed according to the Edinburgh Handedness Inventory (Oldfield 1971) (0.7–1, on a [−1 to 1] scale), and all were taller than 1.80 m, thus able to reach the tools positioned at the scanner opening (see below). Written informed consent was obtained from each subject. The Tel-Aviv Sourasky Medical Center Ethics Committee approved the experimental procedure.

Head Position

We aimed to mimic natural situations in which people reach and grasp tools and not only point at them. Because reaching movements may lead to undesired head movements in the magnet, a special radiotherapy mask (Uni-frame thermoplastic masks, Medtec, Orange City, IA) was individually prepared for each subject, according to his head and face structure. The mask was anchored to a custom-made head-apparatus secured to the head coil. This procedure indeed minimized head movements, and only scans with head movements <3 mm (as measured during data preprocessing) were analyzed. Using the head-apparatus, subjects’ heads were tilted to allow direct gaze at the fixation point and peripheral tools without seeing their resting hands (Fig. 1B). This procedure, which avoided the use of a mirror, has the obvious advantage of eliminating the need for additional visuomotor transformations (Binkofski et al. 2003; Culham et al. 2003; Culham et al. 2006; Cavina-Pratesi et al. 2007; Filimon et al. 2007). The disadvantage of the special head-mask and head-apparatus, combined with the unusual gaze angle, is that they made eye-movement recording during the scan practically impossible. We thus addressed the potential confounding effect of unwarranted eye-movements on the results by using a theoretical post hoc model (Supplementary Data), in addition to the subjects’ extensive training to maintain fixation.

Figure 1.

Methods. (A) Cartoon of the experimental design. The experiment included 4 conditions in a 2-by-2 block design with the visual-field and acting-hand as the 2 variables. The direction of the tools’ handle informed subjects which hand to use during each block. Subjects maintained central fixation on the white dot throughout the experiment. In separate blocks, subjects were required to do 1 of the following: use their left hand to reach and grasp handles of tools located in the right visual-field, to the right of a central fixation point (top left); use their right hand to reach and grasp tools in the right visual-field (top right); or use their left or right hand for reaching and grasping tools in the left visual-field (bottom left or bottom right, respectively). (B) Head-apparatus. A subject-specific mask was anchored to a custom-made head-apparatus, which was anchored to the head coil, minimizing head movements. The apparatus also tilted subjects’ heads, allowing them to directly gaze at the central fixation point and view the peripheral tools. (C) Experimental setup, viewed from the scanner room. A roller apparatus was placed on a table positioned above the subjects’ thighs. Central white fixation points were placed on the roller's panels during conditions and rest periods. Each 12 sec block included the sequential presentation of 4 different 3D tools placed on separate panels in the same orientation and location. (D) Experimental setup, from the subject's view. An opaque screen between the roller and magnet bore allowed subjects to view only 1 panel and 1 tool at a time.

Experimental Setup and Conditions

Experimental Procedure

The experiment included 4 conditions in a 2-by-2 design, manipulating the acting-hand and the visual-field in which the objects were placed. During separate blocks, subjects were required to use either their right or left hand in order to reach and grasp the handles of metal-less 3D tools located in the right or left periphery, while maintaining central fixation (Fig. 1A).

During the experiment, subjects lay in the magnet in a supine position with their head fixed and their hands positioned on a table above their thighs, ∼75 cm distant from their eyes (Fig. 1C). A custom-made “roller” apparatus was placed on the table (somewhat resembling the “grasparatus” of Culham et al. 2003; Cavina-Pratesi et al. 2007; Fig. 1C,D). In each trial, the roller was rotated by 1 step, thus exposing 1 novel object for the subject to act upon. An opaque screen blocked the subject's view of the magnet room, preventing him from seeing the experimenter attaching and removing tools from the unseen panels of the roller. A window cut in the screen allowed the subject to view only 1 panel of the roller at a time (Fig. 1D). During the rest periods, the subjects’ hands were placed on the table, in front of the roller, each above the thigh of the same side, and out of the subjects’ sight.

Subjects were extensively trained outside the scanner to grasp tool handles using a precision grip (between thumb and index finger), without disconnecting the tools from the roller. They were instructed to grasp the tools according to the tools’ orientation, using their right hand when the handle pointed right and their left hand when the handle was oriented to the left, regardless of the relative location of the tools and the fixation point. In conditions in which the hand crossed the body midline, subjects were trained to perform movements such that the acting-hand would not pass through the observed part of the “tool-less” visual-field.

Experimental Paradigm and Stimuli

The experiments were carried out using a block design (12 s block, 9 s rest, last rest 15 s). The 4 conditions were interleaved, and each was repeated 6 times with different tools in a counterbalanced manner. Thus, subjects did not know which block and which specific tool were to be seen next. A dummy block, lasting 12 s, was inserted at the beginning of the experiment and removed during the analysis. Each block consisted of 4 trials, with a different tool in each trial (∼3 s per tool). The tools’ orientation and location were maintained within a block; thus, the same visual-field was stimulated and the same hand was used within a block. Twenty-four different mock tools were used as stimuli (length 8–13 cm, width 3–6 cm, thickness 0.2–2 cm), presented such that their center was 6–7° in the visual periphery. Tools were selected such that the handle and the functional part of the tool could be easily distinguished (e.g., toothbrush, comb, plastic spoon, or plastic wrench). Each tool was presented 4 times, once in each of the 4 conditions. During the rest periods, subjects placed both arms on the table and maintained central fixation. Auditory instructions were given via earphones to 1 experimenter, indicating when to roll the roller, whereas another experimenter placed the tools for the next block, according to the order of blocks.
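For illustration, the sketch below shows one way such a counterbalanced block order could be generated; the condition labels and the permuted-cycle scheme are our own assumptions, not necessarily the exact randomization used in the experiment.

```python
import random

# Hypothetical condition labels: hand (right/left) x visual-field (right/left)
CONDITIONS = ["RH_RVF", "RH_LVF", "LH_RVF", "LH_LVF"]

def make_block_order(n_reps=6, seed=0):
    """Return 4 x n_reps block labels, with each condition appearing exactly
    once in every consecutive set of 4 blocks (one simple counterbalancing scheme)."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_reps):
        cycle = CONDITIONS[:]
        rng.shuffle(cycle)
        order.extend(cycle)
    return order

# 24 blocks of 12 s, each followed by 9 s of rest; one of the 24 tools per trial
print(make_block_order())
```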

Magnetic Resonance Imaging Acquisition

The blood oxygenation level–dependent (BOLD) functional magnetic resonance imaging (fMRI) measurements were performed in a whole-body 1.5-T, Signa Horizon, LX8.25 General Electric scanner. The fMRI protocols were based on multislice gradient echo-planar imaging and a standard head coil. The functional data were obtained under optimal timing parameters: time repetition (TR) = 3 s, time echo (TE) = 55 ms, flip angle = 90°, imaging matrix = 80 × 80, field of view (FOV) = 24 cm. The 27 slices, with a slice thickness of 4 mm (1 mm gap), were oriented in an oblique axis in order to cover the whole brain and cerebellum (excluding the anterior pole of the temporal lobe in some subjects). The functional voxels were thus 3 × 3 × 5 mm3. Immediately after each functional run, 27 T1 images were acquired in the same orientation and location as the functional images. A set of 128 high-resolution T1-weighted spoiled gradient-recalled images (voxel size = 1 × 1 × 1 mm3) was recorded after the functional runs.

Data Analysis

Preprocessing

Data analysis was performed using the BrainVoyager 4.96 and BrainVoyager QX software packages (Brain Innovation, Maastricht, The Netherlands, 2000). The functional images were superimposed on 2-dimensional (2D) anatomical images and incorporated into the 3D data sets through trilinear interpolation. Prior to statistical analysis, raw data were examined for motion and signal artifacts. Head motion correction and high-pass temporal filtering in the frequency domain (3 cycles/total scan time) were applied in order to remove drifts and to improve the signal-to-noise ratio. The complete data set was transformed into Talairach space (Talairach and Tournoux 1988), Z-normalized and spatially smoothed by a Gaussian kernel of 3 mm. The cortical surface of each subject was reconstructed from the 3D Talairach normalized brain. The procedure included segmentation of the white matter using a grow-region function, the smooth covering of a sphere around the segmented region, and the expansion of the reconstructed white matter into the gray matter.
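To make the temporal filtering step concrete, the sketch below removes drift components at or below a few cycles per scan in the frequency domain; it is an illustration only, and the exact cutoff handling in BrainVoyager may differ.

```python
import numpy as np

def highpass_voxel_timecourse(ts, n_cycles=3):
    """Frequency-domain high-pass filter for one voxel time course:
    removes components at or below `n_cycles` cycles per total scan
    duration (here 3, as in the preprocessing described above), while
    preserving the mean signal level."""
    mean = ts.mean()
    spec = np.fft.rfft(ts - mean)
    spec[1:n_cycles + 1] = 0.0   # bin k corresponds to k cycles per scan
    return np.fft.irfft(spec, n=len(ts)) + mean
```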

Regions of Interest

Regions of interest (ROIs) were defined using anatomical markers of sulci and gyri on the 3D inflated brain of each subject. Five ROIs were defined for each subject in each hemisphere along the occipito-parieto-frontal cortex, corresponding to areas expected to be involved in the visuomotor pathway (see Table 1). Although the hand region within the primary motor cortex is well defined in the human cortex, regions along the IPS are not clearly delineated in the human brain. To avoid confusion, we use throughout the manuscript the nomenclature of Culham et al. (2006), adding an “h” prefix to denote the putative human functional equivalents of the macaque regions in the parietal lobe. The ROIs, from anterior to posterior, are (see Fig. 2):

  • 1. M1: The hand area of the primary motor cortex (M1) is located along the central sulcus, from the middle of the central sulcus to the posterior part of the precentral gyrus (Weinrich and Wise 1982), including the “omega-shaped hand-area” in humans (Moore et al. 2000). The center of mass of M1, as anatomically defined here, corresponds well with that reported in many previous imaging studies (e.g., Medendorp et al. 2005; for review see Picard and Strick 2001; and for a meta-analysis of the human motor cortex see Mayka et al. 2006).

  • 2. hAIP: The human homolog of AIP was defined as the region extending ∼1 cm posterior from the junction between the postcentral sulcus and the anterior edge of the intraparietal sulcus (or the projection of the IPS) including both banks of the IPS. This ROI locus is based on accumulating imaging studies in humans, suggesting that this area is involved in visually guided grasping (for review, see Culham et al. 2006). The Talairach coordinates of hAIP in this study are similar to those described by others (e.g., Culham et al. 2003; Frey et al. 2005; Shmuelof and Zohary 2005, 2006; Cavina-Pratesi et al. 2007; Filimon et al. 2007).

  • 3. hMIP: Due to technical challenges (such as head movements) accompanying reaching movements in human neuroimaging studies, reaching movements have been much less studied than saccades or grasps, and have sometimes not been differentiated from pointing movements (Grefkes and Fink 2005; Culham and Valyear 2006; Culham et al. 2006). We defined hMIP from the fundus of the IPS, expanding 1–2 cm medially into the superior parietal lobule (SPL), and extending ∼2 cm posteriorly from the posterior edge of the hAIP ROI. Using this definition, the center of mass of the current hMIP is in good concordance with the one described by others (Grefkes et al. 2004; Prado et al. 2005; Culham et al. 2006).

  • 4. h cIPS: As with hMIP, relatively few experiments have studied the putative human caudal-IPS (cIPS), and the precise definition of its anatomical position varies slightly between studies (Shikata et al. 2001; James et al. 2002; Shikata et al. 2003; Valyear et al. 2006). In the present study, h cIPS was defined as the posterior third of the IPS, including its banks, small divisions, and the neighboring gyri. This locus of the h cIPS extends 2–3 cm posterior from the projection of the PO sulcus on the IPS. The Talairach coordinates of this locus are similar to those previously described by others (Culham et al. 2003; Valyear et al. 2006).

  • 5. SOG: A visual–dorsal ROI in the superior-occipital gyrus (SOG) within the occipital lobe was defined, extending from ∼1 cm posterior to the PO sulcus and laterally from the medial wall, roughly corresponding to the location of V3A/V7 (Tootell et al. 1997, 1998).

Table 1

Talairach coordinates and cluster size of ROIs

ROI      Hemisphere    Talairach (average)    Talairach (SD)    Cluster size (average)    Cluster size (SD)
SOG      R             13, −86, 17            5, 5, 5           700                       347
SOG      L             −4, −82, 18            3, 4, 8           1077                      520
h cIPS   R             26, −75, 24            4, 5, 4           1465                      1000
h cIPS   L             −25, −75, 23           2, 1, 2           1368                      280
hMIP     R             31, −55, 47            5, 7, 4           1480                      685
hMIP     L             −22, −57, 54           2, 1, 1           1550                      182
hAIP     R             36, −41, 46            3, 4, 7           945                       338
hAIP     L             −29, −43, 50           2, 2, 2           600                       94
M1       R             34, −27, 54            3, 4, 3           1149                      473
M1       L             −31, −26, 55           3, 3, 2           1671                      337

Note: The center of mass in Talairach coordinates (mm) and the cluster size (mm3) of the ROIs are listed for each area and hemisphere (positive x values denote the right hemisphere). The clusters within each anatomical ROI were based on contiguous voxels that showed significantly greater activation in all conditions relative to rest.

Figure 2.

Definition of ROIs and group time courses. Center: ROIs were defined in each hemisphere of each subject, according to his anatomical brain structure. The ROIs of one exemplary subject are marked by white lines on his inflated brain, shown from a dorsal view (sulci are marked in dark gray, main sulci dotted in white and named). Within these regions, we selected voxels that were significantly more active during all conditions than during the rest period (depicted in orange; P < 0.005; cluster size corrected). Periphery: Group time courses. For each ROI and each condition, the average (and SEM) percent signal change (relative to baseline activation) over all subjects is shown as a function of time (0 corresponds to block starting time; same time scale in all insets; N = 9 in R-SOG, N = 8 in L-SOG, and N = 10 in each of the other ROIs). Note that color coding is according to the hemisphere tested, such that green colors represent the contralateral hand; blue the ipsilateral hand; dark colors the contralateral visual-field; and light colors the ipsilateral one (see cartoons above each hemisphere). Notice the similar activation patterns between the 2 hemispheres. In the anterior ROIs (M1, hAIP, and hMIP), movements of the contralateral hand (dark and light green lines) elicited the strongest responses. In comparison, tools located in the contralateral visual-field (dark blue and dark green lines) elicited the strongest responses in the posterior ROIs (h cIPS and SOG). Additionally, in all ROIs, movements of the contralateral hand to the contralateral visual-field (dark green lines) elicited the strongest response. L, left hemisphere; R, right hemisphere; CS, central sulcus; postCS, postcentral sulcus; TOc, transverse occipital sulcus.

Activation and Time Courses

A general linear model was used to generate statistical parametric maps. The hemodynamic response function was modeled using standard parameters (Boynton et al. 1996). Significance levels were calculated for each subject and contrast, taking into account the probability of a false detection for any given cluster (cluster size correction) (Forman et al. 1995), based on a Monte Carlo simulation incorporated in BrainVoyager QX.
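As an illustration of how such a model predictor can be built, the sketch below convolves a block boxcar with a gamma-variate hemodynamic response function of the form used by Boynton et al. (1996); the parameter values and function names are our own illustrative choices, not the exact BrainVoyager implementation.

```python
import numpy as np
from math import factorial

def gamma_hrf(t, n=3, tau=1.2, delay=2.5):
    """Gamma-variate HRF (Boynton-style); n, tau, and delay are illustrative defaults."""
    t = np.asarray(t, dtype=float) - delay
    return np.where(t > 0,
                    (t / tau) ** (n - 1) * np.exp(-t / tau) / (tau * factorial(n - 1)),
                    0.0)

def block_predictor(n_vols, onsets, block_len_s=12.0, tr=3.0):
    """Boxcar for 12-s blocks (onsets given in volumes) convolved with the HRF,
    sampled every TR = 3 s, to serve as one regressor in the general linear model."""
    boxcar = np.zeros(n_vols)
    for onset in onsets:
        boxcar[onset:onset + int(block_len_s / tr)] = 1.0
    hrf = gamma_hrf(np.arange(0.0, 30.0, tr))
    pred = np.convolve(boxcar, hrf)[:n_vols]
    return pred / pred.max()
```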

For each subject, we chose clusters within the anatomically defined ROIs that were significantly more active during all conditions when compared with rest (P < 0.005, Z > 2.85, cluster size corrected; see Table 1). By comparing all 4 conditions to rest, no a priori preference was given to one condition over the others. The group average center of mass of these individually based functional-and-anatomical ROIs differed from the group mean anatomical one by no more than 2 mm in each direction.

The time course of activation (i.e., the change from baseline activation) was calculated for each condition. Baseline activation was defined as the average signal at time points just before (TR −1) and at block onset (TR 0). Data from individual subjects for whom the tested contrast yielded clusters that were smaller than 100 active (1 × 1 × 1 mm3) voxels within the anatomically defined ROIs were excluded from the group average. Thus, N = 10 in each ROI, except for R-SOG (N = 9) and L-SOG (N = 8).
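The following sketch illustrates this computation for a single ROI-averaged time course; the array layout and variable names are assumptions made for the example.

```python
import numpy as np

def percent_signal_change(ts, onset, n_timepoints=8):
    """Percent signal change for one block, relative to a baseline defined as
    the mean of the volumes at TR -1 and TR 0 (block onset), as described above."""
    baseline = ts[onset - 1:onset + 1].mean()
    segment = ts[onset:onset + n_timepoints]
    return 100.0 * (segment - baseline) / baseline

def condition_timecourse(ts, onsets, n_timepoints=8):
    """Average the block-wise percent signal change over all blocks of a condition."""
    return np.mean([percent_signal_change(ts, o, n_timepoints) for o in onsets], axis=0)
```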

Average Percent Signal Change and Multiple Comparisons

For each ROI and subject, the percent signal change was averaged over 6–15 s after condition onset, separately for each condition, while taking into account the hemodynamic delay. All statistical analyses (1- and 2-way repeated measures ANOVAs and post hoc multiple comparisons) and index calculations were based on these average percent signal change measures. When a 1-way ANOVA showed a significant difference between the 4 conditions, a multiple comparisons post hoc test was carried out (Tukey's least significant difference procedure), to determine whether a given condition significantly differed from each of the 3 other conditions.
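A sketch of how the 2-way repeated measures ANOVA on these averaged values could be run in Python is given below; the column names are our own assumptions, and the original analysis was not necessarily performed with this package.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# `df` is assumed to contain one row per subject x condition, with the percent
# signal change averaged over 6-15 s after block onset:
#   columns: subject, hand ("contra"/"ipsi"), field ("contra"/"ipsi"), psc
def run_anova(df: pd.DataFrame):
    """Two-way repeated measures ANOVA: main effects of acting-hand and
    visual-field, plus their interaction (cf. Table 2)."""
    return AnovaRM(df, depvar="psc", subject="subject",
                   within=["hand", "field"]).fit()
```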

Individual Activation Maps

For each of the 10 subjects, a set of four “2-set relative contribution” statistical parameter maps (BV QX) was calculated, separately for each hemisphere and set of 2 conditions, illustrating the change in 1 of the experimental elements (e.g., hand identity) while holding the other element (e.g., visual-field) constant (e.g., the 2-set relative contribution map of the CHCVF and IHCVF conditions; Fig. 4). Highlighted voxels are those for which the variance explained by the 2 predictors was significant, whereas the color denotes the relative weight between the 2 conditions. Although direct contrasts show only voxels in which one condition generates significantly greater activation than the other, the 2-set relative contribution maps show all voxels in which the 2 conditions explain a significant portion of the total variance, even when there is no difference between their relative contribution. Thus, this method is suitable for a quantitative visualization of gradients.
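The exact BrainVoyager computation of these maps is not reproduced here, but the underlying idea can be sketched as follows: fit both condition predictors to a voxel's time course, keep the voxel only if the joint fit is significant (e.g., R > 0.3 as in Fig. 4), and color it by the relative weight of the 2 beta estimates. All names and the thresholding details below are our own assumptions.

```python
import numpy as np

def relative_contribution(y, pred_a, pred_b, r_thresh=0.3):
    """Sketch of a '2-set relative contribution' value for one voxel time course y:
    returns the relative weight of predictor A when the 2 predictors jointly
    explain a significant portion of the variance, and NaN otherwise."""
    X = np.column_stack([pred_a, pred_b, np.ones_like(pred_a)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = np.corrcoef(X @ beta, y)[0, 1]        # multiple correlation of the joint model
    if r < r_thresh:                          # voxel not significant -> not displayed
        return np.nan
    b_a, b_b = beta[0], beta[1]
    return b_a / (abs(b_a) + abs(b_b))        # 1 = only A contributes, 0 = only B
```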

Visual-Field and Motor Laterality Indices

Laterality indices were calculated by subtracting the responses to ipsilateral conditions from the responses to contralateral ones, and dividing this difference by the sum of all conditions. Hence, the indices range from −1 (ipsilateral representation) to 1 (contralateral representation). Two indices were calculated for each bilateral ROI: a Motor Index [MI = (CH − IH)/(CH + IH)] and a Visual Index [VI = (CVF − IVF)/(CVF + IVF)], where C = contralateral, I = ipsilateral, VF = visual-field, and H = hand. Notice that each component in the formulas is shorthand for the summation of 2 different conditions. For example, CH includes the 2 conditions in which the acting-hand was contralateral to the ROI, that is, CHCVF and CHIVF. Therefore, the denominator is the same in both indices and includes all 4 conditions, though named differently according to the numerator. Indices were calculated for each subject and hemisphere separately, and averaged over corresponding ROIs in the 2 hemispheres (N = 17 values for SOG, N = 20 values for each of the other ROIs). Note that each index value, MI (or VI), is monotonically related to the laterality ratio: C/I = (1 + MI)/(1 − MI). For example, when the contralateral and ipsilateral representations are identical, MI equals 0, and the laterality ratio C/I is 1; when the magnitude of contralateral activation ≫ ipsilateral activation, MI ∼ 1, and C/I tends to infinity (Fig. 5, right side ordinate).
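A minimal sketch of this index computation, using the average percent signal change of the 4 conditions for one bilaterally pooled ROI (variable names are ours):

```python
def laterality_indices(chcvf, chivf, ihivf, ihcvf):
    """Motor index MI = (CH - IH)/(CH + IH) and visual index VI = (CVF - IVF)/(CVF + IVF),
    where CH = CHCVF + CHIVF, IH = IHIVF + IHCVF, CVF = CHCVF + IHCVF, IVF = CHIVF + IHIVF."""
    total = chcvf + chivf + ihivf + ihcvf          # same denominator for both indices
    mi = ((chcvf + chivf) - (ihivf + ihcvf)) / total
    vi = ((chcvf + ihcvf) - (chivf + ihivf)) / total
    return mi, vi

def laterality_ratio(index):
    """Contralateral/ipsilateral ratio implied by an index: C/I = (1 + index)/(1 - index)."""
    return (1 + index) / (1 - index)
```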

For illustration purposes, we calculated a threshold index value above which the (visual or motor) index significantly differs from 0 (according to the standard deviation of each index, its sample size, and the t-statistic corresponding to alpha = 0.05). Because thresholds were essentially the same for all indices (mean, 0.1; SD, 0.03), data across regions was pooled together to obtain an average threshold (dotted line in Fig. 5).
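The threshold described here amounts to the smallest mean index that a one-sample t-test would call different from 0 given the observed spread; a sketch follows (assuming a two-sided test at alpha = 0.05).

```python
import numpy as np
from scipy import stats

def index_threshold(index_values, alpha=0.05):
    """Index value above which the sample mean would differ significantly from 0,
    given the sample SD and size (one-sample t-test, two-sided alpha assumed)."""
    sd, n = np.std(index_values, ddof=1), len(index_values)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return t_crit * sd / np.sqrt(n)
```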

Results

Motor and Visual Aspects of Representation along the Visuomotor Pathway

The experimental task included 4 conditions in which reaching and grasping with either the right or left hand was performed toward mock 3D tools located either to the right or to the left of a central fixation point (Fig. 1A). To study the relative contribution of the visual and motor aspects in various regions along the visuomotor pathways, we applied an ROI analysis method, selecting voxels in anatomically restricted regions along the visuomotor pathway that showed significant fMRI activation in all conditions relative to the rest period (i.e., fixation period). This approach ensured that no condition was favored over the others. Anatomical localization of the various ROIs was according to the individual structure of sulci and gyri of each subject (see Methods and Table 1 for further details).

Comparing all conditions to rest led to widespread activation over the cortex, as illustrated in an example from a typical subject shown in Figure 2 (inflated brain, P < 0.005; cluster size corrected). The accompanying activation time courses (Fig. 2) represent the group average hemodynamic response in each of the ROIs (see methods). We will first describe the results qualitatively.

As expected, in both hemispheres, the strongest activation in the most anterior ROIs (i.e., M1) was for the 2 conditions in which the contralateral hand acted, color coded in green (dark and light green lines in the hemodynamic responses in Fig. 2). For instance, the highest activation in L-M1 was for the 2 conditions in which subjects used their contralateral (right) hand to reach and grasp tools, regardless of the tools’ location (in the right or left visual-field). The equivalent pattern was seen in R-M1.

The complementary phenomenon was found in the posterior ROIs located in the dorsal visual pathway in the occipital cortex (i.e., SOG). These areas showed a stronger BOLD response to the 2 conditions in which tools were located in the contralateral visual-field, regardless of the hand used (see hemodynamic responses; dark blue and dark green in Fig. 2).

The picture was more complicated in the IPS, where in some of the regions it was not easy to identify contralateral visual or motor preferences, presumably due to the additional visuomotor interactions, as shown below. Moreover, close inspection of Figure 2 shows that the one condition that was contralateral in both the motor and visual components of the task, namely the Contra-Hand Contra-Visual-Field condition (CHCVF; dark green lines), elicited the strongest response in all ROIs.

To better visualize the effects and assess their statistical significance, we averaged in each subject the percent signal change over 6–15 s after epoch onset to obtain the average response to each condition in each ROI (considering the hemodynamic delay). No significant difference was found between the BOLD responses for any pair of analogous conditions in corresponding ROIs of the 2 hemispheres (paired t-tests, P > 0.1 for all comparisons). Thus, all subsequent analyses were performed on bilaterally pooled data.

Figure 3 depicts bilateral group averages, averaged across all subjects, and their 95% confidence limits. In all ROIs, the strength of the responses differed significantly between conditions (1-way ANOVA, P < 0.01), but the factors contributing to the variance in the response changed from region to region. As expected, M1 displayed a strict “motor-effect,” representing movements of the contralateral hand, irrespective of the visual-field location of the target (2-way ANOVA, motor effect, P ≪ 0.001, Table 2). Note that the 2 movements performed by the contralateral hand (i.e., CHCVF vs. CHIVF) did not significantly differ in M1 (t-test, P > 0.1). In contrast, SOG strictly showed the complementary “visual-field effect” (2-way ANOVA, visual effect, P ≪ 0.001, Table 2). Similarly, the fMRI signal in SOG was not affected by the identity of the grasping-hand when the tools were placed in the contralateral visual-field (t-test between CHCVF and IHCVF; P > 0.05).

Figure 3.

Group average activation in the various ROIs. For each ROI, the average percent signal change of each condition pooled across both hemispheres in all subjects is shown (N = 17 in SOG and N = 20 in each of the other ROIs). This was calculated by computing the average response 6–15 s after block initiation. The 4 conditions are CHCVF = movements of the contralateral hand to the contralateral field; CHIVF = movements of the contralateral hand to the ipsilateral field; IHIVF = movements of the ipsilateral hand to the ipsilateral field; and IHCVF = movements of the ipsilateral hand to the contralateral field. Colors are the same as in Figure 2. Circles indicate the mean response to each condition, with the 95% confidence interval around the mean marked by the horizontal lines. The vertical dashed lines mark the confidence interval around the response to the CHCVF condition, marked in dark green. In all ROIs, the CHCVF condition elicited the strongest response. The confidence intervals of conditions that significantly differ from the CHCVF condition do not cross the confidence interval around it (corrected for multiple comparisons).

Table 2

P values of visual, motor and interaction effects

Effect         SOG        h cIPS     hMIP       hAIP       M1
Visual         ≪0.001     0.0027     0.3443     0.891      0.8094
Motor          0.2288     0.353      0.0093     0.0037     ≪0.001
Interaction    0.1225     0.0028     0.0323     0.0148     0.3293

Note: P values of 2-way ANOVA tests performed on bilaterally pooled data (N = 17 in SOG and N = 20 in each of the other ROIs). Significant values are marked in bold.

The contralateral preference seen in the most extreme ROIs tested along the visuomotor pathway was evident also in the regions within the IPS, although in a less obvious manner, most likely due to additional visuomotor interactions. The contralateral hand activation was significantly stronger than the ipsilateral one not only in M1, but also in the anterior parts of the IPS: hAIP and hMIP (2-way ANOVA, motor effect, P < 0.01, Table 2). This acting-hand dependence was not significant in the more posterior ROIs (i.e., h cIPS and SOG). The contralateral visual-field position elicited significantly stronger fMRI signals than the ipsilateral one in the h cIPS (2-way ANOVA, visual effect, P < 0.005, Table 2) but not in the more anterior ROIs. Therefore, there was a gradual change from a representation of the contralateral visual location of the tool to be grasped in the posterior ROIs (SOG and h cIPS), to a representation of the contralateral acting-hand in the anterior ROIs (hMIP, hAIP, and M1). In other words, along the IPS, mapping was based on contralateral preference (as in SOG and M1), with a varying contribution of the contralateral visual-field or the contralateral acting-hand among the regions.

Moreover, a significant visuomotor interaction was found in all 3 bilateral ROIs in the IPS: h cIPS, hMIP, and hAIP (2-way ANOVA, interaction effect, P < 0.05, Table 2), but not in the anatomically extreme ROIs, M1, and SOG (P > 0.1, Table 2). This suggests that each of the ROIs within the IPS does not only represent visual or motor information, but is rather involved in the integration of both types of information.

It is also clear from Figure 3 that the response to the CHCVF condition is larger than the response to each of the other conditions. This effect is statistically significant only in h cIPS and hMIP, as indicated by the 3 other conditions falling outside the 95% confidence limit of the response to CHCVF in these regions (marked by the vertical dotted lines; 1-way ANOVA, corrected for multiple comparisons, P < 0.01).

Two Opposite Visual and Motor Gradients

To visualize the anatomical distribution of motor and visual representations along the posterior–anterior axis on a subject-by-subject basis, we generated for each subject maps that measure the relative contralateral dominance of the acting-hand (Fig. 4A) or the visual-field (Fig. 4B), while holding the other element constant (see methods). Although there is considerable intersubject variability, a gradual shift from visual-to-motor representations is evident along the posterior–anterior axis. Moreover, both visual and motor elements are represented in the parietal regions of most subjects.

Figure 4.

“Two-set relative contribution” statistical parameter maps of individual subjects. These maps illustrate the change in acting-hand preference (A) or visual-field preference (B) across the cortical sheet (see icons on the right). Each column shows the 4 maps created for each of the 10 subjects. The central sulcus and IPS are marked with black dots. (A) Acting-hand preference. The change in the preference of the acting-hand within the contralateral visual-field is illustrated by comparing the CHCVF and IHCVF conditions, separately for each hemisphere (left hemisphere, LH, first row; right hemisphere, RH, second row; R > 0.3, corresponding to P < 0.005). A strong preference for the contralateral acting-hand is found in the motor cortex (CHCVF > IHCVF; blue voxels). Generally, this hand preference becomes weaker along the anterior to posterior axis within the parietal cortex, indicating that the identity of the acting-hand within the contralateral visual-field becomes less important (i.e., green to orange voxels). (B) Visual-field preference. The complementary effect can be assessed by plotting the decay of the visual-field dominance along the posterior to anterior axis of the parietal cortex during movements of the contralateral hand (comparing CHCVF and CHIVF; R > 0.3, corresponding to P < 0.005). Generally, activation in PPC was stronger for tools presented in the contralateral visual-field (blue-green voxels), whereas the visual position of the tools had no effect in the motor cortex (orange voxels).

To quantify the shift of visual and motor effect sizes along the visuomotor pathway in the face of the intersubject spatial variance in the location of sulci and the position of the activation foci relative to them, we computed a motor laterality index (MI) and a visual laterality index (VI) for the discrete bilateral ROIs (see Methods). Indices range from −1 (a purely ipsilateral representation) to 1 (a purely contralateral representation), with a 0 value indicating no contra- or ipsilateral preference (i.e., that action with either of the 2 hands generated an equal response, or that there was no preference for one visual hemifield over the other).

Figure 5 shows the motor and visual indices calculated for each of the ROIs. Notice that all indices were positive, indicating a general trend for contralateral representations. However, not in all cases were the indices significantly different from 0, as indicated by the dotted line (corresponding to a significance level of alpha = 0.05, see Methods). High MI values were found in the anterior ROIs: M1, hAIP, and hMIP, whereas values near 0 were found in the posterior ROIs. This is consistent with the 2-way ANOVA results, showing a significant motor effect only in these anterior ROIs. Moreover, the quantitative nature of the indices revealed a specific gradient of the motor effect within the anterior ROIs: the highest MI was obtained in M1 (0.72), and progressively smaller MI values were observed in hAIP and hMIP. The opposite pattern was found for the VI, with the highest values in SOG and a smaller VI in h cIPS. VI values in the more anterior areas (hMIP, hAIP, and M1) did not differ significantly from 0.
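To relate these index values to raw laterality ratios, note, for example, that the MI of 0.72 in M1 corresponds to a contralateral/ipsilateral response ratio of C/I = (1 + 0.72)/(1 − 0.72) ≈ 6, whereas an MI near 0 corresponds to a ratio near 1.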

Figure 5.

Opposite visual and motor gradients. Visual and motor laterality indices were calculated for each bilaterally pooled ROI (see Methods). (A) The motor index (MI, black bars) was calculated by subtracting the activation to ipsilateral hand movements from the activation to the contralateral hand movements (symbolically indicated by black rectangles in the experimental design cartoon) and dividing by the sum of all responses. (B) The visual index was similarly calculated, by taking the difference between the responses to the contralateral tools and the responses to ipsilateral tools (as symbolically marked by white) and dividing as in (A). (C) Index values show 2 opposite gradients along the occipito-parietal-frontal cortex: a visual gradient descending from the posterior to the anterior ROIs, and an opposite motor gradient descending in the other direction, from the anterior ROIs to the posterior ones. The y-axis represents visual and motor index values (VI, MI) and the additional y-axis on the right hand side measures the laterality ratio (C/I). Indices whose average value is above the dotted line are significantly different from 0 (P<0.05).

To summarize, 2 opposite gradients were found along the occipito-parieto-frontal cortex: a descending visual gradient, from the posterior to the anterior ROIs; and an opposite motor gradient, ascending from the posterior to the anterior ROIs (Fig. 5).

Discussion

The goal of this study was to assess the relative contribution of the visual and motor elements in visuomotor processing. To this end, we designed a visually guided reaching and grasping task and compared conditions in which either the contralateral or ipsilateral acting-hand was used to reach and grasp objects presented in either the contralateral or ipsilateral visual-field. As expected, occipital and frontal regions were involved in the exclusive representation of the visual or motor task elements, respectively. In contrast, both visual and motor elements affected the response in the parietal cortex, but the relative weight of the 2 components differed among the regions within the IPS. This change in relative weights was much more evident in the ROI analysis than in the individual subject maps, probably due to intersubject spatial variance in the location of the ROIs. Although previous studies have suggested the existence of a visuomotor gradient (for example, Ellermann et al. 1998; Battaglia-Mayer et al. 2006), this issue was not tested directly. We found evidence for 2 opposite gradients along the posterior to anterior axis in the IPS: the weight of the acting-hand gradually increased from the h cIPS via the hMIP to the hAIP. In contrast, the importance of the visual-field position of the target objects gradually decreased along the same axis.

Moreover, we found an acting-hand by visual-field interaction in all 3 IPS regions (h cIPS, hMIP, and hAIP), and only in them. This indicates that the parietal cortex integrates both visual and motor information, rather than representing either of the single sources, suggesting parietal involvement in visuomotor transformations. Generally, an interaction term is interpreted as a nonlinear combination of the individual factors. In this study, movements of the contralateral acting-hand to the contralateral visual-field (condition CHCVF) elicited a larger BOLD response than any of the other conditions in the IPS (in h cIPS and hMIP). This response could not be explained by the additive effect of the 2 factors. Thus, we found evidence that a specific combination of the object's position in space and the hand used to manipulate it affects parietal activity. This may be due to the overrepresentation of movements to the contralateral space by the contralateral limb, as found in the monkey PPC (Battaglia-Mayer et al. 2005). Such brain activation may also be related to the tendency to grasp objects located in peripersonal space with the hand that is closer to the object (e.g., using the right hand to grasp objects on the right side of the body, and vice versa; Scherberger et al. 2003). In humans, such movements are initiated more quickly, completed more rapidly, and performed more accurately (Fisk and Goodale 1985). Anecdotally, this behavior has its parallel in Hebrew: the phrase “leyad,” meaning “next to,” originated from the same phonological root as the word “yad,” which means “hand,” and literally means “to (the) hand.”

Taken together, our results show that unlike the occipital or frontal cortex, both visual and motor aspects of visually guided reaching and grasping are represented in the human IPS, supporting its role as a network engaged in visuomotor transformations. However, within the IPS, the representation changes from a visually dominant to a motor-driven one. The gradual change in the relative weights of the visual and motor attributes along the IPS suggests that areas within this network encode different aspects of visuomotor transformations, as described below.

Possible Confounding Factors

We first address the possibility that factors other than the visual-field and acting-hand may have influenced the results. For example, it may be argued that conditions in which one hand grasped tools in the right visual-field differed from conditions in which the same hand grasped tools placed in the left visual-field not only in the (highlighted) visual aspect but also in the (ignored) motor aspect of movement direction. It could therefore be argued that the ascribed visual effect actually results from motor aspects. However, we note that due to the known characteristics of the representation of limb movement in the cortex, this is highly unlikely. Neurons in the motor cortex of the monkey typically have a preferred direction of movement (Georgopoulos et al. 1988), with the representation of preferred directions being uniform over the population (Schwartz et al. 1988). A similar uniform distribution of representation of movements of the contralateral hand was found in areas within the parietal cortex (PRR: Quian Quiroga et al. 2006; SPL: Battaglia-Mayer et al. 2006; V6A: Fattori et al. 2005). Furthermore, although the preferred directions of neighboring neurons are somewhat more similar than expected by chance (Ben-Shaul et al. 2003), such nonrandom mapping cannot be detected at the resolution of standard fMRI (millimeters, at which a single voxel contains millions of neurons). Moreover, movements in which the acting-hand crossed the midline are of somewhat larger amplitude. It was previously shown that movements of larger amplitude elicit stronger fMRI responses (Waldvogel et al. 1999). However, we did not observe any differences in the BOLD response in the primary motor cortex during movements of the contralateral acting-hand to the right or left visual-fields. We therefore rule out the possibility that the visual-field effects are due to differences in movement direction or amplitude.

Another potentially troubling issue is that unwarranted eye-movements may have affected the results. After all, grasping a tool placed in a peripheral location may be accompanied by a saccadic eye-movement toward that location (Land et al. 1999; Prado et al. 2005). The visual-field effect may therefore be due to saccade specificity or gaze position specificity (as has been extensively reported by Andersen et al.; for example, see Batista et al. 1999; Cohen and Andersen 2002). We cannot entirely rule out these confounding factors because we did not record eye-movements in the scanner, as it was practically impossible to do so in the presence of the special head-mask and head-apparatus, combined with the unusual gaze angle. However, we note that unwarranted eye-movements probably did not play a major role in our experiment. First, prior to the scan, all our subjects were extensively trained in maintaining central fixation while reaching and grasping the tools, and thus saccades were expected to be rare events. Indeed, the subjects reported that they managed to maintain fixation throughout the experiment. But because it may still be possible that they were not aware of their saccadic behavior, we also designed a simple statistical model taking into account the measured fMRI noise levels. Using this model, we tested whether the visuomotor interaction effects found along the IPS would remain significant even if saccades occurred in a given proportion of the trials (see details in Supplementary Data). The results of this analysis indicate that saccades would have had to occur very frequently (in about 45%, 5%, and 20% of the trials) to explain the interaction effects found in cIPS, MIP, and AIP, respectively. We therefore suggest that the observed interaction effects are unlikely to result from saccadic eye-movements. Second, had subjects made saccades toward the peripheral tools, we would not have been able to see a clear preference in the BOLD response to tools presented in the contralateral side in retinotopic visual areas (see Supplementary Fig.).

On a related note, in conditions in which the acting-hand crossed the midline, the acting-hand could potentially have passed through the observed part of the “tool-less” visual-field along its path. To prevent this, subjects were trained to perform their tool-grasping movements without entering this visual-field. The clear preference for tools presented in the contralateral side in retinotopic visual areas (Supplementary Fig.) suggests that this was not a serious problem. Such movements, in which the hand crossed the midline, may require extra “cognitive supervision,” thereby generating greater fMRI activation. However, we found that the summed activation during the noncrossing conditions (i.e., CHCVF + IHIVF) is in fact greater than during midline crossing (i.e., CHIVF + IHCVF). This reverse effect, meaning a positive interaction term, is statistically significant in the ROIs along the IPS, indicating that the interaction does not emerge from differences in cognitive load.

Another potential concern is that, because the orientation of a tool's handle determined the acting-hand in the current experiment, the reported acting-hand effects could in principle result from visual differences in the tool's appearance. However, such orientation specificity is typically revealed only with nonstandard fMRI techniques such as fMRI adaptation (Tootell et al. 1998; James et al. 2002; Valyear et al. 2006), and is therefore unlikely to account for the hand effects reported here.

Finally, throughout the manuscript we referred to the visual-field in which the tools were positioned. Because the fixation point was centrally located, the specific coordinate system of the visual position (e.g., retinal-, body-, or head-based) cannot be determined. However, this does not affect the result showing a change in the relative weight of the visual versus motor aspects of visuomotor function along the parietal cortex.

To summarize, we believe that none of the factors discussed above materially confounded our results. We next discuss the observed pattern of activation in each of the regions within the IPS in light of previous knowledge of their function.

hcIPS Maps Visual Elements and Visuomotor Transformations

Neuronal activity in the monkey cIPS depends on the shape and orientation of the visual stimuli (Sakata et al. 1999; Tsutsui et al. 2003). In humans, the putative homolog of monkey cIPS is functionally defined as the area within the posterior part of the IPS that is selective for the visual orientation of textures (Shikata et al. 2001, 2003), irrespective of whether an action (mimicking the same orientation) is required. Voxels in cIPS show clear fMRI adaptation to repeated presentation of pictures of 3D tools, as long as the orientation of the tool's handle is maintained (regardless of whether the tool's identity is maintained; James et al. 2002; Valyear et al. 2006). The representation in regions corresponding anatomically to hcIPS is based on retinotopic coordinates, mapping the contralateral visual-field (Silver et al. 2005).

Using a visuomotor task, we found a predominant representation of the contralateral visual-field in hcIPS, corresponding to the previously reported retinotopic mapping. In addition, we found an interaction between the visual-field and the acting-hand (even though the acting-hand per se did not affect the fMRI signal), indicating that the nature of the motor action modulates the responses in hcIPS. These findings suggest that the representation in this area is still in retinotopic coordinates rather than motor-based.

hMIP Maps Motor Elements and Visuomotor Transformations

MIP neurons are typically direction selective, with firing rates usually modulated by the movement direction of the arm, although some MIP neurons also show visual selectivity for the same direction of motion (Colby and Goldberg 1999; Eskandar and Assad 2002). The PRR in the monkey, consisting of areas MIP and V6A, is involved in the planning and execution of reach movements with the contralateral limb, more than in saccadic eye-movements to the same target (Snyder et al. 1997). The putative human PRR-homolog (Connolly et al. 2003) has similar characteristics. Using optical left/right reversing prisms, Fernandez-Ruiz et al. (2007) found that activity in the human PRR was selective for the visual direction of movement rather than for the movement direction of the contralateral reaching hand. Furthermore, human MIP is active during movements of the contralateral hand, irrespective of the retinal position of the reach target (central or peripheral; Prado et al. 2005). Additionally, human MIP shows a greater BOLD response when subjects manipulate a joystick that moves a visual stimulus than when the stimulus moves independently (Grefkes et al. 2004). A mirror-neuron area in the human parietal cortex was also revealed by fMRI: it was active during the execution of reaching movements as well as during observed and imagined reaching (though with weaker activation; Filimon et al. 2007).

The functionally and anatomically defined hMIP in the current study exhibited a preference for the contralateral hand, as may be expected from a human PRR-homolog (although, to our knowledge, hand-specificity in MIP has not been directly tested in monkeys). We found no preference for a specific visual-field, suggesting that both fields are similarly represented in hMIP. Nonetheless, the visual position of the object to be reached affected the fMRI signal through its interaction with the acting-hand. Thus, we propose that whereas hcIPS can be regarded as a visual area whose activity is modulated by the nature of the motor action, hMIP is a motor area, coding the identity of the reaching hand, whose activity is affected by the visual position of the target.

Visuomotor Transformations and Stronger Representation of Motor Elements in hAIP

Activity in the monkey AIP has been related to grasping movements (Sakata et al. 1995). Similarly, neuroimaging studies have shown activation in the putative human homolog of AIP during grasping with (Culham et al. 2003; Frey et al. 2005) or without visual guidance, with a preference for the contralateral hand (Culham et al. 2006). Moreover, when no movement was required, AIP was active in response to visual stimuli only when these had high affordance for grasping (Chao and Martin 2000; Shikata et al. 2001, 2003; Shmuelof and Zohary 2005; Culham et al. 2006). However, human AIP activation was stronger during movements than during movement imagery, which in turn elicited a stronger response than visual discrimination per se (Shikata et al. 2003). Although AIP shows stronger activation during grasping than during reaching, it does not show differential activation when the size of 3D objects is computed for purely perceptual purposes, during pattern discrimination, or during passive viewing of the objects (Cavina-Pratesi et al. 2007).

Consistent with this, we found a strong hand-effect in hAIP. As in hMIP, the visual-field contributed to hAIP activation via the interaction effect, despite the lack of a main visual-field effect. However, the contralateral-hand preference was somewhat more pronounced in hAIP than in hMIP. Because grasping requires delicate planning of the hand configuration, involving more joints than reaching does, it seems reasonable that motor aspects would be relatively more dominant in hAIP than in hMIP.

Related Previous Studies

This work extends previous studies that used a similar 2 × 2 paradigm manipulating the target visual-field and the acting-hand (Kertzman et al. 1997; Medendorp et al. 2005; Beurze et al. 2007). However, there are several methodological differences between those studies and ours. Previously, 2D dot targets were presented on a screen, whereas we presented subjects with familiar mock 3D tools. Moreover, in previous studies subjects performed pointing (Medendorp et al. 2005) or reaching and touching movements toward the targets (Kertzman et al. 1997), whereas our subjects performed prehension movements, which involve both reaching toward the tools and grasping them.

We found that movements of the contralateral hand to the contralateral visual-field elicit a stronger response than movements of the ipsilateral hand to the same contralateral visual-field. This was also seen by Medendorp et al. (2005) in the anterior occipital cortex (aOC) and retinotopic IPS (retIPS) regions, which anatomically correspond to our hcIPS and hMIP, respectively. Both aOC and retIPS additionally showed a preference for the contralateral visual-field and significant modulations related to both the acting-hand and the target visual location. Our results differentiated between the 2 corresponding regions, demonstrating a contralateral hand-effect in the more anterior region of hMIP (their retIPS), as opposed to the visual-field effect found in the more posterior region of hcIPS (their aOC). One way of interpreting this difference is that our reaching and grasping task required a much more extensive motor plan, and therefore produced a greater differential signal between the hands, than the mere finger pointing used by Medendorp et al. The additional evidence for visuomotor interactions found in these 2 regions suggests that, despite their differential preferences, both regions are involved in integrating visual and motor information.

Visuomotor interaction effects in the IPS were also found by Beurze et al. (2007), with a stronger preference for the contralateral hand (over the ipsilateral hand) than for the contralateral visual-field in which the target was positioned (over the ipsilateral visual-field). The coordinates of the peak activation in their IPS are similar to those of our hMIP.

Stronger involvement of the left hemisphere was previously found in the majority of apraxia/optic ataxia patient studies (Perenin and Vighetto 1988; Koski et al. 2002; Wheaton and Hallett 2007), in which objects were held or the function of the tool had to be appreciated. In our study, although subjects were presented with tools, they were instructed to reach and grasp the tools' handles with a precision grip, without actually picking up or using the tools. As all mock tools were of similar size and weight, stereotypical grasping was suitable. This may explain the lack of activation asymmetry between hemispheres in our study, in contrast with the abovementioned patient studies. Moreover, as the main focus of our study was on activity within anatomically defined ROIs, differences between the 2 hemispheres in the size of purely functionally defined areas could not be detected. Therefore, although our study supports the hypothesis that both parietal cortices are engaged in visuomotor transformations necessary for visually guided prehension, one cannot conclude that the 2 hemispheres play similar roles in these transformations.

Future Directions

The visuomotor interactions found in the IPS between the visual-field position and the acting-hand reflect, in our opinion, the process of visuomotor transformation. An open question is which coordinate frame(s) are used to code the information along the IPS. Manipulating the starting position of the acting-hand and/or the location of the fixation point, similar to the experimental paradigms used by others (monkeys: Batista et al. 1999; Buneo et al. 2002; Scherberger et al. 2003; humans: DeSouza et al. 2000), may offer a direct way to examine the coordinate frames used and perhaps reveal additional differences between brain areas involved in natural prehension of 3D tools.

Supplementary Material

Supplementary material can be found at: http://www.cercor.oxfordjournals.org/.

Funding

Israel Science Foundation of the Israel Academy of Sciences (grant #8009).

We thank Eran Stark for insightful comments and for help in developing the saccade-induced interaction model, Ya'acov Ritov for help with the statistical modeling, and Lior Shmuelof for insightful comments. We are grateful to Yitshak Simhayoff for constructing the fMRI roller apparatus.

Conflict of Interest: None declared.

References

Andersen RA, Buneo CA. 2002. Intentional maps in posterior parietal cortex. Annu Rev Neurosci. 25:189-220.
Barash S, Bracewell RM, Fogassi L, Gnadt JW, Andersen RA. 1991. Saccade-related activity in the lateral intraparietal area. I. Temporal properties; comparison with area 7a. J Neurophysiol. 66:1095-1108.
Batista AP, Buneo CA, Snyder LH, Andersen RA. 1999. Reach plans in eye-centered coordinates. Science. 285:257-260.
Battaglia-Mayer A, Archambault PS, Caminiti R. 2006. The cortical network for eye-hand coordination and its relevance to understanding motor disorders of parietal patients. Neuropsychologia. 44:2607-2620.
Battaglia-Mayer A, Mascaro M, Brunamonti E, Caminiti R. 2005. The over-representation of contralateral space in parietal cortex: a positive image of directional motor components of neglect? Cereb Cortex. 15:514-525.
Ben-Shaul Y, Stark E, Asher I, Drori R, Nadasdy Z, Abeles M. 2003. Dynamical organization of directional tuning in the primate premotor and primary motor cortex. J Neurophysiol. 89:1136-1142.
Beurze SM, de Lange FP, Toni I, Medendorp WP. 2007. Integration of target and effector information in the human brain during reach planning. J Neurophysiol. 97:188-199.
Binkofski F, Butler A, Buccino G, Heide W, Fink G, Freund HJ, Seitz RJ. 2003. Mirror apraxia affects the peripersonal mirror space. A combined lesion and cerebral activation study. Exp Brain Res. 153:210-219.
Binkofski F, Dohle C, Posse S, Stephan KM, Hefter H, Seitz RJ, Freund HJ. 1998. Human anterior intraparietal area subserves prehension: a combined lesion and functional MRI activation study. Neurology. 50:1253-1259.
Boynton GM, Engel SA, Glover GH, Heeger DJ. 1996. Linear systems analysis of functional magnetic resonance imaging in human V1. J Neurosci. 16:4207-4221.
Buneo CA, Jarvis MR, Batista AP, Andersen RA. 2002. Direct visuomotor transformations for reaching. Nature. 416:632-636.
Cavina-Pratesi C, Goodale MA, Culham JC. 2007. FMRI reveals a dissociation between grasping and perceiving the size of real 3D objects. PLoS ONE. 2:e424.
Chao LL, Martin A. 2000. Representation of manipulable man-made objects in the dorsal stream. Neuroimage. 12:478-484.
Cohen YE, Andersen RA. 2002. A common reference frame for movement plans in the posterior parietal cortex. Nat Rev Neurosci. 3:553-562.
Colby CL, Goldberg ME. 1999. Space and attention in parietal cortex. Annu Rev Neurosci. 22:319-349.
Connolly JD, Andersen RA, Goodale MA. 2003. FMRI evidence for a 'parietal reach region' in the human brain. Exp Brain Res. 153:140-145.
Culham JC, Cavina-Pratesi C, Singhal A. 2006. The role of parietal cortex in visuomotor control: what have we learned from neuroimaging? Neuropsychologia. 44:2668-2684.
Culham JC, Danckert SL, DeSouza JF, Gati JS, Menon RS, Goodale MA. 2003. Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Exp Brain Res. 153:180-189.
Culham JC, Valyear KF. 2006. Human parietal cortex in action. Curr Opin Neurobiol. 16:205-212.
DeSouza JF, Dukelow SP, Gati JS, Menon RS, Andersen RA, Vilis T. 2000. Eye position signal modulates a human parietal pointing region during memory-guided movements. J Neurosci. 20:5835-5840.
Driver J, Vuilleumier P. 2001. Perceptual awareness and its loss in unilateral neglect and extinction. Cognition. 79:39-88.
Duhamel JR, Colby CL, Goldberg ME. 1992. The updating of the representation of visual space in parietal cortex by intended eye movements. Science. 255:90-92.
Ellermann JM, Siegal JD, Strupp JP, Ebner TJ, Ugurbil K. 1998. Activation of visuomotor systems during visually guided movements: a functional MRI study. J Magn Reson. 131:272-285.
Eskandar EN, Assad JA. 2002. Distinct nature of directional signals among parietal cortical areas during visual guidance. J Neurophysiol. 88:1777-1790.
Fattori P, Kutz DF, Breveglieri R, Marzocchi N, Galletti C. 2005. Spatial tuning of reaching activity in the medial parieto-occipital cortex (area V6A) of macaque monkey. Eur J Neurosci. 22:956-972.
Fernandez-Ruiz J, Goltz HC, Desouza JF, Vilis T, Crawford JD. 2007. Human parietal "reach region" primarily encodes intrinsic visual direction, not extrinsic movement direction, in a visual-motor dissociation task. Cereb Cortex. 17:2283-2292.
Filimon F, Nelson JD, Hagler DJ, Sereno MI. 2007. Human cortical representations for reaching: mirror neurons for execution, observation, and imagery. Neuroimage. 37:1315-1328.
Fisk JD, Goodale MA. 1985. The organization of eye and limb movements during unrestricted reaching to targets in contralateral and ipsilateral visual space. Exp Brain Res. 60:159-178.
Forman SD, Cohen JD, Fitzgerald M, Eddy WF, Mintun MA, Noll DC. 1995. Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): use of a cluster-size threshold. Magn Reson Med. 33:636-647.
Frey SH, Vinton D, Norlund R, Grafton ST. 2005. Cortical topography of human anterior intraparietal cortex active during visually guided grasping. Brain Res Cogn Brain Res. 23:397-405.
Galletti C, Fattori P, Kutz DF, Gamberini M. 1999. Brain location and visual topography of cortical area V6A in the macaque monkey. Eur J Neurosci. 11:575-582.
Georgopoulos AP, Kettner RE, Schwartz AB. 1988. Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. J Neurosci. 8:2928-2937.
Goodale MA, Milner AD. 1992. Separate visual pathways for perception and action. Trends Neurosci. 15:20-25.
Grefkes C, Fink GR. 2005. The functional organization of the intraparietal sulcus in humans and monkeys. J Anat. 207:3-17.
Grefkes C, Ritzl A, Zilles K, Fink GR. 2004. Human medial intraparietal cortex subserves visuomotor coordinate transformation. Neuroimage. 23:1494-1506.
James TW, Humphrey GK, Gati JS, Menon RS, Goodale MA. 2002. Differential effects of viewpoint on object-driven activation in dorsal and ventral streams. Neuron. 35:793-801.
Kertzman C, Schwarz U, Zeffiro TA, Hallett M. 1997. The role of posterior parietal cortex in visually guided reaching movements in humans. Exp Brain Res. 114:170-183.
Koski L, Iacoboni M, Mazziotta JC. 2002. Deconstructing apraxia: understanding disorders of intentional movement after stroke. Curr Opin Neurol. 15:71-77.
Land M, Mennie N, Rusted J. 1999. The roles of vision and eye movements in the control of activities of daily living. Perception. 28:1311-1328.
Levy I, Schluppeck D, Heeger DJ, Glimcher PW. 2007. Specificity of human cortical areas for reaches and saccades. J Neurosci. 27:4687-4696.
Mayka MA, Corcos DM, Leurgans SE, Vaillancourt DE. 2006. Three-dimensional locations and boundaries of motor and premotor cortices as defined by functional brain imaging: a meta-analysis. Neuroimage. 31:1453-1474.
Medendorp WP, Goltz HC, Crawford JD, Vilis T. 2005. Integration of target and effector information in human posterior parietal cortex for the planning of action. J Neurophysiol. 93:954-962.
Moore CI, Stern CE, Corkin S, Fischl B, Gray AC, Rosen BR, Dale AM. 2000. Segregation of somatosensory activation in the human rolandic cortex using fMRI. J Neurophysiol. 84:558-569.
Oldfield RC. 1971. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 9:97-113.
Perenin MT, Vighetto A. 1988. Optic ataxia: a specific disruption in visuomotor mechanisms. I. Different aspects of the deficit in reaching for objects. Brain. 111(Pt 3):643-674.
Picard N, Strick PL. 2001. Imaging the premotor areas. Curr Opin Neurobiol. 11:663-672.
Prado J, Clavagnier S, Otzenberger H, Scheiber C, Kennedy H, Perenin MT. 2005. Two cortical systems for reaching in central and peripheral vision. Neuron. 48:849-858.
Quian Quiroga R, Snyder LH, Batista AP, Cui H, Andersen RA. 2006. Movement intention is better predicted than attention in the posterior parietal cortex. J Neurosci. 26:3615-3620.
Rondot P, de Recondo J, Dumas JL. 1977. Visuomotor ataxia. Brain. 100:355-376.
Sakata H, Taira M, Kusunoki M, Murata A, Tsutsui K, Tanaka Y, Shein WN, Miyashita Y. 1999. Neural representation of three-dimensional features of manipulation objects with stereopsis. Exp Brain Res. 128:160-169.
Sakata H, Taira M, Murata A, Mine S. 1995. Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cereb Cortex. 5:429-438.
Scherberger H, Goodale MA, Andersen RA. 2003. Target selection for reaching and saccades share a similar behavioral reference frame in the macaque. J Neurophysiol. 89:1456-1466.
Schwartz AB, Kettner RE, Georgopoulos AP. 1988. Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. J Neurosci. 8:2913-2927.
Sereno MI, Pitzalis S, Martinez A. 2001. Mapping of contralateral space in retinotopic coordinates by a parietal cortical area in humans. Science. 294:1350-1354.
Shikata E, Hamzei F, Glauche V, Knab R, Dettmers C, Weiller C, Buchel C. 2001. Surface orientation discrimination activates caudal and anterior intraparietal sulcus in humans: an event-related fMRI study. J Neurophysiol. 85:1309-1314.
Shikata E, Hamzei F, Glauche V, Koch M, Weiller C, Binkofski F, Buchel C. 2003. Functional properties and interaction of the anterior and posterior intraparietal areas in humans. Eur J Neurosci. 17:1105-1110.
Shmuelof L, Zohary E. 2005. Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron. 47:457-470.
Shmuelof L, Zohary E. 2006. A mirror representation of others' actions in the human anterior parietal cortex. J Neurosci. 26:9736-9742.
Silver MA, Ress D, Heeger DJ. 2005. Topographic maps of visual spatial attention in human parietal cortex. J Neurophysiol. 94:1358-1371.
Snyder LH, Batista AP, Andersen RA. 1997. Coding of intention in the posterior parietal cortex. Nature. 386:167-170.
Tootell RB, Hadjikhani NK, Vanduffel W, Liu AK, Mendola JD, Sereno MI, Dale AM. 1998. Functional analysis of primary visual cortex (V1) in humans. Proc Natl Acad Sci USA. 95:811-817.
Tootell RB, Mendola JD, Hadjikhani NK, Ledden PJ, Liu AK, Reppas JB, Sereno MI, Dale AM. 1997. Functional analysis of V3A and related areas in human visual cortex. J Neurosci. 17:7060-7078.
Tsutsui K, Jiang M, Sakata H, Taira M. 2003. Short-term memory and perceptual decision for three-dimensional visual features in the caudal intraparietal sulcus (Area CIP). J Neurosci. 23:5486-5495.
Valyear KF, Culham JC, Sharif N, Westwood D, Goodale MA. 2006. A double dissociation between sensitivity to changes in object identity and object orientation in the ventral and dorsal visual streams: a human fMRI study. Neuropsychologia. 44:218-228.
Waldvogel D, van Gelderen P, Ishii K, Hallett M. 1999. The effect of movement amplitude on activation in functional magnetic resonance imaging studies. J Cereb Blood Flow Metab. 19:1209-1212.
Weinrich M, Wise SP. 1982. The premotor cortex of the monkey. J Neurosci. 2:1329-1345.
Wheaton LA, Hallett M. 2007. Ideomotor apraxia: a review. J Neurol Sci. 260:1-10.