Abstract

Many psychophysical studies suggest that target depth and direction are processed independently during reaching, but neurophysiological support for this view has so far been limited. Here, we investigated the representation of reach depth and direction by single neurons in area V6A of the medial posterior parietal cortex (PPC) of macaques while the animals performed a fixation-to-reach task in 3-dimensional (3D) space. We found that, in a substantial percentage of V6A neurons, depth and direction signals jointly influenced fixation, planning, and arm movement-related activity. Whereas target depth and direction were equally encoded during fixation, depth tuning became stronger during arm movement planning, execution, and target holding. The spatial tuning of fixation activity was often maintained across epochs, and depth tuning persisted across epochs more than directional tuning did. These findings support for the first time the existence of a common neural substrate for the encoding of target depth and direction during reaches in the PPC. The present results also highlight several types of V6A cells that process signals about eye position and arm movement planning and execution, independently or jointly, in order to control reaches in 3D space. A conceptual framework for the processing of depth and direction for reaching is proposed.

Introduction

In humans, the superior parietal lobule (SPL) is considered crucial for action in depth. Patients with lesions in the SPL show larger visuomotor deficits in depth than in the frontal plane, which result in a specific impairment of hand movements toward targets arranged at different distances from the body (Holmes and Horrax 1919; Baylis and Baylis 2001; Danckert et al. 2009). However, single-neuron activity in the SPL of nonhuman primates has been studied largely neglecting depth as a dimension for reaching. Most single-unit studies used center-out reaching tasks, with the initial hand and target positions lying on the same frontal plane (Snyder et al. 1997; Batista et al. 1999; Battaglia-Mayer et al. 2001; Buneo et al. 2002; Andersen and Cui 2009; Chang et al. 2009). Fewer works have involved depth by employing body-out reaching movements, with the arm moving from a position near the trunk to targets located on a single plane (Fattori et al. 2001, 2005; Bosco et al. 2010) or at different depths (Bhattacharyya et al. 2009; Ferraina et al. 2009). Nevertheless, these studies did not compare the influence of direction and depth signals on reach-related activity in single cells. This has been addressed only by Lacquaniti et al. (1995) in area 5/PE, where separate populations of neurons were found to encode the depth and the direction, respectively, of reaching targets. This finding was in agreement with several psychophysical studies (Gordon et al. 1994; Sainburg et al. 2003; Vindras et al. 2005; Bagesteiro et al. 2006; Van Pelt and Medendorp 2008), thus creating a widely held view that these 2 spatial parameters are processed independently (Crawford et al. 2011). The aim of our study was to investigate whether both depth and direction information for reaching are encoded in single SPL neurons, and to compare the depth and direction tuning of fixation, planning, and reaching epochs.

We studied the above issues in the medial posterior parietal area V6A of macaques (Galletti et al. 1999), where several types of neurons are involved in the various phases of visually guided reaches (Fattori et al. 2001, 2005). V6A contains neurons that encode the spatial location of visual targets (Galletti et al. 1993, 1995), neurons sensitive to the version and vergence angles of the eyes during fixation and saccades (Galletti et al. 1995; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012), and cells whose activity is modulated by the arm-reaching movement (Fattori et al. 2001, 2004, 2005) and by arm position in space (Breveglieri et al. 2002; Fattori et al. 2005). In the present work, single cells were recorded while the animals performed a reaching task to foveated targets located at different depths and directions in 3-dimensional (3D) space. We found that depth and direction signals jointly influence the activity of a large number of cells during reaches in 3D space. Our results demonstrate for the first time the convergence of depth and directional spatial information on single SPL neurons during 3D reaches. In addition, they show that several subpopulations of V6A cells are recruited as a fixation-to-reach task in 3D space progresses.

Materials and Methods

Experimental Procedures

Two male macaque monkeys (Macaca fascicularis) weighing 4.4 kg (Monkey A) and 3.8 kg (Monkey B) were used. Initially, the animals were habituated to sit in a primate chair and to interact with the experimenters. Then, a head-restraint system and the recording chamber were surgically implanted under general anesthesia (sodium thiopental, 8 mg/kg · h, intravenously) following the procedures reported by Galletti et al. (1995). A full program of postoperative analgesia (ketorolac tromethamine, 1 mg/kg intramuscularly immediately after surgery, and 1.6 mg/kg intramuscularly on the following days) and antibiotic care (Ritardomicina, benzatinic benzylpenicillin + dihydrostreptomycin + streptomycin, 1–1.4 mL/10 kg every 5–6 days) followed surgery. Experiments were performed in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 24 November 1986 (86/609/EEC) and that of 22 September 2010 (2010/63/EU). All experimental protocols were approved by the Bioethical Committee of the University of Bologna. During training and recording sessions, particular care was taken to avoid any behavioral or clinical sign of pain or distress.

Extracellular recording techniques and the procedures to reconstruct microelectrode penetrations were similar to those described in other reports (e.g. Galletti et al. 1996). Single-cell activity was recorded extracellularly from the anterior bank of the parieto-occipital sulcus. Area V6A was initially recognized on functional grounds following the criteria described in Galletti et al. (1999), and later confirmed following the cytoarchitectonic criteria of Luppino et al. (2005). We performed multiple electrode penetrations using a 5-channel multielectrode recording system (5-channel MiniMatrix, Thomas Recording). The electrode signals were amplified (gain 10 000) and filtered (bandpass 0.5–5 kHz). Action potentials in each channel were isolated with a waveform discriminator (Multi Spike Detector; Alpha Omega Engineering) and were sampled at 100 kHz. Histological reconstructions were performed following the procedures detailed in a recent publication from our lab (Gamberini et al. 2011). Briefly, electrode tracks and the approximate location of each recording site were reconstructed on histological sections of the brain on the basis of electrolytic lesions and several other cues, such as the coordinates of penetrations within the recording chamber, the type of cortical area passed through before reaching the region of interest, and the depths of the passage points between gray and white matter. All neurons of the present work were assigned to area V6A (Fig. 1).

Figure 1.

Anatomical location of V6A and of the recording sites. (Top) The location of area V6A in the parieto-occipital sulcus (POs) is shown in both the dorsal (left) and medial (right) view of a hemisphere reconstructed in 3D using the Caret software (http://brainmap.wustl.edu/caret/). Dashed contours (left) and white area (right) represent the borders and mesial extent, respectively, of V6A. (Bottom-left) A posterior view of a reconstructed hemisphere with the occipital lobe removed to show the extent of V6A (white area). (Bottom-right) Flattened map of area V6A showing the recording sites. Each circle represents a single site. White and gray circles represent sites where no modulated and at least one modulated neuron, respectively, was found. Cs: central sulcus; Cin: cingulate sulcus; Lat: lateral sulcus; STs: superior temporal sulcus; IPs: intraparietal sulcus; Ls: lunate sulcus; POs: parieto-occipital sulcus; Cal: calcarine sulcus; V6: area V6; V6Ad: dorsal V6A; V6Av: ventral V6A; MIP: medial intraparietal area; PEc: caudal area PE.


Behavioral Task

Electrophysiological signals were collected while the monkeys performed a fixation-to-reach task. The animals performed the arm movements with the limb contralateral to the recording hemisphere, with the head restrained, in darkness, while maintaining steady fixation of the target. Before starting the movement, the monkeys kept their arm on a button (home button [HB], 2.5 cm in diameter) located next to the trunk (Fig. 2A). Reaches were performed to 1 of 9 light-emitting diodes (LEDs, 6 mm in diameter). The LEDs were mounted on a panel at different distances and directions with respect to the eyes, always at eye level. As shown in Figure 2B, the target LEDs were arranged in 3 rows: 1 central, along the sagittal midline, and 2 lateral, at version angles of −15° and +15°, respectively. Along each row, 3 LEDs were located at vergence angles of 17.1°, 11.4°, and 6.9°. Given that the interocular distance for both animals was 30 mm, the nearest targets were located at 10 cm from the eyes, whereas the LEDs placed at intermediate and far positions were at distances of 15 and 25 cm, respectively. The range of vergence angles was selected in order to cover most of the peripersonal space in front of the animal, from the very near space (10 cm) up to the farthest distance reachable by the monkeys (25 cm).
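The correspondence between the vergence angles and the target distances given above follows from simple trigonometry. A minimal sketch (Python, for illustration only; the original analyses used Matlab) assuming a foveated midline target at eye level and symmetric binocular fixation:

```python
import math

def vergence_angle_deg(distance_cm, interocular_cm=3.0):
    """Vergence angle (deg) for a foveated midline target at eye level,
    given the viewing distance and the interocular separation (cm)."""
    return 2.0 * math.degrees(math.atan((interocular_cm / 2.0) / distance_cm))

# Target distances used in the task: near, intermediate, far
for d in (10, 15, 25):
    print(f"{d} cm -> {vergence_angle_deg(d):.1f} deg")
# -> 17.1, 11.4, and 6.9 deg, matching the three target rows
```

For lateral targets the exact angle differs slightly, since the two lines of sight are no longer symmetric about the midline.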

Figure 2.

Experimental set-up and task sequence. (A) Scheme of the set-up used for the reaching-in-depth task. Eye and hand movements were performed in darkness toward 1 of the 9 LEDs located at eye level at different depths and directions. (B) Top view of the set-up, indicating the vergence and version angles of the targets with respect to the eyes. (C) Time sequence of task events with LED and arm status, the vergence and version traces of the eyes, and the spike train of neural activity during a single trial. From left to right, vertical continuous lines indicate: trial start (HB press), target appearance (LEDon), go signal (GO), start of the arm movement period (M), beginning of the target-holding period (H), switching off of the LED (Redoff), and trial end (HB press). Long vertical dashed lines indicate the end of the saccade (left) and the start of the inward arm movement (right).


The time sequence of the task, with the LED and arm status and the vergence and version angles of the eyes, is shown in Figure 2C. A trial began when the monkey pressed the button near its chest (HB press). After 1000 ms, 1 of the 9 LEDs was switched on (LEDon). The monkey had to fixate the LED while keeping the HB pressed. It then had to wait 1000–1500 ms for a change in the color of the LED without performing any eye or arm movement. The color change was the go signal (GO) for the animal to release the HB and start an arm movement toward the target (M). The monkey then reached the target (H) and held its hand on it for 800–1200 ms. The switching off of the target (Redoff) cued the monkey to release the LED and return to the HB (HB press), which ended the trial and allowed the monkey to receive its reward. The presentation of stimuli and the animal's performance were monitored using custom software written in LabVIEW (National Instruments), as described previously (Kutz et al. 2005). Eye position signals were sampled at 100 Hz with 2 cameras (one per eye) of an infrared oculometer system (ISCAN) and were monitored through an electronic window (4° × 4°) centered on the fixation target. If the monkey fixated outside this window, the trial was aborted. The task was performed in darkness, in blocks of 90 randomized trials, 10 for each LED target. The luminance of the LEDs was adjusted to compensate for differences in retinal size between LEDs located at different distances. The background light was switched on briefly between blocks to avoid dark adaptation.

At the beginning of each recording session, the monkeys were required to perform a calibration task in which they fixated 10 LEDs mounted on a frontal panel at a distance of 15 cm from the eyes. For each eye, calibration signals were extracted during the fixation of 5 LEDs: 1 central, aligned with that eye's straight-ahead position, and 4 peripheral, placed at angles of ±15° (distance: 4 cm) along the horizontal and vertical axes. From the 2 individually calibrated eye position signals, we derived the mean of the 2 eyes (the conjugate or version signal) and the difference between the 2 eyes (the disconjugate or vergence signal) using the equations: version = (R + L)/2 and vergence = R − L, where R and L were the positions of the right and left eye, respectively.
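The version/vergence decomposition above can be expressed as a one-line helper. A sketch (Python, for illustration; function and variable names are ours):

```python
def version_vergence(right_eye_deg, left_eye_deg):
    """Conjugate (version) and disconjugate (vergence) components of gaze
    from the calibrated right- and left-eye positions (deg), as in the
    text: version = (R + L) / 2, vergence = R - L."""
    version = (right_eye_deg + left_eye_deg) / 2.0
    vergence = right_eye_deg - left_eye_deg
    return version, vergence
```

For example, eye positions of 10° (right) and 6° (left) yield a version of 8° and a vergence of 4°.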

Data Analysis

The effect of different target positions on neural activity was analyzed in different time periods of the task. The epochs taken into account for the analysis are indicated in the bottom part of Figure 2C. They were: (1) the early fixation epoch (FIX), from 50 ms after the end of the fixation saccade until 450 ms after it; (2) the preparation epoch (PLAN), the last 500 ms of fixation before the GO signal; (3) the reach epoch (REACH), from 200 ms before the start of the arm movement (M) until its end, signaled by the pressing of the LED target (H); and (4) the hold epoch (HOLD), from the pressing of the LED target (H) until the switching off of the target (Redoff), which cued the monkey to start the return movement toward the HB. The HOLD epoch lasted either 800 or 1200 ms, depending on the trial. Rasters of spiking activity were aligned on specific events of the task sequence, depending on the epoch analyzed. The effect of target depth and direction on activity was analyzed only in units with a mean firing rate higher than 3 spikes/s that were tested in at least 7 trials for each spatial position. The reasons for this conservative choice relate to the inherent high variability of biological responses and are explained in detail in Kutz et al. (2003).

Significant modulation of neural activity by target location was assessed using a 2-way analysis of variance (ANOVA) performed separately for each epoch, with target depth and direction as factors. Target depth was defined as the distance of the target from the animal (near, intermediate, and far), and target direction as its position with respect to the recording hemisphere (contralateral, central, and ipsilateral). Neurons were considered modulated by a given factor only when the factor's main effect was significant (P < 0.05). To determine whether the incidence of each main effect differed significantly between 2 epochs, a 2-proportion z-test (Zar 1999) was applied, as detailed in Fluet et al. (2010). To quantify the selectivity of neuronal activity in each epoch for depth and/or direction signals, we calculated an index termed eta squared (η2; Zar 1999) from the ANOVA results, using the formula η2 = SSeffect/SStotal, where SSeffect is the sum of squares of the main effect and SStotal the total sum of squares. We calculated this index for each of the 2 main effects (i.e., depth and direction) and for each of the 4 epochs of interest. To compare the index of the same cell in different epochs, confidence intervals on the η2 indices were estimated using a bootstrap test. Synthetic response profiles were created by drawing N firing rates (with replacement) from the N repetitions of experimentally determined firing rates, and η2 was recomputed from these N firing rates. Ten thousand iterations were performed, and confidence intervals were estimated as the range delimiting 95% of the computed indices (Batista et al. 2007).
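As an illustration of the η2 computation and its bootstrap confidence interval, the following sketch (Python/NumPy; hypothetical data) shows the one-factor case, whereas the paper computed SSeffect for each main effect of the 2-way ANOVA:

```python
import numpy as np

rng = np.random.default_rng(0)

def eta_squared(rates_by_level):
    """eta^2 = SS_effect / SS_total, where rates_by_level holds one array
    of trial firing rates per level of the factor (e.g. near/mid/far)."""
    all_rates = np.concatenate(rates_by_level)
    grand_mean = all_rates.mean()
    ss_total = ((all_rates - grand_mean) ** 2).sum()
    ss_effect = sum(len(r) * (r.mean() - grand_mean) ** 2
                    for r in rates_by_level)
    return ss_effect / ss_total

def eta_squared_ci(rates_by_level, n_iter=10_000, alpha=0.05):
    """Bootstrap CI: resample trials with replacement within each level,
    recompute eta^2, and take the central 95% of the resulting values."""
    boot = [eta_squared([rng.choice(r, size=len(r)) for r in rates_by_level])
            for _ in range(n_iter)]
    return tuple(np.percentile(boot, [100 * alpha / 2,
                                      100 * (1 - alpha / 2)]))
```

The index ranges from 0 (factor explains none of the firing-rate variance) to 1 (factor explains all of it); two epochs are then compared by checking whether their confidence intervals overlap.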

To analyze the spatial tuning of activity, a stepwise multilinear regression model was applied in each epoch considered. Regression methods quantify the relationship between dependent (neural activity) and independent (target depth and direction) variables. Given that the target was foveated in all epochs of interest, its depth and direction in space were represented in head-centered coordinates and were equal to the vergence and version angles of the eyes, respectively. We are aware that our experimental configuration cannot distinguish between eye- and head/body-centered frames of reference for target encoding. That being said, in the rest of the paper, when we refer to spatial tuning analyses and data, the terms depth and vergence, as well as direction and version, are used interchangeably.

In the multiple linear regression model relating the neural activity in the epochs of interest to the different target positions, we used this equation for the firing rate: 

$$A(X_i, Y_i) = b_0 + b_1 X_i + b_2 Y_i,$$
where A was the neural activity in spikes per second for the ith trial; Xi and Yi were the positions of the target, defined as the vergence and version angles, respectively, of the eyes during target fixation; b1 and b2 were regression coefficients, and b0 the intercept. After being tested for significance, the vergence and version coefficients were normalized by the standard deviations of vergence and version, respectively. The standardized coefficients allow a comparison among the independent variables and indicate their relative influence in the regression equation. In our study, this allowed us to compare the vergence and version coefficients and to account for the fact that the angle range differed between vergence and version. The regression coefficients were selected using a backward stepwise algorithm (Matlab function "stepwise") that determined whether the coefficients were significantly different from zero. At the conclusion of the stepwise algorithm, only the coefficients significantly different from zero at P < 0.05 remained. These coefficients were then used to determine the spatial preference only in the cells with a significant main effect (ANOVA, P < 0.05) in a given epoch. In modulated neurons without significant linear coefficients, a Bonferroni post hoc test (P < 0.05) was applied to define the preferred position.
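A minimal version of this fit, without the stepwise selection step, could look like the following (Python/NumPy sketch; the original analysis used the Matlab "stepwise" function, and here each slope is standardized by multiplying by the standard deviation of its regressor so that the vergence and version effects are on a comparable scale):

```python
import numpy as np

def fit_depth_direction(rates, vergence, version):
    """Least-squares fit of A(X, Y) = b0 + b1*X + b2*Y, where X is the
    vergence angle (depth) and Y the version angle (direction).
    Returns the raw coefficients and the slopes standardized by the
    spread of each regressor."""
    X = np.column_stack([np.ones(len(rates)), vergence, version])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, rates, rcond=None)
    return (b0, b1, b2), (b1 * vergence.std(), b2 * version.std())
```

In this scheme, the sign of a significant standardized vergence slope would classify a cell as near- or far-preferring, and the sign of the version slope as ipsi- or contralateral-preferring, as described below for the real data.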

Population-averaged spike density functions (SDFs) were generated for the cells modulated by target depth/direction in the epochs of interest. In every cell, an SDF was calculated for each trial (Gaussian kernel, half-width at half-maximum 40 ms) and averaged across all the trials of the preferred and the opposite depths and directions, as defined by the linear regression analysis. The peak discharge of the preferred condition was used to normalize the SDFs. Population SDF curves representing the activity of the preferred and opposite conditions were constructed by averaging the individual SDFs of the cells, aligned at the behavioral event of interest. In the cells with linear spatial tuning of movement activity (REACH), we calculated the response latency to movement execution for the preferred condition. The cell's response latency was defined as the mean latency over the 3 target positions of the preferred condition (near/far and ipsi/contra). For each position, we quantified the firing activity in the epoch PLAN as a baseline. To find the onset of the reach-related response, a sliding window (width = 20 ms, shifted in steps of 2 ms) was used to measure the activity starting from 200 ms before the movement start. For each window, the across-trial distribution of activity was compared with that of PLAN using a Student's t-test (P < 0.05). The onset of the response was determined as the time of the first of 5 consecutive bins (10 ms) for which the comparison was statistically significant (P < 0.05). This procedure, also used in a recent paper on V6A (Breveglieri et al. 2012), was adapted from an earlier work (Nakamura and Colby 2000).
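The latency-detection step can be sketched as follows (Python, assuming SciPy is available; the array layout and function name are ours, and the firing rates per window are assumed to be precomputed):

```python
import numpy as np
from scipy.stats import ttest_ind  # SciPy assumed available

def response_onset(window_rates, plan_rates, t0=-200, step=2,
                   n_consec=5, alpha=0.05):
    """Reach-response latency per the procedure in the text.
    window_rates: (n_trials, n_windows) firing rates in 20-ms windows
    stepped every `step` ms, starting `t0` ms before movement onset.
    plan_rates: (n_trials,) firing rates in the PLAN epoch.
    Returns the start time (ms) of the first of `n_consec` consecutive
    windows whose across-trial distribution differs from PLAN
    (t-test, P < alpha), or None if no such run is found."""
    run = 0
    for i in range(window_rates.shape[1]):
        p = ttest_ind(window_rates[:, i], plan_rates).pvalue
        run = run + 1 if p < alpha else 0
        if run == n_consec:
            return t0 + (i - n_consec + 1) * step
    return None
```

With a 2-ms step, 5 consecutive significant windows span the 10 ms mentioned in the text.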

All analyses were performed using custom scripts written in MATLAB (Mathworks, Natick, MA, USA).

Results

We recorded neuronal activity in V6A and identified 288 well-isolated, stable cells in 2 monkeys (Monkey A: 192; Monkey B: 96). The animals were required to execute reaches to foveated targets located at different directions and depths; target elevation was kept constant at eye level. Figure 3 illustrates 4 examples of modulated neurons, each tuned in several epochs by both depth and direction. The first neuron (Fig. 3A) was modulated by target depth in all epochs and preferred intermediate to far positions. The cell was also tuned for target direction during both fixation and arm movement planning, showing higher activity for contralateral positions. The second neuron (Fig. 3B) responded strongly during all epochs for targets located in near space. In the PLAN and REACH epochs, an additional preference for targets located in the contralateral space emerged. The third neuron (Fig. 3C) was modulated by target direction in all epochs and preferred ipsilateral positions. In addition, it showed a preference for near space only during PLAN and REACH. Finally, the fourth cell (Fig. 3D) was modulated by both depth and direction in the first 2 epochs, before arm movement execution, responding strongly for far positions and showing a small, though significant, preference for contralateral space. In the REACH and HOLD epochs, the effect of direction disappeared, while a strong depth tuning with a preference for targets located in near space emerged.

Figure 3.

Example neurons with depth and direction tuning in several epochs. (A–D) Spike histograms and version and vergence eye traces for the 9 target positions, arranged by 3 directions (columns) and 3 depths (rows). Vertical bars indicate the alignment of activity and eye traces at the start of fixation and at the start of arm movement. Realignment is indicated by a gap in the histograms and eye traces. (A, B, and D) Neurons showing depth tuning from fixation until the target-holding period. In A and D, modulation by direction occurred well before the arm movement and disappeared during it, whereas in B it occurred shortly before and was preserved during the arm movement. (C) Neuron modulated by target direction from fixation until the target-holding period, and by depth shortly before and during the arm movement. In A and B, the neurons showed the same depth preference before and after the arm movement, whereas in D the preference was inverted.


The examples in Figure 3 highlight the major characteristics of V6A cells during reaches in 3D space: the coexistence in single cells of modulations by both target direction and depth, and the fact that direction and depth effects can span all epochs or be present only early or late in the task.

Tuning for Depth and Direction in the Different Task Phases

To quantify the effects of depth and direction, a 2-way ANOVA was performed in each epoch. In total, 98% of the cells were modulated (P < 0.05) by at least 1 of the 2 factors in at least 1 epoch (94% for depth and 86% for direction). An interaction effect between the 2 factors was observed in 20–35% of the cells across epochs. However, very few neurons (3–4%) showed only an interaction effect, so only the main effects of depth and direction were considered further. As shown in Figure 4A, during FIX similar numbers of cells were modulated by depth only, by direction only, and by both signals. In the subsequent epochs, the percentage of cells modulated by depth only and by both signals slightly increased, whereas the incidence of tuning by direction only significantly decreased (2-proportion z-test, P < 0.05). As shown in Table 1, in epochs PLAN, REACH, and HOLD, the overall effects of target depth and direction were not equally represented: the effect of depth was 10–20% more frequent than that of direction. In all epochs, a substantial percentage of neurons was jointly sensitive to both depth and direction signals, and the number of such cells grew as the task progressed.
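The 2-proportion z-test used for these incidence comparisons (see Materials and Methods) can be sketched as follows (Python; an illustrative implementation using the pooled standard error and the normal approximation):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z-test for the difference between two proportions k1/n1 and k2/n2
    (e.g. the fraction of modulated cells in two epochs).
    Returns the z statistic and a two-sided p-value."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Example with the Table 1 counts: depth-modulated cells in FIX vs. HOLD
z, p = two_proportion_z(155, 288, 189, 288)
```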

Table 1

Number of neurons modulated for depth and direction in each epoch

Epoch   Depth                                 Direction
        ANOVA             Linear regression   ANOVA             Linear regression
FIX     155/288 (53.8%)   133/155 (85.8%)     143/288 (49.6%)   110/143 (76.9%)
PLAN    170/288 (59%)     141/170 (82.9%)     124/288 (43%)     100/124 (80.6%)
REACH   182/288 (63.2%)   154/182 (84.6%)     146/288 (50.7%)   115/146 (78.8%)
HOLD    189/288 (65.6%)   159/189 (84.1%)     140/288 (48.6%)   101/140 (72.1%)
Figure 4.

Incidence of depth and direction tuning and comparison of depth and direction selectivity across epochs in the population (n = 288) of V6A neurons. (A) Percentage of neurons with tuning for depth only (black), direction only (white), or both signals (gray) during several task epochs (fixation, planning, movement, and holding, ANOVA, P < 0.05). (B) Time course of depth (black) and direction (gray) selectivity calculated as mean ± standard error of the eta squared (η2) index for the population of neurons modulated in each epoch. Depth selectivity increased after the fixation epoch and reached a maximum at the movement epoch. Direction selectivity remained constant during task progress. Asterisks indicate a significant (Student's t-test, P < 0.05) difference between the average values of indexes. (C) Scatter plots of the η2 index in neurons modulated by depth (upper panels) and direction (lower panels) in pairs of adjacent epochs. Each point represents one neuron. Filled and empty circles indicate cells with η2 index that was significantly different (bootstrap test, 10 000 iterations, P < 0.05) or not, respectively, between 2 adjacent epochs. In single neurons, depth selectivity was enhanced during the REACH epoch.


To evaluate the time course of depth/direction selectivity across task epochs, we calculated the eta squared index (η2) as detailed in Materials and Methods. The η2 index measures the strength of the effect of the 2 factors on the firing rate. Figure 4B plots the average values of η2 for depth and direction in the neurons with a significant main effect of these variables in each epoch. Depth and direction selectivity did not differ significantly during FIX (Student's t-test, P > 0.05), whereas depth selectivity was significantly higher than direction selectivity in all the other epochs (Student's t-test, P < 0.05).

Figure 4C illustrates the selectivity for depth and direction in single cells modulated in pairs of temporally adjacent epochs (FIX–PLAN, PLAN–REACH, and REACH–HOLD). The η2 indices found in each epoch for depth (Fig. 4C, top) and direction (Fig. 4C, bottom) were used to plot single points, each representing one cell. Filled circles represent neurons with a significantly different index between 2 adjacent epochs (bootstrap test, 10 000 iterations, P < 0.05); empty circles indicate cells with similar selectivity (bootstrap test, 10 000 iterations, P > 0.05). Figure 4C confirmed, at the single-cell level, the population results of Figure 4B: depth selectivity was significantly higher in PLAN than in FIX, in REACH than in PLAN, and in REACH than in HOLD, whereas direction selectivity did not change significantly across epochs.

Spatial Tuning in the Different Task Phases

To quantify the spatial tuning of the neurons, a linear regression analysis was performed with the depth and direction of the target in space as independent variables. Since the target to be reached was always foveated, the depth and direction of the target in space could be defined in head/body-centered coordinates, that is, by the vergence and version angles of the eyes, respectively. The linear regression model was appropriate because few neurons displayed their maximal firing rates for intermediate and central positions; these positions were the least preferred in our population (10% of cells, Bonferroni post hoc). As shown in Table 1, most of the neurons that were significantly modulated by target depth and direction (ANOVA, P < 0.05) had discharges that were linearly correlated (P < 0.05) with vergence and version angles, respectively. In each neuron, the sign of the standardized linear correlation coefficients was used to determine the spatial preference in a certain epoch. Neurons with a significant linear vergence tuning were classified as near or far, whereas cells linearly tuned by version angle were classified as contralateral or ipsilateral, depending on both the sign of the linear version coefficient and the recording hemisphere.
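The regression-and-classification step can be illustrated with a small sketch: firing rate is fit as a linear function of vergence and version angles, and the spatial preference is read off the signs of the standardized slopes. The sign conventions assumed here (larger vergence for nearer targets, positive version for rightward gaze) and the omission of significance testing are simplifications for this example.

```python
import numpy as np

def classify_spatial_tuning(rates, vergence, version, hemisphere="left"):
    """Fit rate = b0 + b1*vergence + b2*version by least squares and
    label the preference from the signs of the standardized slopes.
    (Assumed conventions: larger vergence = nearer target; positive
    version = rightward gaze. Significance testing omitted.)"""
    rates = np.asarray(rates, float)
    X = np.column_stack([np.ones(len(rates)), vergence, version])
    beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
    b_verg = beta[1] * np.std(vergence) / np.std(rates)  # standardized
    b_vers = beta[2] * np.std(version) / np.std(rates)
    depth_pref = "near" if b_verg > 0 else "far"
    gaze_side = "right" if b_vers > 0 else "left"
    dir_pref = "ipsilateral" if gaze_side == hemisphere else "contralateral"
    return depth_pref, dir_pref

# Hypothetical cell recorded in the left hemisphere, firing more for
# near and leftward (ipsilateral) targets
verg = [10, 10, 15, 15, 20, 20]      # vergence angle (deg)
vers = [-10, 10, -10, 10, -10, 10]   # version angle (deg)
rates = [2 * v - w for v, w in zip(verg, vers)]
print(classify_spatial_tuning(rates, verg, vers, "left"))  # prints ('near', 'ipsilateral')
```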

The percentage of cells falling into the above groups in each epoch is illustrated in Figure 5A. Neurons tuned for “far” reachable space were more numerous than those tuned for “near” reachable space (Fig. 5A, top). The difference was statistically significant in all epochs apart from REACH (χ2, P < 0.05 in FIX, PLAN, and HOLD). Regarding the directional tuning (Fig. 5A, middle), contralateral neurons were more numerous than ipsilateral ones in all epochs, but the difference was never statistically significant (χ2, P > 0.05). The bottom part of Figure 5A shows that near and far cells were similarly tuned for contralateral and ipsilateral spaces (2-way χ2, P > 0.05). In summary, the analysis of spatial tuning showed that the distributions of spatial preference within the tested reachable space were quite similar across epochs.

Figure 5.

Spatial tuning in single epochs and tuning consistency across epochs. (A) Top: Percentage of neurons linearly modulated by depth that preferred far (dark gray) and near (medium gray) space in each epoch. Middle: Percentage of the neurons linearly modulated by direction that preferred contralateral (medium gray) and ipsilateral (light gray) space in each epoch. Bottom: Percentage of the neurons belonging to the combination of classes in cells linearly modulated by both depth and direction. Asterisks indicate a statistically significant (χ2, P < 0.05) spatial preference. (B) Percentages of cells that showed the same (white) or different (black) tuning, and those that lost (light gray) or only later acquired (medium gray) their tuning in depth (left) and direction (right) in pairs of consecutive epochs during the task progress.

We then asked whether the constancy of the distribution of preferred depths and directions across the task resulted from a single group of neurons being active, or whether different subpopulations of cells were spatially tuned in each epoch. For this purpose, we quantified the cells that preserved, changed, lost, or acquired their spatial tuning from one epoch to the next. The results of this analysis are shown in Figure 5B. The spatial tuning of depth modulations (Fig. 5B, left) was remarkably consistent across epochs (40–50% of the cases), and the consistency of spatial tuning increased as the task progressed through the epochs. In only 3% of neurons did the spatial tuning change as the task progressed. The coefficients of determination (R2) were calculated to measure how well the coefficients of one epoch predict those of the next. The depth coefficients were strongly correlated, with highly significant R2 values (range 0.73–0.77, P < 0.0001). It is interesting to note that these values were quite constant across the epoch comparisons, demonstrating an equal strength of depth tuning consistency as the task progressed.

As illustrated in the right part of Figure 5B, the direction tuning was less consistent across epochs than the depth tuning (<30% of the cases), without significant changes as the task progressed. In about 35% of cases, the directional tuning was lost and, in another 35% of cases, it emerged only in the later epoch of each pair. As a result, the subpopulation of cells tuned in direction in a certain epoch was, in large part, different from that recruited in the next one. The version coefficients of adjacent epochs were strongly correlated, as for vergence, with highly significant R2 values (range 0.56–0.86, P < 0.0001). In contrast to what was observed for depth tuning, these values were more variable across epoch comparisons, with the PLAN/REACH pair showing the highest and the FIX/PLAN pair the lowest R2 value. In other words, the spatial tuning that appeared early in the task exerted a strong influence on the spatial tuning of activity in the later epochs. However, a considerable number of neurons lost their spatial tuning, and different subpopulations of spatially tuned cells became active during planning, arm movement execution, and holding of the static arm position. This latter finding, together with the similar one relative to the depth tuning, suggests that additional spatial information (other than eye position) became available for the tuning of activity in PLAN, REACH, and HOLD epochs.

To characterize the sources of additional spatial input during movement execution, we determined the response latency of the neurons linearly modulated in REACH. Latency was measured as the time at which REACH activity became significantly higher than PLAN activity (see Materials and Methods). The mean latency of reaching responses (n = 143) was 41.6 ± 155 (standard deviation) ms after movement onset. In 74 (52%) cells, the response started before movement onset, whereas in 69 (48%) neurons the response started after movement onset. The first group of cells is likely activated by a corollary discharge from the premotor cortex (Matelli et al. 1998; Shipp et al. 1998; Gamberini et al. 2009), whereas the second could reflect proprioceptive and tactile signals from the arm that are known to affect an important fraction of V6A neurons (Breveglieri et al. 2002). To test whether the onset of depth reaching responses differed from that of direction responses, mean latencies were calculated separately for the preferred depth and direction. Neurons with depth modulations (n = 113) had a mean latency of 24.6 ± 148.8 ms; directionally tuned cells (n = 78) had a mean latency of 61.6 ± 163.6 ms. The 2 latency distributions were not statistically different (Kolmogorov–Smirnov test, P > 0.05 and Wilcoxon signed-rank test, P > 0.05). This suggests that depth and direction signals affect the reaching-related activity with a similar time course.
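As an illustration of this kind of latency measurement, the sketch below finds the first time bin at which the trial-averaged rate stays above a PLAN-derived threshold for a few consecutive bins. The threshold rule used here (mean + 2 SD of PLAN activity, 3 consecutive bins) is a generic assumption for illustration, not the study's actual criterion, which is detailed in its Materials and Methods.

```python
import numpy as np

def reach_latency(rates, t, plan_window, n_consec=3, z=2.0):
    """First time at which activity exceeds the PLAN baseline
    (mean + z * SD) for n_consec consecutive bins.

    rates: trial-averaged firing rate per time bin
    t: bin times (ms), with 0 = movement onset
    plan_window: (start, end) of the PLAN epoch in the same units
    Returns the latency in ms, or None if the criterion is never met.
    """
    rates, t = np.asarray(rates, float), np.asarray(t, float)
    in_plan = (t >= plan_window[0]) & (t < plan_window[1])
    thresh = rates[in_plan].mean() + z * rates[in_plan].std()
    run = 0
    for i, above in enumerate(rates > thresh):
        run = run + 1 if above else 0
        if run == n_consec:
            return t[i - n_consec + 1]  # start of the suprathreshold run
    return None

# Hypothetical cell whose rate steps up exactly at movement onset
t = np.arange(-500, 500, 20.0)
rates = np.where(t < 0, 10.0, 30.0)
print(reach_latency(rates, t, (-500, 0)))  # prints 0.0
```

A negative returned latency would correspond to the cells whose response started before movement onset, a positive one to those whose response started after it.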

Cell Categories

We divided the V6A cells reported in this study into 3 main categories based on the presence of modulation in epochs FIX and REACH. Neurons were classified as “FIX cells” when they showed spatial tuning in FIX, but not in REACH, “REACH cells” when the opposite occurred, and “FIX–REACH cells” when the neurons were modulated in both epochs. The percentages of cells ascribed to each category are reported in Figure 6. Cells modulated by depth (Fig. 6, top) were most frequently represented in the FIX–REACH category (χ2, P < 0.05), whereas neurons modulated by direction were evenly distributed among the 3 categories (Fig. 6, bottom). We then calculated the percentage of cells modulated in PLAN or HOLD that fell into each of these categories (Fig. 6, middle and right panels). Exact numbers are reported in the figure legend. Both PLAN and HOLD cells belonged mostly to the FIX–REACH category, whereas a minority of them did not fall into any of the 3 main categories (∼15% for depth and ∼20% for direction). Overall, the above analysis confirmed the coincidence of modulations between the epochs and revealed the existence of distinct categories of cells.

Figure 6.

Depth and direction tuning in subpopulations of V6A cells. Left: Percentage of neurons with modulations by depth (upper left panel) and direction (lower left panel) present in both FIX and REACH epochs (“FIX–REACH cells”, dark gray), in FIX but not in REACH (“FIX cells”, black), and vice versa (“REACH cells”, light gray), or in none of them (white). Middle and right: Percentage of neurons modulated by depth (upper panels) and direction (lower panels) in both PLAN (central panels) and HOLD (right panels) epochs that fell into the coincident (dark gray) or mutually exclusive (black and light gray) FIX–REACH modulations. For depth tuning (top, middle, and right panels), both PLAN and HOLD cells were most often classified as FIX–REACH cells (82 cells in both PLAN and HOLD, 48% and 43%, respectively, χ2, P < 0.05), whereas the FIX cells category contained the fewest neurons (PLAN: 26 cells/15%; HOLD: 22 cells/12%). For the cells modulated by direction (bottom, middle, and right panels), a similar clustering of PLAN and HOLD cells into the 3 categories was found, with a clear prevalence of the FIX–REACH cell type (χ2, P < 0.05).

Population SDFs allowed us to investigate the temporal pattern of activity in the 3 main categories of cells. The population SDFs were constructed by averaging the single-neuron SDFs for the preferred and opposite conditions. Figure 7 illustrates the average population activity of each category of cells for depth (left panels) and direction (right panels) modulations. In FIX cells, the activity in the preferred and opposite conditions started to diverge about 200 ms before fixation onset, around the time of the fixation saccade. The activity for nonpreferred depths and directions then decreased rapidly to or below the baseline level, while that for the preferred depths and directions remained high during the first part of fixation (FIX) and decreased more slowly for depth than for direction (Fig. 7, top left and right panels, respectively). At the population level, both depth- and directionally tuned FIX cells showed arm movement-related responses that were not significantly different in the preferred and opposite conditions.
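A population SDF of the kind shown in Figure 7 can be built, in outline, by normalizing each cell's SDF and then averaging across cells. Peak normalization is our assumption here for the "100% of normalized activity" scale; the actual normalization used in the study may differ.

```python
import numpy as np

def population_sdf(cell_sdfs):
    """Average normalized SDF across cells.

    cell_sdfs: 2-D array, one row per cell, one column per time bin.
    Each cell is normalized to its own peak (an assumed convention),
    then the mean and standard error are taken across cells.
    """
    m = np.asarray(cell_sdfs, float)
    norm = m / m.max(axis=1, keepdims=True)  # peak-normalize each cell
    mean = norm.mean(axis=0)
    sem = norm.std(axis=0, ddof=1) / np.sqrt(norm.shape[0])
    return mean, sem

# Two hypothetical cells with identical shape but different gain
mean, sem = population_sdf([[1.0, 2.0, 4.0], [2.0, 4.0, 8.0]])
```

Normalizing each cell before averaging prevents high-firing cells from dominating the population curve, so the solid and dashed lines in Figure 7-style plots reflect the shared temporal profile rather than absolute rates.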

Figure 7.

Population average activity in the main categories of V6A cells. Average normalized SDFs for each defined subpopulation. Top/middle/bottom: Population activity represented as SDF of FIX/REACH/FIX–REACH cells modulated by depth (left) and direction (right) doubly aligned at the beginning of fixation and at movement onset. For each cell category and type of modulation, the average SDFs for the preferred (black) and opposite (gray) conditions are plotted. Solid and dashed curves represent the population average and standard errors, respectively. Scale bar in all SDF plots: 100% of normalized activity. Gray-shaded areas indicate the time span of the 4 task epochs.

It is worth noting that the activity in the preferred depth, but not in the preferred direction, was higher than that in the nonpreferred condition during the whole fixation period, including the PLAN epoch before the reaching movement. The difference in the temporal evolution of activity between the FIX cells modulated in depth and those directionally tuned is in line with a recent study from our lab that showed a more tonic effect of vergence with respect to version (Breveglieri et al. 2012). It also agrees with the higher consistency of spatial tuning between FIX and PLAN in the depth dimension that we observed in the present study (Fig. 5).

In REACH cells, the time course of modulation was very similar for depth and direction tuning (Fig. 7, middle panels). The population activity for the preferred and opposite conditions diverged well after fixation onset. In the preferred condition, the population activity remained stable during fixation, whereas in the nonpreferred one it progressively decreased until the go signal for reaching. After the go signal, the activity increased (much more in the preferred condition), reaching its peak at movement onset. The difference in activity between the preferred and opposite conditions was evident during the whole movement period and during most of the HOLD period.

FIX–REACH neurons (Fig. 7, bottom panels) showed a temporal pattern of activity that combined those of the FIX and REACH cell categories. In this category of cells, the preferred condition was defined based on the activity in the REACH epoch, which was congruent in the vast majority of cases (>90%) with the preferred condition in FIX. FIX–REACH cells displayed 2 peaks of activity, one around fixation onset and another around arm movement onset. Between these 2 events, the curves representing the preferred and opposite conditions were well separated. The time course of population activities was similar for depth (Fig. 7, bottom-left panel) and direction (Fig. 7, bottom-right panel) modulations. Interestingly, the activity in the preferred conditions (both for depth and direction) decreased during fixation as in FIX neurons, but then increased in the last part of fixation (PLAN), just before movement onset, as occurred in REACH neurons. In summary, V6A neurons were found to encode different types of signals during a reach-in-depth task. Eye position and arm movement-related information influenced the activity of V6A neurons either independently (FIX cells and REACH cells) or jointly (FIX–REACH cells). Regarding the anatomical distribution in V6A of the above types of cells, the histological reconstruction of the recording sites did not show any segregation of the main cell categories.

Discussion

The major goals of the present study were to investigate the coexistence in the same cells, as well as the incidence and temporal evolution, of depth and direction tuning of V6A activity during eye–hand coordinated movements in 3D space. We found in V6A an extensive convergence of target depth and direction signals on single neurons during reaches. In addition, the influence of depth signals was somewhat stronger than that of direction signals during planning and execution of reaches, and during holding of the targets. In many cells, spatial modulations of activity occurred in multiple epochs, from fixation through reach planning and execution, until the holding period. An important fraction of V6A neurons maintained their spatial preference over the time course of the task, whereas a few cells changed it. This consistency was more frequent in the neurons tuned in depth than in those tuned in direction. Below, we discuss the implications of our major findings for the encoding of arm reaching in 3D space, and for the sensory-to-motor transformations underlying eye–hand coordinated movements.

Encoding of Target Depth and Direction During Reaches

The finding that during reaching a significant number of neurons are modulated by both target depth and direction is in contrast with the general view that the depth and direction of reaching targets are processed independently (Flanders et al. 1992; Crawford et al. 2011). The view of separate pathways for depth and direction is based on several behavioral studies (Soechting and Flanders 1989; Flanders and Soechting 1990; Gordon et al. 1994; Sainburg et al. 2003; Vindras et al. 2005; Bagesteiro et al. 2006; Van Pelt and Medendorp 2008), but the neurophysiological support for it is relatively weak. To our knowledge, only Lacquaniti et al. (1995), who studied this issue in the macaque posterior parietal cortex (PPC), reported distinct subpopulations of neurons in area PE representing the distance, azimuth, and elevation of target location. In contrast, in the premotor cortex, most studies found a convergence of distance and directional information on single neurons (Fu et al. 1993, 1995; Kurata 1993; Messier and Kalaska 2000). A conceptual framework of how the parietal and frontal networks might code reach direction and distance is presented below.

Figure 8 shows a network of areas where reach depth and direction could be encoded. Visual signals and eye position information interact in the nodes of this network with somatosensory signals related to the arm position, to generate the motor output. Visual input arising from the striate and extrastriate cortex enters the network at the level of the PPC, in area V6A (Galletti et al. 2001; Passarelli et al. 2011) and in the medial intraparietal area (MIP; Colby and Duhamel 1991). The direction of a reaching target can be precisely calculated from its retinal location, whereas to define its distance the brain must also use signals related to the vergence angle of the eyes, binocular disparity, and monocular depth cues. Vergence angle information has been shown to influence the activity of many neurons in V6A (Breveglieri et al. 2012) and in the parietal reach region (PRR; Bhattacharyya et al. 2009), which mostly includes parts of MIP and caudal area PE (PEc). In addition to their tuning by vergence, a substantial fraction of PRR neurons were found to be modulated also by retinal disparity (Bhattacharyya et al. 2009). This integration of vergence and disparity signals, also reported in the lateral intraparietal area in the lateral bank of the intraparietal sulcus (Gnadt and Mays 1995; Genovesio and Ferraina 2004), is sufficient to encode the egocentric distance of visual targets.

Figure 8.

Distance and direction coding in the cortical reach-related areas. Areas are depicted in different grayscale gradients according to the relative proportion of visual (white) and proprioceptive (black) information they receive. Areas receiving predominantly visual input tend to process jointly target distance and direction information, whereas those that receive mainly somatosensory input are more likely to represent spatial parameters separately and show greater sensitivity for depth encoding. V1: primary visual cortex; V2: area V2; V3: area V3; V6: area V6; MT: middle temporal area; MST: medial superior temporal area; S1: primary somatosensory cortex; S2: secondary somatosensory area; 3b: area 3b. For other abbreviations see text.

For foveated targets, vergence angle is the most important signal that the reach-related areas could use to calculate the reach amplitude (Foley 1980; Melmoth et al. 2007). Moreover, as shown in the present study (FIX cells) and in Breveglieri et al. (2012), many V6A cells are modulated by both the vergence and version angles, so they are able to encode the 3D spatial coordinates of a foveated reaching target. A similar convergence of version and vergence signals, though not yet tested, could also occur in areas MIP and PEc, which are known to receive visual and eye position input (Johnson et al. 1996; Eskandar and Assad 1999; Battaglia-Mayer et al. 2001; Breveglieri et al. 2008; Bakola et al. 2010). All the above areas could send the 3D spatial information about target location to the dorsal premotor cortex (PMd), with which they are strongly connected (Tanne et al. 1995; Matelli et al. 1998; Shipp et al. 1998; Tanne-Gariepy et al. 2002; Gamberini et al. 2009), and from PMd this information could then be transmitted to the primary motor cortex.

The other major contribution to the reach circuit concerns the proprioceptive information about the hand position. This input arises from the areas of the somatosensory cortex and enters the circuit mainly at the level of area PE. In the primary somatosensory area (S1), neurons were reported to be more sensitive to movement amplitude than to direction (Tillery et al. 1996). In PE, the neurons modulated by target distance were twice as many as those modulated by azimuth and elevation (Lacquaniti et al. 1995), and the incidence of depth-modulated neurons increased going from target presentation to movement execution in a reach-in-depth task (Ferraina et al. 2009). These findings provide neurophysiological support to behavioral data arguing that, in depth, proprioception is more reliable than vision (Flanders et al. 1992; van Beers et al. 1998, 2002; Sainburg et al. 2003; Monaco et al. 2010). Furthermore, PE is strongly and reciprocally connected with the primary motor cortex (Jones et al. 1978; Strick and Kim 1978; Johnson et al. 1996; Bakola et al. 2012), where there is a representation bias for reaches in depth (Naselaris et al. 2006): during the execution of arm movements in 3D space, the preferred direction of the majority of neurons was aligned with the depth axis. This also suggests that the reach-related areas that mainly receive proprioceptive information are more specialized for the control of reaches in depth.

The proprioceptive signals are also sent to V6A, MIP, and PEc, where they can be combined with visual and vergence angle-related signals, giving rise to neurons that jointly process information on depth and direction. Area PE, on the contrary, does not receive visual information (Johnson et al. 1996; Bakola et al. 2012), and vergence angle influences the reaching activity of only a small fraction of PE cells (Ferraina et al. 2009). This could explain why in PE depth and direction signals are represented by distinct neuronal populations (Lacquaniti et al. 1995).

The PMd encodes a representation of both movement- and sensory-related information (Alexander and Crutcher 1990; Johnson et al. 1996; Shen and Alexander 1997; Matelli et al. 1998). Fu et al. (1993) demonstrated that movement direction, distance, and target location are encoded in overlapping neuronal populations in PMd. In line with this evidence, Messier and Kalaska (2000) found that the majority of PMd cells encoded both movement distance and direction. Interestingly, both these studies reported that directional information was specified earlier in the task, that is, during the target cue or movement planning period, whereas movement distance exerted its effect mostly during movement execution. The present study reveals an increase in V6A of the number of depth-modulated neurons and an enhanced depth sensitivity as the task progressed. However, differently from the results in PMd, in V6A the direction and depth effects were comparable in the early stages of the task. It could be that signals about the retinotopic target location (i.e., target direction) are transmitted directly to PMd cells without interacting with vergence signals, in order to first specify the movement direction, which is more crucial in the initial stages of movement preparation and execution. Overall, the above-mentioned neurophysiological studies, as well as a recent functional magnetic resonance imaging study in humans (Fabbri et al. 2012), suggest that the processing of distance and direction information is more independent in PMd than in visuomotor parietal areas like V6A. This difference could be advantageous for the online control of arm movement, when the parietal and frontal reach nodes must interact more closely (Wise et al. 1997; Rizzolatti and Matelli 2003). Changes in the motor plan, as when the target's location is perturbed, are first reflected in PMd activity (Archambault et al. 2011) and then relayed to the parietal areas, presumably via top-down processing. Thus, the updating of movement plans highlights the importance of feedback mechanisms in the encoding of reach direction and depth.

Localizing and moving toward targets in depth are computationally much more demanding and require a greater degree of control (Danckert et al. 2009). In several studies where arm movements in 3D were performed, the variability of endpoints was found to be larger along the depth axis, where visual uncertainty is higher (Gordon et al. 1994; McIntyre et al. 1997; Apker et al. 2010). A way to achieve better motor control in depth could be to recruit neurons receiving inputs other than visual ones (e.g., proprioception, efference copy). SPL neurons do receive these signals and are presumably well suited for controlling reaching, especially in depth, as suggested by data from neurological patients. A larger impairment of depth processing after damage to the SPL was reported almost 100 years ago in a human case study (Holmes and Horrax 1919), and it was recently confirmed that patients with lesions in the SPL show a stronger deficit in depth than in direction during reaching (Baylis and Baylis 2001) and pointing (Danckert et al. 2009) movements.

In the present study, reaches were always performed toward foveated targets, so we cannot determine the frame of reference of the reach-related activity. The issue of reference frames was beyond the scope of the present work. In our study, depth information was more strongly represented than direction information. This difference could be attributed to the fact that, in our experimental set-up, the explored depth range included most of the possible depths, whereas the tested directions comprised a smaller fraction of directional space. Although we cannot exclude this explanation, there are several lines of evidence that argue against it. For instance, we did not find the influence of depth to be stronger than that of direction in the fixation epoch. In addition, although the range of explored directions is small (30°), it comprises most of the space where the gaze and hands interact with objects in everyday life. It is worth noting that shifts of gaze >15–20° are always accompanied by head movements (Freedman and Sparks 1997). Another factor that could explain the stronger depth effect is the difference in retinal size between targets located at different depths. However, if the stronger depth effect were due to the difference in retinal stimulation between near and far LEDs, we would expect it to be more pronounced in the early phases of the task, especially shortly after the target was fixated (FIX epoch), but this was not the case. In addition, as reported in Galletti et al. (1999), a substantial fraction of V6A cells are not visually responsive, so in those cases the different retinal stimulation cannot account for the stronger effect of depth. Overall, our view is that the limitations listed above do not bear on our major finding that the depth and direction of reach targets are jointly processed in V6A.

Spatial Tuning in the Different Task Phases

In the present study, we reproduced the naturalistic conditions of eye–hand coordination in reaching, where the eyes fixate the target before the arm movement begins (Neggers and Bekkering 2001; Hayhoe et al. 2003). Shortly after target fixation, about 70% of V6A neurons were modulated by target depth and/or direction in space. These modulations likely reflect gaze-related activity, and agree well with the previously demonstrated sensitivity of many V6A neurons to eye position, both in a frontoparallel plane (Galletti et al. 1995; Nakamura et al. 1999) and in depth (Hadjidimitrakis et al. 2011; Breveglieri et al. 2012). The spatial tuning of neural activity during target fixation could represent the target location in 3D space, essential information for controlling the reaching action.

During the planning epoch, the monkey continued to fixate the target while waiting for the go signal to start the arm movement. Activity during this period is likely affected by both eye position and attentional signals, both reported to be present in V6A (Galletti et al. 1995, 2010). In addition, PLAN activity could be related to the programming of the arm movement, one of the key functions of the PPC (Snyder et al. 1997; Andersen and Buneo 2002). The increase in the selectivity of depth encoding that we found in this epoch probably reflects motor preparation processing, but more studies are needed in this regard.

The neural activity during the REACH epoch is expected to have many different contributions. During movement execution, V6A neurons could receive visual information about target position, proprioceptive and somatosensory inputs about hand and arm positions, and an efference copy of the arm motor command (Matelli et al. 1998; Breveglieri et al. 2002; Fattori et al. 2005; Gamberini et al. 2009; Bosco et al. 2010). Furthermore, the latency analysis revealed that the responses of the neurons modulated in REACH could depend on hand sensory signals and certainly depend on motor signals. Behavioral evidence suggests that target and hand somatomotor information is continuously compared during movement execution (Pélisson et al. 1986). In our body-out reaching task, the monkeys performed the arm movement in darkness, that is, without seeing the arm, so they relied on somatomotor information, but not on visual feedback. As mentioned in a previous section of the Discussion, movement in depth depends more on proprioception, and this agrees well with the fact that V6A cells showed maximal depth selectivity during the movement period. The tuning of neural activity in the holding phase (HOLD) could be mainly attributed to proprioceptive and somatosensory inputs related to the static arm position in 3D space (Fattori et al. 2001; Breveglieri et al. 2002). It is interesting to note that the HOLD period was characterized by the highest incidence of modulation in depth.

It has been suggested that neurons in the PPC integrate spatially consistent retinal, eye, and hand information into "global tuning fields", and that this type of neural processing could be the substrate for eye–hand coordination (Battaglia-Mayer et al. 2001). Evidence of neurons with consistent spatial tuning across eye position, arm movement, and arm position-related activities was reported in the SPL, in particular in area PEc (Battaglia-Mayer et al. 2001). In that area, about 60% of neurons were found to have global tuning in direction across several epochs and tasks, whereas, in the inferior parietal lobule area 7a, a smaller incidence (∼25%) of such cells was reported (Battaglia-Mayer et al. 2005). A recent study employing a body-out reaching task reported that approximately 60% of 7a neurons changed or lost their directional tuning from fixation to preparation and movement execution (Heider et al. 2010). In the present work, we found a similar incidence (∼30%) of directional tuning consistency in area V6A, but we also found that tuning consistency was more frequent in depth than in direction, occurring in depth in about 50% of V6A neurons. This reflects, we believe, the greater difficulty of reaching objects located along the same line of sight but at different depths, compared with objects located along different lines of sight.

Based on the coincidence of modulations during the fixation and reaching epochs, we identified 3 main categories of V6A neurons that are activated during foveated reaches in 3D space, that is, the "FIX", "REACH", and "FIX–REACH" cells. This classification scheme might seem arbitrary, since our population of cells was characterized by a continuum of multiple modulations across the epochs studied. However, our intention was to emphasize the 2 crucial phases of the visuomotor transformation underlying the control of visually guided reaches, namely target localization and the encoding of the arm movement. FIX cells process target position through gaze signals, whereas REACH cells could encode the parameters of the arm movement regardless of the spatial location of the target. FIX–REACH cells integrate multiple signals about target location and arm movement. The ability to process eye- and arm-related signals independently (FIX cells and REACH cells) or in combination (FIX–REACH cells) highlights the key role of area V6A in transforming, also in the depth domain, the spatial information about the target into a motor command to reach it.

Funding

This work was supported by EU FP7-IST-217077-EYESHOTS, by the Ministero dell'Università e della Ricerca (Italy), and by the Fondazione del Monte di Bologna e Ravenna (Italy).

Notes

We thank Dr N. Marzocchi and G. Placenti for setting up the experimental apparatus, Dr L. Passarelli and Dr M. Gamberini for valuable assistance in the reconstruction of the penetrations, G. Dal Bo' for help during recordings, and A. Caroselli for help with Matlab. Conflict of Interest: None declared.

References

Alexander GE, Crutcher MD. 1990. Neural representations of the target (goal) of visually guided movements in three motor areas of the monkey. J Neurophysiol. 64:164–178.

Andersen RA, Buneo CA. 2002. Intentional maps in posterior parietal cortex. Annu Rev Neurosci. 25:189–220.

Andersen RA, Cui H. 2009. Intention, action planning, and decision making in parietal-frontal circuits. Neuron. 63:568–583.

Apker GA, Darling TK, Buneo CA. 2010. Interacting noise sources shape patterns of arm movement variability in three-dimensional space. J Neurophysiol. 104:2654–2666.

Archambault PS, Ferrari-Toniolo S, Battaglia-Mayer A. 2011. Online control of hand trajectory and evolution of motor intention in the parietofrontal system. J Neurosci. 31:742–752.

Bagesteiro L, Sarlegna F, Sainburg R. 2006. Differential influence of vision and proprioception on control of movement distance. Exp Brain Res. 171:358–370.

Bakola S, Gamberini M, Passarelli L, Fattori P, Galletti C. 2010. Cortical connections of parietal field PEc in the macaque: linking vision and somatic sensation for the control of limb action. Cereb Cortex. 20:2592–2604.

Bakola S, Gamberini M, Passarelli L, Fattori P, Galletti C. 2012. Cortical input to posterior parietal area PE in the macaque monkey. Australian Neuroscience Society Meeting; Gold Coast, Queensland, Australia.

Batista AP, Buneo CA, Snyder LH, Andersen RA. 1999. Reach plans in eye-centered coordinates. Science. 285:257–260.

Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. 2007. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol. 98:966–983.

Battaglia-Mayer A, Ferraina S, Genovesio A, Marconi B, Squatrito S, Molinari M, Lacquaniti F, Caminiti R. 2001. Eye-hand coordination during reaching. II. An analysis of the relationships between visuomanual signals in parietal cortex and parieto-frontal association projections. Cereb Cortex. 11:528–544.

Battaglia-Mayer A, Mascaro M, Brunamonti E, Caminiti R. 2005. The over-representation of contralateral space in parietal cortex: a positive image of directional motor components of neglect? Cereb Cortex. 15:514–525.

Baylis GC, Baylis LL. 2001. Visually misguided reaching in Balint's syndrome. Neuropsychologia. 39:865–875.

Bhattacharyya R, Musallam S, Andersen RA. 2009. Parietal reach region encodes reach depth using retinal disparity and vergence angle signals. J Neurophysiol. 102:805–816.

Bosco A, Breveglieri R, Chinellato E, Galletti C, Fattori P. 2010. Reaching activity in the medial posterior parietal cortex of monkeys is modulated by visual feedback. J Neurosci. 30:14773–14785.

Breveglieri R, Galletti C, Monaco S, Fattori P. 2008. Visual, somatosensory, and bimodal activities in the macaque parietal area PEc. Cereb Cortex. 18:806–816.

Breveglieri R, Hadjidimitrakis K, Bosco A, Sabatini SP, Galletti C, Fattori P. 2012. Eye position encoding in three-dimensional space: integration of version and vergence signals in the medial posterior parietal cortex. J Neurosci. 32:159–169.

Breveglieri R, Kutz DF, Fattori P, Gamberini M, Galletti C. 2002. Somatosensory cells in the parieto-occipital area V6A of the macaque. Neuroreport. 13:2113–2116.

Buneo CA, Jarvis MR, Batista AP, Andersen RA. 2002. Direct visuomotor transformations for reaching. Nature. 416:632–636.

Chang SW, Papadimitriou C, Snyder LH. 2009. Using a compound gain field to compute a reach plan. Neuron. 64:744–755.

Colby CL, Duhamel JR. 1991. Heterogeneity of extrastriate visual areas and multiple parietal areas in the macaque monkey. Neuropsychologia. 29:517–537.

Crawford JD, Henriques DY, Medendorp WP. 2011. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci. 34:309–331.

Danckert J, Goldberg L, Broderick C. 2009. Damage to superior parietal cortex impairs pointing in the sagittal plane. Exp Brain Res. 195:183–191.

Eskandar EN, Assad JA. 1999. Dissociation of visual, motor and predictive signals in parietal cortex during visual guidance. Nat Neurosci. 2:88–93.

Fabbri S, Caramazza A, Lingnau A. 2012. Distributed sensitivity for movement amplitude in directionally tuned neuronal populations. J Neurophysiol. 107:1845–1856.

Fattori P, Breveglieri R, Amoroso K, Galletti C. 2004. Evidence for both reaching and grasping activity in the medial parieto-occipital cortex of the macaque. Eur J Neurosci. 20:2457–2466.

Fattori P, Gamberini M, Kutz DF, Galletti C. 2001. "Arm-reaching" neurons in the parietal area V6A of the macaque monkey. Eur J Neurosci. 13:2309–2313.

Fattori P, Kutz DF, Breveglieri R, Marzocchi N, Galletti C. 2005. Spatial tuning of reaching activity in the medial parieto-occipital cortex (area V6A) of macaque monkey. Eur J Neurosci. 22:956–972.

Ferraina S, Brunamonti E, Giusti MA, Costa S, Genovesio A, Caminiti R. 2009. Reaching in depth: hand position dominates over binocular eye position in the rostral superior parietal lobule. J Neurosci. 29:11461–11470.

Flanders M, Helms Tillery SI, Soechting JF. 1992. Early stages in a sensorimotor transformation. Behav Brain Sci. 15:309–362.

Flanders M, Soechting JF. 1990. Parcellation of sensorimotor transformations for arm movements. J Neurosci. 10:2420–2427.

Fluet M-C, Baumann MA, Scherberger H. 2010. Context-specific grasp movement representation in macaque ventral premotor cortex. J Neurosci. 30:15175–15184.

Foley JM. 1980. Binocular distance perception. Psychol Rev. 87:411–434.

Freedman EG, Sparks DL. 1997. Eye-head coordination during head-unrestrained gaze shifts in rhesus monkeys. J Neurophysiol. 77:2328–2348.

Fu Q-G, Flament D, Coltz JD, Ebner TJ. 1995. Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. J Neurophysiol. 73:836–854.

Fu Q-G, Suarez JI, Ebner TJ. 1993. Neuronal specification of direction and distance during reaching movements in the superior precentral premotor area and primary motor cortex of monkeys. J Neurophysiol. 70:2097–2116.

Galletti C, Battaglini PP, Fattori P. 1995. Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur J Neurosci. 7:2486–2501.

Galletti C, Battaglini PP, Fattori P. 1993. Parietal neurons encoding spatial locations in craniotopic coordinates. Exp Brain Res. 96:221–229.

Galletti C, Breveglieri R, Lappe M, Bosco A, Ciavarro M, Fattori P. 2010. Covert shift of attention modulates the ongoing neural activity in a reaching area of the macaque dorsomedial visual stream. PLoS One. 5:e15078.

Galletti C, Fattori P, Battaglini PP, Shipp S, Zeki S. 1996. Functional demarcation of a border between areas V6 and V6A in the superior parietal gyrus of the macaque monkey. Eur J Neurosci. 8:30–52.

Galletti C, Fattori P, Kutz DF, Gamberini M. 1999. Brain location and visual topography of cortical area V6A in the macaque monkey. Eur J Neurosci. 11:575–582.

Galletti C, Gamberini M, Kutz DF, Fattori P, Luppino G, Matelli M. 2001. The cortical connections of area V6: an occipito-parietal network processing visual information. Eur J Neurosci. 13:1572–1588.

Gamberini M, Galletti C, Bosco A, Breveglieri R, Fattori P. 2011. Is the medial posterior parietal area V6A a single functional area? J Neurosci. 31:5145–5157.

Gamberini M, Passarelli L, Fattori P, Zucchelli M, Bakola S, Luppino G, Galletti C. 2009. Cortical connections of the visuomotor parietooccipital area V6Ad of the macaque monkey. J Comp Neurol. 513:622–642.

Genovesio A, Ferraina S. 2004. Integration of retinal disparity and fixation-distance related signals toward an egocentric coding of distance in the posterior parietal cortex of primates. J Neurophysiol. 91:2670–2684.

Gnadt JW, Mays LE. 1995. Neurons in monkey parietal area LIP are tuned for eye-movement parameters in three-dimensional space. J Neurophysiol. 73:280–297.

Gordon J, Ghilardi MF, Ghez C. 1994. Accuracy of planar reaching movements. I. Independence of direction and extent variability. Exp Brain Res. 99:97–111.

Hadjidimitrakis K, Breveglieri R, Placenti G, Bosco A, Sabatini SP, Fattori P. 2011. Fix your eyes in the space you could reach: neurons in the macaque medial parietal cortex prefer gaze positions in peripersonal space. PLoS ONE. 6:e23335.

Hayhoe MM, Shrivastava A, Mruczek R, Pelz JB. 2003. Visual memory and motor planning in a natural task. J Vis. 3:49–63.

Heider B, Karnik A, Ramalingam N, Siegel RM. 2010. Neural representation during visually guided reaching in macaque posterior parietal cortex. J Neurophysiol. 104:3494–3509.

Holmes G, Horrax G. 1919. Disturbances of spatial orientation and visual attention, with loss of stereoscopic vision. Arch Neurol Psychiatry. 1:385–407.

Johnson PB, Ferraina S, Bianchi L, Caminiti R. 1996. Cortical networks for visual reaching: physiological and anatomical organization of frontal and parietal lobe arm regions. Cereb Cortex. 6:102–119.

Jones EG, Coulter JD, Hendry SHC. 1978. Intracortical connectivity of architectonic fields in the somatic sensory, motor and parietal cortex of monkey. J Comp Neurol. 181:291–348.

Kurata K. 1993. Premotor cortex of monkeys: set- and movement-related activity reflecting amplitude and direction of wrist movements. J Neurophysiol. 69:187–200.

Kutz DF, Fattori P, Gamberini M, Breveglieri R, Galletti C. 2003. Early- and late-responding cells to saccadic eye movements in the cortical area V6A of macaque monkey. Exp Brain Res. 149:83–95.

Kutz DF, Marzocchi N, Fattori P, Cavalcanti S, Galletti C. 2005. Real-time supervisor system based on trinary logic to control experiments with behaving animals and humans. J Neurophysiol. 93:3674–3686.

Lacquaniti F, Guigon E, Bianchi L, Ferraina S, Caminiti R. 1995. Representing spatial information for limb movement: role of area 5 in the monkey. Cereb Cortex. 5:391–409.

Luppino G, Ben Hamed S, Gamberini M, Matelli M, Galletti C. 2005. Occipital (V6) and parietal (V6A) areas in the anterior wall of the parieto-occipital sulcus of the macaque: a cytoarchitectonic study. Eur J Neurosci. 21:3056–3076.

Matelli M, Govoni P, Galletti C, Kutz DF, Luppino G. 1998. Superior area 6 afferents from the superior parietal lobule in the macaque monkey. J Comp Neurol. 402:327–352.

McIntyre J, Stratta F, Lacquaniti F. 1997. Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. J Neurophysiol. 78:1601–1618.

Melmoth D, Storoni M, Todd G, Finlay A, Grant S. 2007. Dissociation between vergence and binocular disparity cues in the control of prehension. Exp Brain Res. 183:283–298.

Messier J, Kalaska JF. 2000. Covariation of primate dorsal premotor cell activity with direction and amplitude during a memorized-delay reaching task. J Neurophysiol. 84:152–165.

Monaco S, Kroliczak G, Quinlan DJ, Fattori P, Galletti C, Goodale MA, Culham JC. 2010. Contribution of visual and proprioceptive information to the precision of reaching movements. Exp Brain Res. 202:15–32.

Nakamura K, Chung HH, Graziano MSA, Gross CG. 1999. Dynamic representation of eye position in the parieto-occipital sulcus. J Neurophysiol. 81:2374–2385.

Nakamura K, Colby CL. 2000. Visual, saccade-related, and cognitive activation of single neurons in monkey extrastriate area V3A. J Neurophysiol. 84:677–692.

Naselaris T, Merchant H, Amirikian B, Georgopoulos AP. 2006. Large-scale organization of preferred directions in the motor cortex. I. Motor cortical hyperacuity for forward reaching. J Neurophysiol. 96:3231–3236.

Neggers SFW, Bekkering H. 2001. Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. J Neurophysiol. 86:961–970.

Passarelli L, Rosa MG, Gamberini M, Bakola S, Burman KJ, Fattori P, Galletti C. 2011. Cortical connections of area V6Av in the macaque: a visual-input node to the eye/hand coordination system. J Neurosci. 31:1790–1801.

Pélisson D, Prablanc C, Goodale MA, Jeannerod M. 1986. Visual control of reaching movements without vision of the limb. II. Evidence of fast unconscious processes correcting the trajectory of the hand in the final position of a double-step stimulus. Exp Brain Res. 62:303–311.

Rizzolatti G, Matelli M. 2003. Two different streams form the dorsal visual system: anatomy and functions. Exp Brain Res. 153:146–157.

Sainburg RL, Lateiner JE, Latash ML, Bagesteiro LB. 2003. Effects of altering initial position on movement direction and extent. J Neurophysiol. 89:401–415.

Shen L, Alexander GE. 1997. Preferential representation of instructed target location versus limb trajectory in dorsal premotor area. J Neurophysiol. 77:1195–1212.

Shipp S, Blanton M, Zeki S. 1998. A visuo-somatomotor pathway through superior parietal cortex in the macaque monkey: cortical connections of areas V6 and V6A. Eur J Neurosci. 10:3171–3193.

Snyder LH, Batista AP, Andersen RA. 1997. Coding of intention in the posterior parietal cortex. Nature. 386:167–170.

Soechting JF, Flanders M. 1989. Sensorimotor representations for pointing to targets in three-dimensional space. J Neurophysiol. 62:582–594.

Strick PL, Kim CC. 1978. Input to primate motor cortex from posterior parietal cortex (area 5). I. Demonstration by retrograde transport. Brain Res. 157:325–330.

Tanne J, Boussaoud D, Boyer-Zeller N, Rouiller EM. 1995. Direct visual pathways for reaching movements in the macaque monkey. Neuroreport. 7:267–272.

Tanne-Gariepy J, Rouiller EM, Boussaoud D. 2002. Parietal inputs to dorsal versus ventral premotor areas in the macaque monkey: evidence for largely segregated visuomotor pathways. Exp Brain Res. 145:91–103.

Tillery SI, Soechting JF, Ebner TJ. 1996. Somatosensory cortical activity in relation to arm posture: nonuniform spatial tuning. J Neurophysiol. 76:2423–2438.

van Beers RJ, Sittig AC, Denier van der Gon JJ. 1998. The precision of proprioceptive position sense. Exp Brain Res. 122:367–377.

van Beers RJ, Wolpert DM, Haggard P. 2002. When feeling is more important than seeing in sensorimotor adaptation. Curr Biol. 12:834–837.

Van Pelt S, Medendorp WP. 2008. Updating target distance across eye movements in depth. J Neurophysiol. 99:2281–2290.

Vindras P, Desmurget M, Viviani P. 2005. Error parsing in visuomotor pointing reveals independent processing of amplitude and direction. J Neurophysiol. 94:1212–1224.

Wise SP, Boussaoud D, Johnson PB, Caminiti R. 1997. Premotor and parietal cortex: corticocortical connectivity and combinatorial computations. Annu Rev Neurosci. 20:25–42.

Zar J. 1999. Biostatistical analysis. Upper Saddle River (NJ): Prentice-Hall.