Many psychophysical studies suggest that target depth and direction during reaches are processed independently, but the neurophysiological support for this view has so far been limited. Here, we investigated the representation of reach depth and direction by single neurons in area V6A of the medial posterior parietal cortex (PPC) of macaques while the animals performed a fixation-to-reach task in 3-dimensional (3D) space. We found that, in a substantial percentage of V6A neurons, depth and direction signals jointly influenced fixation, planning, and arm movement-related activity. While target depth and direction were equally encoded during fixation, depth tuning became stronger during arm movement planning, execution, and target holding. The spatial tuning of fixation activity was often maintained across epochs, and depth tuning persisted across epochs more than directional tuning did. These findings support for the first time the existence of a common neural substrate for the encoding of target depth and direction during reaches in the PPC. The present results also highlight several types of V6A cells that process signals about eye position and arm movement planning and execution, either independently or jointly, to control reaches in 3D space. A conceptual framework for the processing of depth and direction for reaching is proposed.
In humans, the superior parietal lobule (SPL) is considered crucial for action in depth. Patients with lesions in the SPL show larger visuomotor deficits in depth than in the frontal plane, resulting in a specific impairment of hand movements toward targets arranged at different distances from the body (Holmes and Horrax 1919; Baylis and Baylis 2001; Danckert et al. 2009). However, single-neuron activity in the SPL of nonhuman primates has been studied largely neglecting depth as a dimension for reaching. Most single-unit studies used center-out reaching tasks, with the initial hand and target positions lying on the same frontal plane (Snyder et al. 1997; Batista et al. 1999; Battaglia-Mayer et al. 2001; Buneo et al. 2002; Andersen and Cui 2009; Chang et al. 2009). Fewer works have involved depth by employing body-out reaching movements, with the arm moving from a position near the trunk to targets located on a single plane (Fattori et al. 2001, 2005; Bosco et al. 2010) or at different depths (Bhattacharyya et al. 2009; Ferraina et al. 2009). Nevertheless, these studies did not compare the influence of direction and depth signals on reach-related activity in single cells. This issue has been addressed only by Lacquaniti et al. (1995) in area 5/PE, where separate populations of neurons were found to encode the depth and the direction, respectively, of reaching targets. This finding was in agreement with several psychophysical studies (Gordon et al. 1994; Sainburg et al. 2003; Vindras et al. 2005; Bagesteiro et al. 2006; Van Pelt and Medendorp 2008), thus creating a widely held view that these 2 spatial parameters are processed independently (Crawford et al. 2011). The aim of our study was to investigate whether both depth and direction information for reaching are encoded in single SPL neurons, and to compare the depth and direction tuning of fixation, planning, and reaching epochs.
We studied the above issues in the medial posterior parietal area V6A of macaques (Galletti et al. 1999), where several types of neurons are involved in various phases of visually guided reaches (Fattori et al. 2001, 2005). V6A contains neurons that encode the spatial location of visual targets (Galletti et al. 1993, 1995), neurons sensitive to the version and vergence angles of the eyes during fixation and saccades (Galletti et al. 1995; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012), and cells whose activity is modulated by the arm-reaching movement (Fattori et al. 2001, 2004, 2005) and by arm position in space (Breveglieri et al. 2002; Fattori et al. 2005). In the present work, single cells were recorded while the animals performed a reaching task to foveated targets located at different depths and directions in 3-dimensional (3D) space. We found that depth and direction signals jointly influence the activity of a large number of cells during reaches in 3D space. Our results demonstrate for the first time the convergence of depth and directional spatial information on single SPL neurons during 3D reaches. In addition, they show that several subpopulations of V6A cells are recruited as a fixation-to-reach task in 3D space progresses.
Materials and Methods
Two male macaque monkeys (Macaca fascicularis) weighing 4.4 kg (Monkey A) and 3.8 kg (Monkey B) were used. Initially, the animals were habituated to sit in a primate chair and to interact with the experimenters. Then, a head-restraint system and the recording chamber were surgically implanted under general anesthesia (sodium thiopental, 8 mg/kg · h, intravenously) following the procedures reported by Galletti et al. (1995). A full program of postoperative analgesia (ketorolac tromethamine, 1 mg/kg intramuscularly immediately after surgery, and 1.6 mg/kg i.m. on the following days) and antibiotic care (Ritardomicina, benzatinic benzylpenicillin + dihydrostreptomycin + streptomycin, 1–1.4 mL/10 kg every 5–6 days) followed the surgery. Experiments were performed in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 24 November 1986 (86/609/EEC) and that of 22 September 2010 (2010/63/EU). All the experimental protocols were approved by the Bioethical Committee of the University of Bologna. During training and recording sessions, particular care was taken to avoid any behavioral and clinical sign of pain or distress.
Extracellular recording techniques and the procedures used to reconstruct microelectrode penetrations were similar to those described in other reports (e.g. Galletti et al. 1996). Single-cell activity was recorded extracellularly from the anterior bank of the parieto-occipital sulcus. Area V6A was initially recognized on functional grounds following the criteria described in Galletti et al. (1999), and later confirmed cytoarchitectonically according to Luppino et al. (2005). We performed multiple electrode penetrations using a 5-channel multielectrode recording system (5-channel MiniMatrix, Thomas Recording). The electrode signals were amplified (gain 10 000) and filtered (bandpass 0.5–5 kHz). Action potentials in each channel were isolated with a waveform discriminator (Multi Spike Detector; Alpha Omega Engineering) and sampled at 100 kHz. Histological reconstructions were performed following the procedures detailed in a recent publication from our lab (Gamberini et al. 2011). Briefly, electrode tracks and the approximate location of each recording site were reconstructed on histological sections of the brain on the basis of electrolytic lesions and several other cues, such as the coordinates of penetrations within the recording chamber, the cortical areas passed through before reaching the region of interest, and the depths of the passage points between gray and white matter. All neurons of the present work were assigned to area V6A (Fig. 1).
Electrophysiological signals were collected while monkeys performed a fixation-to-reach task. The animals performed arm movements with the limb contralateral to the recording hemisphere, with the head restrained, in darkness, while maintaining steady fixation of the target. Before starting the movement, the monkey had its arm on a button (home button [HB], 2.5 cm in diameter) located next to its trunk (Fig. 2A). Reaches were performed to 1 of 9 light-emitting diodes (LEDs, 6 mm in diameter). The LEDs were mounted on a panel at different distances and directions with respect to the eyes, always at eye level. As shown in Figure 2B, target LEDs were arranged in 3 rows: 1 central, along the sagittal midline, and 2 lateral, at version angles of −15° and +15°, respectively. Along each row, 3 LEDs were located at vergence angles of 17.1°, 11.4°, and 6.9°. Given that the interocular distance of both animals was 30 mm, the nearest targets were located 10 cm from the eyes, whereas the LEDs placed at intermediate and far positions were at distances of 15 and 25 cm, respectively. The range of vergence angles was selected to include most of the peripersonal space in front of the animal, from the very near space (10 cm) up to the farthest distance reachable by the monkeys (25 cm).
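The vergence angles above follow directly from the viewing geometry: a midline target at distance d, given interocular distance IOD, requires a vergence of about 2·atan((IOD/2)/d). A minimal sketch (Python; the original analyses used MATLAB, and the function name is ours):

```python
import math

def vergence_deg(target_dist_m, interocular_m=0.03):
    """Vergence angle (deg) for a midline target at eye level:
    each eye rotates atan((IOD/2)/d) inward, giving 2*atan((IOD/2)/d)."""
    return math.degrees(2 * math.atan((interocular_m / 2) / target_dist_m))

# Distances used in the task: near (10 cm), intermediate (15 cm), far (25 cm)
for d_cm in (10, 15, 25):
    print(f"{d_cm} cm -> {vergence_deg(d_cm / 100):.1f} deg")
# -> 17.1, 11.4, and 6.9 deg, matching the stated target arrangement
```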
The time sequence of the task, together with the LED and arm status and the vergence and version angles of the eyes, is shown in Figure 2C. A trial began when the monkey pressed the button near its chest (HB press). After 1000 ms, 1 of the 9 LEDs was switched on (LEDon). The monkey had to fixate the LED while keeping the HB pressed. It then had to wait 1000–1500 ms for a change in the color of the LED without performing any eye or arm movement. The color change was the go signal (GO) for the animal to release the HB and start an arm movement toward the target (M). The monkey then reached the target (H) and held its hand on the target for 800–1200 ms. The switching off of the target (Redoff) cued the monkey to release the LED and return to the HB (HB press), which ended the trial and allowed the monkey to receive its reward. The presentation of stimuli and the animal's performance were monitored using custom software written in LabVIEW (National Instruments), as described previously (Kutz et al. 2005). Eye position signals were sampled with 2 cameras (1 for each eye) of an infrared oculometer system (ISCAN) at 100 Hz and were controlled through an electronic window (4 × 4°) centered on the fixation target. If the monkey fixated outside this window, the trial was aborted. The task was performed in darkness, in blocks of 90 randomized trials, 10 for each LED target. The luminance of the LEDs was adjusted to compensate for differences in retinal size between LEDs located at different distances. The background light was switched on briefly between blocks to avoid dark adaptation.
At the beginning of each recording session, the monkeys were required to perform a calibration task in which they fixated 10 LEDs mounted on a frontal panel at a distance of 15 cm from the eyes. For each eye, the signals used for calibration were extracted during the fixation of 5 LEDs: 1 central, aligned with that eye's straight-ahead position, and 4 peripheral, placed at an angle of ±15° (distance: 4 cm) along the horizontal and vertical axes. From the 2 individual calibrated eye position signals, we derived the mean of the 2 eyes (the conjugate or version signal) and the difference between the 2 eyes (the disconjugate or vergence signal) using the equations version = (R + L)/2 and vergence = R − L, where R and L were the positions of the right and left eye, respectively.
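In code, the conjugate/disconjugate decomposition is a one-liner per component; a sketch (Python; the function name is ours, and the sign of the vergence component depends on the angular sign convention, which is not specified in the text):

```python
def version_vergence(right_deg, left_deg):
    """Split calibrated horizontal eye positions (deg) into the
    conjugate (version) and disconjugate (vergence) components:
    version = (R + L) / 2, vergence = R - L."""
    version = (right_deg + left_deg) / 2.0
    vergence = right_deg - left_deg
    return version, vergence

# Conjugate gaze shift: both eyes at +15 deg -> version 15, vergence 0
print(version_vergence(15.0, 15.0))  # (15.0, 0.0)
```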
The effect of different target positions on neural activity was analyzed in different time periods of the task. The task epochs taken into account for the analysis are indicated in the bottom part of Figure 2C. They were: (1) the early fixation epoch (FIX), from 50 to 450 ms after the end of the fixation saccade; (2) the preparation epoch (PLAN), the last 500 ms of fixation before the GO signal; (3) the reach epoch (REACH), from 200 ms before the start of the arm movement (M) till the end of the movement, signaled by the pressing of the LED target (H); and (4) the hold epoch (HOLD), from the pressing of the LED target (H) till the switching off of the target (Redoff), which was the signal for the monkey to start the return movement to press the HB. The HOLD epoch lasted either 800 or 1200 ms, depending on the trial. Rasters of spiking activity were aligned on specific events of the task sequence, depending on the epoch analyzed. The effect of target depth and direction on activity was analyzed only in units with a mean firing rate higher than 3 spikes/s that were tested in at least 7 trials for each spatial position. The reasons for this conservative choice are connected to the inherent high variability of biological responses and are explained in detail in Kutz et al. (2003).
A significant modulation of neural activity by target location was assessed using a 2-way analysis of variance (ANOVA) performed separately for each epoch, with target depth and target direction as factors. Target depth was defined as the distance of the target from the animal (near, intermediate, and far), and target direction as its position with respect to the recording hemisphere (contralateral, central, and ipsilateral). Neurons were considered modulated by a given factor only when the factor's main effect was significant (P < 0.05). To test whether the incidence of each of the main effects differed significantly between 2 epochs, a 2-proportion z-test (Zar 1999) was applied, as detailed in Fluet et al. (2010). To quantify the selectivity of neuronal activity in each epoch for depth and/or direction signals, we calculated an index termed eta squared (η², Zar 1999) from the values obtained in the ANOVA, applying the following formula: η² = SSeffect/SStotal, where SSeffect is the deviance of the main effect and SStotal the total deviance. We calculated this index for each of the 2 main effects (i.e., depth and direction) and for each of the 4 epochs of interest. To compare the index of the same cell in different epochs, confidence intervals on the η² indices were estimated with a bootstrap test. Synthetic response profiles were created by drawing N firing rates (with replacement) from the N repetitions of experimentally determined firing rates, and η² was recomputed from these N firing rates. Ten thousand iterations were performed, and confidence intervals were estimated as the range delimiting 95% of the computed indices (Batista et al. 2007).
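The η² computation and its bootstrap confidence interval can be sketched as follows (Python; a simplified single-factor version in which SSeffect is the between-level sum of squares, whereas the published values came from the full 2-way ANOVA):

```python
import random
from collections import defaultdict

def eta_squared(rates, labels):
    """eta^2 = SSeffect / SStotal for one factor (e.g. target depth):
    SSeffect is the between-level deviance, SStotal the total deviance."""
    n = len(rates)
    grand = sum(rates) / n
    ss_total = sum((r - grand) ** 2 for r in rates)
    groups = defaultdict(list)
    for r, lab in zip(rates, labels):
        groups[lab].append(r)
    ss_effect = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                    for g in groups.values())
    return ss_effect / ss_total if ss_total else 0.0

def bootstrap_ci(rates, labels, n_iter=10_000, alpha=0.05, seed=0):
    """Confidence interval on eta^2: resample trials with replacement,
    recompute eta^2, and take the central 1 - alpha range."""
    rng = random.Random(seed)
    n = len(rates)
    stats = []
    for _ in range(n_iter):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(eta_squared([rates[i] for i in idx],
                                 [labels[i] for i in idx]))
    stats.sort()
    return stats[int(n_iter * alpha / 2)], stats[int(n_iter * (1 - alpha / 2)) - 1]
```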
To analyze the spatial tuning of activity, a stepwise multilinear regression model was applied in each epoch considered. Regression methods quantify the relationship between dependent (neural activity) and independent (target depth and direction) variables. Given that the target was foveated in all epochs of interest, its depth and direction in space were represented in head-centered coordinates and were equal to the vergence and version angles of the eyes, respectively. We are aware that our experimental configuration cannot distinguish between eye- and head/body-centered frames of reference for target encoding. That being said, in the rest of the paper, when we refer to spatial tuning analyses and data, the terms depth and vergence, as well as direction and version, are used interchangeably.
Population-averaged spike density functions (SDFs) were generated for the cells modulated by target depth/direction in the epochs of interest. In every cell, an SDF was calculated for each trial (Gaussian kernel, half width at half maximum 40 ms) and averaged across all the trials of the preferred and the opposite depths and directions, as defined by the linear regression analysis. The peak discharge of the preferred condition was used to normalize the SDF. Population SDF curves representing the activity of the preferred and opposite conditions were constructed by averaging the individual SDFs of the cells, aligned at the behavioral event of interest. In the cells with linear spatial tuning of movement activity (REACH), we calculated the response latency to movement execution for the preferred condition. The cell's response latency was defined as the mean latency across the 3 target positions of the preferred condition (near/far and ipsi/contra). For each position, we quantified the firing activity in the PLAN epoch. To find the onset of the reach-related response, a sliding window (width = 20 ms, shifted by 2 ms) was used to measure the activity starting from 200 ms before movement onset. At each step, the distribution of activity across trials in the sliding window was compared with that in PLAN using a Student's t-test (P < 0.05). The onset of the response was determined as the time of the first of 5 consecutive bins (10 ms) in which the comparison was statistically significant (P < 0.05). The above procedure, also used in a recent paper on V6A (Breveglieri et al. 2012), was adapted from an earlier work (Nakamura and Colby 2000).
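The per-trial SDF can be sketched as a sum of unit-area Gaussian kernels centered on the spike times (Python; the HWHM-to-σ conversion, σ = HWHM/√(2 ln 2), is standard but not stated in the text):

```python
import math

def spike_density(spike_times_s, t_s, hwhm_ms=40.0):
    """Spike density (spikes/s) at time t_s: each spike contributes a
    unit-area Gaussian kernel with half width at half maximum hwhm_ms."""
    sigma = (hwhm_ms / 1000.0) / math.sqrt(2.0 * math.log(2.0))
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-((t_s - s) ** 2) / (2.0 * sigma ** 2))
               for s in spike_times_s)
```

Evaluating this on a regular time grid, averaging across trials, and normalizing by the peak of the preferred condition would reproduce the population curves described here.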
All analyses were performed using custom scripts written in MATLAB (Mathworks, Natick, MA, USA).
We recorded neuronal activity in V6A and identified 288 well-isolated, stable cells in 2 monkeys (Monkey A: 192; Monkey B: 96). The animals were required to execute reaches to foveated targets located at different directions and depths, with target elevation kept constant at eye level. Figure 3 illustrates 4 examples of modulated neurons, all tuned in several epochs, in both depth and direction. The first neuron (Fig. 3A) was modulated by target depth in all epochs and preferred intermediate to far positions. The cell was also tuned for target direction during both fixation and arm movement planning, showing higher activity for contralateral positions. The second neuron (Fig. 3B) responded strongly during all the epochs for targets located in the near space. In the PLAN and REACH epochs, an additional preference for targets located in the contralateral space emerged. The third neuron (Fig. 3C) was modulated by target direction in all epochs and preferred ipsilateral positions. In addition, it showed a preference for near space only during PLAN and REACH. Finally, the fourth cell (Fig. 3D) was modulated by both depth and direction in the first 2 epochs, before arm movement execution, responding strongly for far positions and showing a small, though significant, preference for contralateral space. In the REACH and HOLD epochs, the effect of direction disappeared, while a strong depth tuning with a preference for targets located in near space emerged.
The examples in Figure 3 highlight the major characteristics of V6A cells during reaches in 3D space, that is, the coexistence in single cells of modulations by both target direction and depth, and the fact that direction and depth can affect all epochs or be present only early or late in the task.
Tuning for Depth and Direction in the Different Task Phases
To quantify the effect of depth and direction, a 2-way ANOVA was performed in each epoch. In total, 98% of the cells were modulated (P < 0.05) by at least 1 of the 2 factors in at least 1 epoch (94% for depth and 86% for direction). An interaction effect between the 2 factors was observed in 20–35% of the cells across epochs. However, very few neurons (3–4%) showed only an interaction effect, so only the main effects of depth and direction were considered. As shown in Figure 4A, during FIX, similar numbers of cells were modulated by depth only, by direction only, and by both signals. In the subsequent epochs, the percentage of cells modulated by depth only and by both signals slightly increased, whereas the incidence of tuning by direction only significantly decreased (2-proportion z-test, P < 0.05). As shown in Table 1, in the PLAN, REACH, and HOLD epochs, the overall effects of target depth and direction were not equally represented, the effect of depth being 10–20% more frequent than that of direction. In all epochs, a considerable percentage of neurons were jointly sensitive to both depth and direction signals, and the number of such cells grew as the task progressed.
Table 1. Incidence of depth and direction modulations (ANOVA, P < 0.05) and, among the modulated cells, of significant linear tuning, in each epoch

| Epoch | Depth: ANOVA | Depth: linear regression | Direction: ANOVA | Direction: linear regression |
| --- | --- | --- | --- | --- |
| FIX | 155/288 (53.8%) | 133/155 (85.8%) | 143/288 (49.6%) | 110/143 (76.9%) |
| PLAN | 170/288 (59%) | 141/170 (82.9%) | 124/288 (43%) | 100/124 (80.6%) |
| REACH | 182/288 (63.2%) | 154/182 (84.6%) | 146/288 (50.7%) | 115/146 (78.8%) |
| HOLD | 189/288 (65.6%) | 159/189 (84.1%) | 140/288 (48.6%) | 101/140 (72.1%) |
To evaluate the time course of depth/direction selectivity in the different task epochs, we calculated the eta squared index (η²) as detailed in Materials and Methods. The η² index was used to measure the strength of the effect of the 2 factors on the firing rate. Figure 4B plots the average values of η² for depth and direction in the neurons with a significant main effect of these variables in each epoch. Depth and direction selectivity were not significantly different during FIX (Student's t-test, P > 0.05), whereas depth selectivity was significantly higher than direction selectivity in all the other epochs (Student's t-test, P < 0.05).
Figure 4C illustrates the selectivity for depth and direction in single cells modulated in pairs of temporally adjacent epochs (FIX–PLAN, PLAN–REACH, and REACH–HOLD). The η² indices found in each epoch for depth (Fig. 4C, top) and direction (Fig. 4C, bottom) were used to plot single points, each representing a single cell. Filled circles represent neurons with a significantly different index between 2 adjacent epochs (bootstrap test, 10 000 iterations, P < 0.05), and empty circles indicate cells with similar selectivity (bootstrap test, 10 000 iterations, P > 0.05). Figure 4C confirmed, at the single-cell level, the results shown for the population of V6A neurons in Figure 4B: neurons were significantly more affected by depth as the task progressed, that is, in PLAN versus FIX, in REACH versus PLAN, and in HOLD versus REACH, while direction selectivity did not change significantly across epochs.
Spatial Tuning in the Different Task Phases
To quantify the spatial tuning of the neurons, a linear regression analysis was performed with the depth and direction of the target in space as independent variables. Since the target to be reached was always foveated, its depth and direction in space could be defined in head/body-centered coordinates, that is, by the vergence and version angles of the eyes, respectively. The linear regression model was chosen because few neurons displayed their maximal firing rates for the intermediate and central positions: these were the least preferred positions in our population (10% of cells, Bonferroni post hoc). As shown in Table 1, most of the neurons that were significantly modulated by target depth and direction (ANOVA, P < 0.05) had discharges that were linearly correlated (P < 0.05) with the vergence and version angles, respectively. In each neuron, the sign of the standardized linear correlation coefficients was used to determine the spatial preference in a given epoch. Neurons with a significant linear vergence tuning were classified as near or far, whereas cells linearly tuned by version angle were classified as contralateral or ipsilateral, depending on both the sign of the linear version coefficient and the recording hemisphere.
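A sketch of the fit and the sign-based classification (Python; plain two-predictor least squares without the stepwise selection step, and with sign conventions assumed as noted in the comments):

```python
def fit_depth_direction(rates, vergence, version):
    """OLS fit of rate = b0 + b1*vergence + b2*version, solved in
    closed form from the centered sums of squares and cross-products."""
    n = len(rates)
    my = sum(rates) / n
    m1 = sum(vergence) / n
    m2 = sum(version) / n
    s11 = sum((x - m1) ** 2 for x in vergence)
    s22 = sum((x - m2) ** 2 for x in version)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(vergence, version))
    s1y = sum((a - m1) * (y - my) for a, y in zip(vergence, rates))
    s2y = sum((b - m2) * (y - my) for b, y in zip(version, rates))
    det = s11 * s22 - s12 ** 2
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    return b1, b2

def classify(b1, b2):
    # Larger vergence angle = nearer target, so a positive vergence
    # slope marks a "near"-preferring cell (assumed convention); the
    # direction label would further depend on the recording hemisphere.
    return ("near" if b1 > 0 else "far",
            "positive-version" if b2 > 0 else "negative-version")
```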
The percentage of cells falling into the above groups in each epoch is illustrated in Figure 5A. Neurons tuned for "far" reachable space were more numerous than those tuned for "near" reachable space (Fig. 5A, top). The difference was statistically significant in all epochs apart from REACH (χ2, P < 0.05 in FIX, PLAN, and HOLD). Regarding the directional tuning (Fig. 5A, middle), contralateral neurons were more numerous than ipsilateral ones in all epochs, but the difference was never statistically significant (χ2, P > 0.05). The bottom part of Figure 5A shows that near and far cells were similarly tuned for contralateral and ipsilateral spaces (2-way χ2, P > 0.05). In summary, the analysis of spatial tuning showed that the distributions of spatial preference within the tested reachable space were quite similar across epochs.
We then asked whether the constancy of the distribution of preferred depths and directions across the task resulted from a single group of neurons being active throughout, or whether different subpopulations of cells were spatially tuned in each epoch. For this purpose, we defined and quantified the cells that preserved, changed, lost, or acquired their spatial tuning from one epoch to the next. The results of this analysis are shown in Figure 5B. The spatial tuning of depth modulations (Fig. 5B, left) was remarkably consistent across epochs (40–50% of the cases), and this consistency increased as the task progressed. In only 3% of neurons did the spatial tuning change as the task progressed. Coefficients of determination (R2) were calculated to measure how well the regression coefficients of one epoch predicted those of the next. The depth coefficients were strongly correlated, with highly significant R2 values (range 0.73–0.77, P < 0.0001). Interestingly, these values were quite constant across the epoch comparisons, indicating that the strength of depth tuning consistency did not change as the task progressed.
As illustrated in the right part of Figure 5B, the direction tuning was less consistent across epochs than the depth tuning (<30% of the cases), without significant changes as the task progressed. In about 35% of cases, the directional tuning was lost and, in another 35% of cases, it emerged only in the later epoch of each pair. As a result, the subpopulation of cells tuned in direction in a given epoch was, in large part, different from that recruited in the next one. The version coefficients of adjacent epochs were strongly correlated, as for vergence, with highly significant R2 values (range 0.56–0.86, P < 0.0001). In contrast to what was observed for depth tuning, these values were more variable across epoch comparisons, with the PLAN/REACH pair showing the highest and the FIX/PLAN pair the lowest R2 value. In other words, the spatial tuning that appeared early in the task exerted a strong influence on the spatial tuning of activity in the later epochs. However, a considerable number of neurons lost their spatial tuning, and different subpopulations of spatially tuned cells became active during planning, arm movement execution, and holding of the static arm position. This latter finding, together with the similar one for depth tuning, suggests that additional spatial information, other than eye position, became available for the tuning of activity in the PLAN, REACH, and HOLD epochs.
To characterize the sources of additional spatial input during movement execution, we determined the response latency of the neurons linearly modulated in REACH. Latency was measured as the time at which REACH activity became significantly higher than PLAN activity (see Materials and Methods). The mean latency of reaching responses (n = 143) was 41.6 ± 155 (standard deviation) ms after movement onset. In 74 (52%) cells, the response started before movement onset, whereas in 69 (48%) neurons it started after movement onset. The first group of cells is likely activated by a corollary discharge from the premotor cortex (Matelli et al. 1998; Shipp et al. 1998; Gamberini et al. 2009), whereas the second could reflect proprioceptive and tactile signals from the arm, which are known to affect an important fraction of V6A neurons (Breveglieri et al. 2002). To test whether depth reaching responses started at a different time than direction ones, mean latencies were calculated separately for the preferred depth and direction. Neurons with depth modulations (n = 113) had a mean latency of +24.6 ± 148.8 ms; directionally tuned cells (n = 78) had a mean latency of 61.6 ± 163.6 ms. The 2 latency distributions were not statistically different (Kolmogorov–Smirnov test, P > 0.05; Wilcoxon signed-rank test, P > 0.05). This suggests that depth and direction signals affect the reaching-related activity with a similar time course.
We divided the V6A cells reported in this study into 3 main categories based on the presence of modulation in the FIX and REACH epochs. Neurons were classified as "FIX cells" when they showed spatial tuning in FIX but not in REACH, as "REACH cells" when the opposite occurred, and as "FIX–REACH cells" when they were modulated in both epochs. The percentages of cells ascribed to each category are reported in Figure 6. Cells modulated by depth (Fig. 6, top) were most frequently represented in the FIX–REACH category (χ2, P < 0.05), whereas neurons modulated by direction were evenly distributed among the 3 categories (Fig. 6, bottom). We then calculated the percentage of cells modulated in PLAN or HOLD that fell into each of these categories (Fig. 6, middle and right panels). Exact numbers are reported in the figure legend. Both PLAN and HOLD cells belonged mostly to the FIX–REACH category, whereas a minority of them did not fall into any of the 3 main categories (∼15% for depth and ∼20% for direction). Overall, the above analysis confirmed the coincidence of modulations across epochs and revealed the existence of distinct categories of cells.
Population SDFs allowed us to investigate the temporal pattern of activity in the 3 main categories of cells. The population SDFs were constructed by averaging the single-neuron SDFs for the preferred and opposite conditions. Figure 7 illustrates the average population activity of each category of cells for depth (left panels) and direction (right panels) modulations. In FIX cells, the activity in the preferred and opposite conditions started to diverge about 200 ms before fixation onset, around the time of the fixation saccade. The activity for nonpreferred depths and directions then decreased rapidly to or below the baseline level, while that for the preferred depths and directions remained high during the first part of fixation (FIX) and decreased more slowly for depth than for direction (Fig. 7, top left and right panels, respectively). At the population level, both depth- and directionally tuned FIX cells showed arm movement-related responses that were not significantly different between the preferred and opposite conditions.
It is worth noting that the activity for the preferred depth, but not for the preferred direction, was higher than that in the nonpreferred condition during the whole fixation period, including the PLAN epoch before the reaching movement. The difference in the temporal evolution of activity between the FIX cells modulated in depth and those directionally tuned is in line with a recent study from our lab that showed a more tonic effect of vergence than of version (Breveglieri et al. 2012). It also agrees with the increased consistency in spatial tuning between FIX and PLAN in the depth dimension observed in the present study (Fig. 5).
In REACH cells, the time course of modulation was very similar for depth and direction tuning (Fig. 7, middle panels). The population activity for the preferred and opposite conditions diverged well after the fixation onset. In the preferred condition, the population activity remained stable during fixation, whereas in the nonpreferred one it progressively decreased till the go signal for reaching. After the go signal, the activity increased (much more in the preferred condition), reaching its peak at movement onset. The difference in activity between the preferred and opposite conditions was evident during the whole movement period and during most of the HOLD period.
FIX–REACH neurons (Fig. 7, bottom panels) showed a temporal pattern of activity that combined those of the FIX and REACH categories. In this category, the preferred condition was defined based on the activity in the REACH epoch, which in the vast majority of cases (>90%) was congruent with the preferred condition in FIX. FIX–REACH cells displayed 2 peaks of activity, one around fixation onset and another around arm movement onset. Between these 2 events, the curves representing the preferred and opposite conditions were well separated. The time course of population activity was similar for depth (Fig. 7, bottom-left panel) and direction (Fig. 7, bottom-right panel) modulations. Interestingly, the activity in the preferred conditions (both for depth and direction) decreased during fixation as in FIX neurons, but then increased in the last part of fixation (PLAN), just before movement onset, as occurred in REACH neurons. In summary, V6A neurons were found to encode different types of signals during a reach-in-depth task. Eye position and arm movement-related information influenced the activity of V6A neurons either independently (FIX cells and REACH cells) or jointly (FIX–REACH cells). Regarding the anatomical distribution in V6A of the above cell types, the histological reconstruction of the recording sites did not show any segregation of the main categories.
The major goals of the present study were to investigate the coexistence in the same cells, as well as the incidence and temporal evolution, of depth and direction tuning of V6A activity during eye–hand coordinated movements in 3D space. We found in V6A an extensive convergence of target depth and direction signals on single neurons during reaches. In addition, the influence of depth signals was somewhat stronger than that of direction signals during the planning and execution of reaches, and during holding of the targets. In many cells, spatial modulations of activity occurred in multiple epochs, from fixation through reach planning and execution, until the holding period. An important fraction of V6A neurons maintained their spatial preference over the time course of the task, whereas a few cells changed it. This consistency was more frequent in neurons tuned in depth than in those tuned in direction. Below, we discuss the implications of our major findings for the encoding of arm reaching in 3D space, and the sensory-to-motor signal transformations underlying eye–hand coordinated movements.
Encoding of Target Depth and Direction During Reaches
The finding that during reaching a significant number of neurons are modulated by both target depth and direction is in contrast with the general view that depth and direction of reaching targets are processed independently (Flanders et al. 1992; Crawford et al. 2011). The view of separate pathways for depth and direction is based on several behavioral studies (Soechting and Flanders 1989; Flanders and Soechting 1990; Gordon et al. 1994; Sainburg et al. 2003; Vindras et al. 2005; Bagesteiro et al. 2006; Van Pelt and Medendorp 2008), but the neurophysiological support for it is relatively weak. To our knowledge, only Lacquaniti et al. (1995), who studied this issue in the macaque posterior parietal cortex (PPC), reported distinct subpopulations of neurons in area PE representing the distance, azimuth, and elevation of target location. In contrast, in the premotor cortex, most studies found a convergence of distance and directional information on single neurons (Fu et al. 1993, 1995; Kurata 1993; Messier and Kalaska 2000). A conceptual framework of how the parietal and frontal networks might code reach direction and distance is presented below.
Figure 8 shows a network of areas where reach depth and direction could be encoded. Visual signals and eye position information interact in the nodes of this network with somatosensory signals related to the arm position, to generate the motor output. Visual input arising from the striate and extrastriate cortex enters the network at the level of the PPC, in area V6A (Galletti et al. 2001; Passarelli et al. 2011) and in the medial intraparietal area (MIP; Colby and Duhamel 1991). The direction of a reaching target is precisely calculated from its retinal location, whereas to define its distance, the brain must also use signals related to the vergence angle of the eyes, binocular disparity, and monocular depth cues. Vergence angle information has been shown to influence the activity of many neurons in V6A (Breveglieri et al. 2012) and in the parietal reach region (PRR; Bhattacharyya et al. 2009), which mostly includes parts of MIP and caudal area PE (PEc). In addition to their tuning by vergence, a substantial fraction of PRR neurons were found to be modulated also by retinal disparity (Bhattacharyya et al. 2009). This integration of vergence and disparity signals, also reported in the lateral intraparietal area in the lateral bank of the intraparietal sulcus (Gnadt and Mays 1995; Genovesio and Ferraina 2004), is sufficient to encode the egocentric distance of visual targets.
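As a purely geometric illustration of this last point (our notation, not taken from the cited studies), the egocentric distance of a binocularly viewed target can in principle be recovered from vergence and disparity signals alone:

```latex
% Distance of a fixated target from the vergence angle:
%   a = interocular distance, \gamma = vergence angle (radians)
D \;\approx\; \frac{a}{2\tan(\gamma/2)} \;\approx\; \frac{a}{\gamma}
\qquad (\text{small } \gamma)

% Relative depth of a nonfixated point from its absolute
% retinal disparity \delta (radians):
\Delta D \;\approx\; \frac{\delta\, D^{2}}{a}
```

Under these standard small-angle approximations, vergence specifies the absolute distance of the fixation point and disparity scales relative depth around it, which is why the combination of the two signals is sufficient to encode egocentric target distance.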
For foveated targets, vergence angle is the most important signal that the reach-related areas could use to calculate the reach amplitude (Foley 1980; Melmoth et al. 2007). Moreover, as shown in the present study (FIX cells) and in Breveglieri et al. (2012), many V6A cells are modulated by both the vergence and version angles, so they are able to encode the 3D spatial coordinates of a foveated reaching target. A similar convergence of version and vergence signals—though not tested yet—could also occur in areas MIP and PEc, which are known to receive visual and eye position input (Johnson et al. 1996; Eskandar and Assad 1999; Battaglia-Mayer et al. 2001; Breveglieri et al. 2008; Bakola et al. 2010). All the above areas could send the 3D spatial information about target location to the dorsal premotor cortex (PMd), with which they are strongly connected (Tanne et al. 1995; Matelli et al. 1998; Shipp et al. 1998; Tanne-Gariepy et al. 2002; Gamberini et al. 2009); from PMd, this information would then be transmitted to the primary motor cortex.
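The claim that version and vergence jointly suffice to localize a foveated target can be sketched geometrically (again with our own symbols, under the same small-angle assumptions as above). In the horizontal plane sampled by a depth-and-direction reaching task:

```latex
% \theta = version angle (azimuth of the cyclopean gaze line),
% \gamma = vergence angle, a = interocular distance
D \approx \frac{a}{\gamma}, \qquad
x = D\sin\theta \;\;(\text{lateral position}), \qquad
z = D\cos\theta \;\;(\text{distance along the midline})
```

That is, vergence fixes the radial distance of the fixated target and version fixes its direction, so a cell with access to both angles carries, implicitly, the target's egocentric coordinates.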
The other major contribution to the reach circuit regards the proprioceptive information about the hand position. This input arises from the areas of the somatosensory cortex and enters the circuit mainly at the level of area PE. In the primary somatosensory area (S1), neurons were reported to be more sensitive to movement amplitude than to direction (Tillery et al. 1996). In PE, the neurons modulated by target distance were twice as numerous as those modulated by azimuth and elevation (Lacquaniti et al. 1995), and the incidence of depth-modulated neurons increased going from target presentation to movement execution in a reach-in-depth task (Ferraina et al. 2009). These findings provide neurophysiological support for behavioral data arguing that, in depth, proprioception is more reliable than vision (Flanders et al. 1992; van Beers et al. 1998, 2002; Sainburg et al. 2003; Monaco et al. 2010). Furthermore, PE is strongly and reciprocally connected with the primary motor cortex (Jones et al. 1978; Strick and Kim 1978; Johnson et al. 1996; Bakola et al. 2012), where there is a representational bias for reaches in depth (Naselaris et al. 2006): during the execution of arm movements in 3D space, the preferred directions of the majority of neurons were aligned with the depth axis. This also suggests that the reach-related areas that mainly receive proprioceptive information are more specialized for the control of reaches in depth.
The proprioceptive signals are also sent to V6A, MIP, and PEc, where they can be combined with visual- and vergence angle-related signals, giving rise to neurons that jointly process depth and direction information. Area PE, on the contrary, does not receive visual information (Johnson et al. 1996; Bakola et al. 2012), and vergence angle influences the reaching activity of only a small fraction of PE cells (Ferraina et al. 2009). This could explain why in PE depth and direction signals are represented by distinct neuronal populations (Lacquaniti et al. 1995).
The PMd encodes a representation of both movement and sensory-related information (Alexander and Crutcher 1990; Johnson et al. 1996; Shen and Alexander 1997; Matelli et al. 1998). Fu et al. (1993) demonstrated that movement direction, distance, and target location are encoded in overlapping neuronal populations in PMd. In line with this evidence, Messier and Kalaska (2000) found that the majority of PMd cells encoded both movement distance and direction. Interestingly, both these studies reported that directional information was specified earlier in the task, that is, during the target cue or movement planning period, whereas movement distance exerted its effect mostly during movement execution. The present study reveals an increase in V6A of the number of depth-modulated neurons and an enhanced depth sensitivity as the task progressed. However, in contrast to the results in PMd, in V6A the direction and depth effects were comparable in the early stages of the task. It could be that signals about the retinotopic target location (i.e., target direction) are transmitted directly to PMd cells without interacting with vergence signals, in order to first specify the movement direction, which is more crucial in the initial stages of movement preparation and execution. Overall, the above-mentioned neurophysiological studies, as well as a recent functional magnetic resonance imaging study in humans (Fabbri et al. 2012), suggest that the processing of distance and direction information is more independent in PMd than in visuomotor parietal areas like V6A. This difference could be advantageous for the online control of arm movement, when the parietal and frontal reach nodes must interact more closely (Wise et al. 1997; Rizzolatti and Matelli 2003). Changes in the motor plan, as when the target's location is perturbed, are first reflected in PMd activity (Archambault et al. 2011) and are then relayed to the parietal areas, presumably via top-down processing.
Thus, the updating of the movement plans highlights the importance of feedback mechanisms in the encoding of reach direction and depth.
Localizing and moving toward targets in depth is computationally much more demanding and requires a greater degree of control (Danckert et al. 2009). In several studies where arm movements in 3D were performed, the variability of endpoints was found to be larger along the depth axis, where visual uncertainty is higher (Gordon et al. 1994; McIntyre et al. 1997; Apker et al. 2010). A way to achieve better motor control in depth could be to recruit neurons receiving inputs other than visual ones (e.g., proprioceptive signals, efference copy). SPL neurons do receive these signals and are presumably well suited for controlling reaching, especially in depth, as suggested by data from neurological patients. A larger impairment of depth processing after damage to the SPL was reported almost 100 years ago in a human case study (Holmes and Horrax 1919), and it was recently confirmed that patients with lesions in the SPL show a stronger deficit in depth than in direction during reaching (Baylis and Baylis 2001) and pointing (Danckert et al. 2009) movements.
In the present study, reaches were always performed toward foveated targets, so we cannot determine the frame of reference of the reach-related activity. The issue of reference frames was beyond the scope of the present work. In our study, depth information was represented more strongly than direction information. This difference could be attributed to the fact that, in our experimental set-up, the depth range explored included most of the possible depths, whereas the tested directions comprised a smaller fraction of directional space. Although we cannot exclude this explanation, there are several lines of evidence that argue against it. For instance, we did not find the influence of depth to be stronger than that of direction in the fixation epoch. In addition, although the range of explored directions was small (30°), it comprises most of the space where the gaze and hands interact with objects in everyday life. It is worth noting that shifts of gaze >15–20° are always accompanied by head movements (Freedman and Sparks 1997). Another factor that could explain the stronger depth effect is the difference in retinal size between targets located at different depths. However, if the stronger depth effect were due to the difference in retinal stimulation between near and far LEDs, we would expect it to be more pronounced in the early phases of the task, especially shortly after the target was fixated (FIX epoch), but this was not the case. In addition, as reported in Galletti et al. (1999), a substantial fraction of V6A cells are not visually responsive, so in those cases the different retinal stimulation cannot account for the stronger effect of depth. Overall, our view is that the limitations of our study listed above do not undermine our major finding that the depth and direction of reach targets are jointly processed in V6A.
Spatial Tuning in the Different Task Phases
In the present study, we reproduced the naturalistic conditions of eye–hand coordination in reaching, where the eyes fixate the target before the arm movement begins (Neggers and Bekkering 2001; Hayhoe et al. 2003). Shortly after target fixation, about 70% of V6A neurons were modulated by target depth and/or direction in space. These modulations likely reflect gaze-related activity and agree well with the previously demonstrated sensitivity of many V6A neurons to eye position, both in the frontoparallel plane (Galletti et al. 1995; Nakamura et al. 1999) and in depth (Hadjidimitrakis et al. 2011; Breveglieri et al. 2012). The spatial tuning of neural activity during target fixation could represent the target location in 3D space, essential information for controlling the reaching action.
During the planning epoch, the monkey continued to fixate the target while waiting for the go signal to start the arm movement. Activity during this time period is likely affected by both eye position and attentional signals, both reported to be present in V6A (Galletti et al. 1995, 2010). In addition, PLAN activity could be related to the programming of the arm movement, one of the key functions of the PPC (Snyder et al. 1997; Andersen and Buneo 2002). The increase in the selectivity of depth encoding that we found in this epoch probably reflects motor preparation processing, but more studies are needed in this regard.
The neural activity during the REACH epoch is expected to have many different contributions. During movement execution, V6A neurons could receive visual information about target position, proprioceptive and somatosensory inputs about hand and arm positions, and an efference copy of the arm motor command (Matelli et al. 1998; Breveglieri et al. 2002; Fattori et al. 2005; Gamberini et al. 2009; Bosco et al. 2010). Furthermore, the latency analysis revealed that the responses of the neurons modulated in REACH could depend on hand sensory signals and certainly depend on motor signals. Behavioral evidence suggests that target and hand somatomotor information is continuously compared during movement execution (Pélisson et al. 1986). In our body-out reaching task, the monkeys performed the arm movement in darkness, that is, without seeing the arm, so they relied on somatomotor information, but not on visual feedback. As mentioned in a previous section of the Discussion, movement in depth depends more on proprioception, which agrees well with the fact that V6A cells showed maximal depth selectivity during the movement period. The tuning of neural activity in the holding phase (HOLD) could be mainly attributed to proprioceptive and somatosensory inputs related to the static arm position in 3D space (Fattori et al. 2001; Breveglieri et al. 2002). It is interesting to note that the HOLD period was characterized by the highest incidence of modulation in depth.
It has been suggested that neurons in the PPC integrate spatially consistent retinal, eye, and hand information into "global tuning fields", and that this type of neural processing could be the substrate for eye–hand coordination (Battaglia-Mayer et al. 2001). Evidence of neurons with consistent spatial tuning between eye position, arm movement, and arm position-related activities was reported in the SPL, in particular in area PEc (Battaglia-Mayer et al. 2001). In that area, about 60% of neurons were found to have global tuning in direction across several epochs and tasks, whereas, in the inferior parietal lobule area 7a, a smaller incidence (∼25%) of such cells was reported (Battaglia-Mayer et al. 2005). A recent study employing a body-out reaching task reported that approximately 60% of 7a neurons changed or lost their directional tuning from fixation to preparation and movement execution (Heider et al. 2010). In the present work, we found a similar incidence (∼30%) of directional tuning consistency in area V6A, but we also found that tuning consistency was more frequent in depth than in direction, occurring in depth in about 50% of V6A neurons. This reflects, we believe, the higher difficulty of reaching objects located along the same line of sight but at different depths, compared with objects located along different lines of sight.
Based on the coincidence of modulations during the fixation and reaching epochs, we identified 3 main categories of V6A neurons that are activated during foveated reaches in 3D space, that is, the "FIX", "REACH", and "FIX–REACH" cells. This classification scheme might seem arbitrary, since our population of cells was characterized by a continuum of multiple modulations in the epochs studied. However, our intention was to emphasize the 2 crucial phases of the visuomotor transformation underlying the control of visually guided reaches, namely target localization and the encoding of the arm movement. FIX cells process target position through gaze signals, whereas REACH cells could encode the parameters of the arm movement regardless of the spatial location of the target. FIX–REACH cells integrate multiple signals on target location and arm movement. The ability to process eye- and arm-related signals independently (FIX cells and REACH cells) or in combination (FIX–REACH cells) highlights the key role of area V6A in transforming, also in the depth domain, the spatial information about the target into a motor command to reach it.
This work was supported by EU FP7-IST-217077-EYESHOTS; by Ministero dell'Università e della Ricerca (Italy), and by Fondazione del Monte di Bologna e Ravenna (Italy).
We thank Dr N. Marzocchi and G. Placenti for setting up the experimental apparatus, Dr L. Passarelli and Dr M. Gamberini for valuable assistance in the reconstruction of the penetrations, G. Dal Bo' for help during recordings, and A. Caroselli for help with Matlab. Conflict of Interest: None declared.