Abstract

During foveal reaching, the activity of neurons in the macaque medial posterior parietal area V6A is modulated by both gaze and arm direction. In the present work, we dissociated the position of gaze and reaching targets, and studied the neural activity of single V6A cells while the eyes and reaching targets were arranged in different spatial configurations (peripheral and foveal combinations). Target position influenced neural activity in all stages of the task, from visual presentation of the target and movement planning, through reach execution and holding time. The majority of neurons preferred reaches directed toward peripheral targets, rather than foveal ones. Most neurons discharged in both premovement and action epochs. In most cases, reaching activity was tuned coherently across action planning and execution. When reaches were planned and executed in different eye/target configurations, multiple analyses revealed that few neurons coded reaching actions according to the absolute position of the target, or to the position of the target relative to the eye. The majority of cells responded to a combination of both these factors. These data suggest that V6A contains multiple representations of spatial information for reaching, consistent with a role of this area in forming cross-reference frame representations to be used by premotor cortex.

Introduction

Area V6A, located in the caudal-most part of the superior parietal lobule (SPL) (Galletti et al. 1996, 1999), is a center of visuomotor integration involved in the control of arm reaching movements (Galletti et al. 2003; Gamberini et al. 2011). Usually, primates perform reaching movements while directing the gaze toward the object to be grasped. Under these conditions (foveal reaching), it was found that both direction of gaze (Galletti et al. 1995; Nakamura et al. 1999; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012) and direction of reaching movements (Fattori et al. 2005; Hadjidimitrakis et al. 2014b) strongly influence the activity of V6A neurons. A study specifically designed to evaluate the influence of gaze on V6A reach-related discharges showed that reaching activity was dramatically influenced by the direction of gaze. Indeed, V6A neurons that discharged vigorously for arm movements with a particular gaze direction could be completely silent during the same arm movement if the gaze was directed elsewhere (Marzocchi et al. 2008). The strong influence of gaze direction on the excitability of V6A neurons places severe constraints on the interpretation of data when changes in gaze direction accompany changes in the direction of arm movement (i.e., in foveal reaching). Dissociation of the 2 parameters is needed to understand the role played by each of them. Thus, to investigate the influence of target position and of arm direction independent of gaze direction, we employed a reaching task in which gaze was held constant during reaching movements toward different spatial locations.

A second aim of this work was to study the reference frame used by V6A reaching cells. In the parietal reach region (PRR), which sits near V6A, a predominantly eye-centered frame of reference was initially described (Snyder et al. 1997; Batista et al. 1999). Recently, Chang and Snyder (2010) reported that the reference frames in PRR are idiosyncratic to each neuron, with many neurons exhibiting representation of reaching targets that are intermediate between eye- and hand-related reference frames. McGuire and Sabes (2011) reported that neurons in areas 5 and MIP (the latter partially overlapping with PRR) use heterogeneous representations that do not belong to one simple and discrete reference frame. Using different combinations of gaze and target positions, we investigated whether V6A encodes the target position in an eye- or space-centered reference frame. Although we found a few neurons that encoded target position exclusively in an eye-centered frame of reference and a few that preferred a space-centered reference frame, most V6A neurons clearly preferred an intermediate frame of reference, consistent with recent findings in other posterior parietal areas.

Materials and Methods

Experiments were performed in accordance with National laws on care and use of laboratory animals, with European Council Directive of 24 November 1986 (86/609/EEC), and with the Directive of 22 September 2010 (2010/63/EU). All procedures were approved by the Bioethical Committee of the University of Bologna. Animals were purpose-bred and single-housed in a spacious and enriched environment.

Three trained Macaca fascicularis participated in this study (weight 3, 3.5, and 4.4 kg, respectively). During the behavioral experiments, monkeys were comfortably restrained in a custom primate chair designed for performance of controlled arm movements. Single-cell activity was recorded extracellularly from the anterior bank of the parieto-occipital sulcus using glass-coated metal microelectrodes with tip impedances of 0.8–2 MΩ at 1 kHz. Action potentials were sampled at 1 kHz in one monkey and at 100 kHz in the second. In the third monkey, multiple electrode penetrations were made using a 5-channel multielectrode recording system (Mini Matrix; Thomas Recording). Electrodes were quartz-platinum/tungsten fibers with an impedance of 0.5–2 MΩ at 1 kHz (Thomas Recording). The electrode signals were amplified (gain 10 000) and filtered (bandpass 0.5–5 kHz). Action potentials were isolated using a waveform discriminator (Multi Spike Detector; Alpha Omega Engineering), and spikes were sampled at 100 kHz. Eye movements were simultaneously recorded using an infrared oculometer (Dr Bouis, Germany, in 2 monkeys; ISCAN in the third) and sampled at 100 Hz.

Surgery to implant the recording apparatus and restraint headpost was performed aseptically and under general anesthesia. Animals were deeply anesthetized with sodium thiopental (8 mg/kg/h, i.v.). Heart rate, respiratory rate, and body temperature were monitored. Body temperature was controlled with a heating pad. In all animals, a metal head-holder and a steel recording chamber 20 mm in diameter were implanted on the skull using Refobacin Placos R (Merck) neurosurgical cement. The implanted recording chamber was oriented for access to the posterior part of SPL of both hemispheres. Analgesics were administered postoperatively (ketorolac tromethamine, 1 mg/kg i.m. immediately after surgery, and 1.6 mg/kg i.m. on the following days) to minimize pain or discomfort.

Constant-Gaze Reaching Task

All reaching tasks were performed in darkness to avoid visual feedback from arm movement, as well as other visual stimulation. The light background was switched on for a few minutes after each block of triplets of target locations to avoid dark adaptation. To further minimize the role of vision during reaching, LED brightness was strongly reduced. Arm movement tasks were executed with the contralateral limb, while maintaining gaze fixation on the central, straight-ahead position (Fig. 1A). As shown in Figure 1A, reaching movements started from a fixed position (home button, 2.5 cm in diameter) outside the animal's field of view, 5 cm in front of the chest on the midsagittal plane, and reached targets located in different spatial positions on a fronto-parallel panel. Each target was formed by a central green/red light-emitting diode (LED) encircled by a circular ring (12 mm in diameter, 4.8° of visual angle) illuminated by a yellow LED. The target was mounted on a microswitch embedded in the panel. The bicolor LED instructed the animal where to direct and maintain the gaze fixation (fixation target), whereas the yellow LED instructed the animal where to reach (reaching target).

Figure 1.

Experimental setup and time sequence of the Constant-Gaze Reaching Task. (A) Schematic of experimental setup. Reaching movements were performed in darkness, from a home button (black circle) toward 1 of the 3 targets (open circles) located on a panel in front of the animal. During the task, the monkey fixated on a LED directly to the front (indicated by the cross). (B) Time course of the task. The sequence of occurrence of the home button (HB), target button (TB), color of the fixation point (LED), and reaching target (TARGET) are shown. Arrows indicate beginning and end of the time intervals for each epoch. In the bottom series, typical examples of eye traces during a single trial are shown. From left to right, dotted lines indicate task and behavioral markers for: trial start (HB press), eye traces entering the fixation window after eye target appearance, reaching cue appearance (TARGET on), reaching cue offset (TARGET off), go signal for outward movement (LED red), start and end of outward reach movements (HB release and TB press, respectively), go signal for inward movement (LED offset), start and end of the inward movement (TB release and HB press, respectively), end of data acquisition. (C) Time epochs of trial.

The targets and the fixation points were placed on a panel located 14 cm in front of the monkey (Fig. 1A). The panel had 3 reaching targets distributed along a line, each 3.7 cm (15.4°) apart. The fixation point was placed at eye level, directly in front of the animal.

The time sequence of the reaching task is shown in Figure 1B. A trial began when the monkey pressed the home button. The animal was free to look around, and was not required to perform any eye or arm movement. After 1000 ms, the central LED lit up (LED green), which signaled the monkey to gaze at the fixation point and to maintain the button press while awaiting the instructional cue. After a delay of 1000–1500 ms, the yellow LED in one of the 3 target positions was illuminated for 150 ms, cueing the target for the upcoming reaching movement. The monkey then had to wait an additional 1000–1500 ms for a change in color of the fixation LED (green to red) without performing any eye or arm movement. The color change of the fixation target was the go-signal for the monkey to release the home button, and perform an arm movement toward the reaching target and press it. The animal held its hand on the reaching target until the fixation LED switched off (after 800–1200 ms). The offset of the fixation LED informed the monkey to release the reaching target, and to press the home button again, to be rewarded and start another trial.
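For orientation, the trial structure can be summarized schematically. The sketch below is purely illustrative: only the delay ranges are taken from the text, while the event names and the Python representation are our own and do not correspond to the original task-control code.

```python
# Illustrative outline of one trial of the Constant-gaze Reaching Task.
# Delay ranges (ms) are taken from the text; event names are hypothetical.
trial_sequence = [
    # (event,         delay before next event (ms),  behavioral requirement)
    ("HB press",      (1000, 1000), "trial start; gaze and arm unconstrained"),
    ("LED green",     (1000, 1500), "acquire central fixation, keep home button pressed"),
    ("TARGET on",     (150, 150),   "yellow cue flashes at 1 of the 3 target positions"),
    ("TARGET off",    (1000, 1500), "delay; maintain fixation and home-button press"),
    ("LED red (go)",  None,         "release home button, reach to and press the target"),
    ("TB press",      (800, 1200),  "hold hand on target until fixation LED offset"),
    ("LED off",       None,         "release target, press home button again; reward"),
]

for event, delay, requirement in trial_sequence:
    window = f"{delay[0]}-{delay[1]} ms" if delay else "response dependent"
    print(f"{event:14s} | {window:18s} | {requirement}")
```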

An electronic window control on eye movements forced the monkeys to fixate on the LED from LED onset (i.e., well before the go-signal for the outward reach) until offset (which cued the return reach). If fixation was broken during this interval, trials were interrupted on-line and discarded.

Correct performance of reaching movements was detected via monopolar microswitches (RS Components, UK) mounted under the home button and the reaching targets. Button presses/releases were recorded with 1 ms resolution (Kutz et al. 2005).

The 3 different target positions were tested in a randomized sequence of trials (45 each for 2 animals, and 30 for the third). No other visual stimuli were presented during the premovement, reaching execution or hand holding phases.

Other Reaching Tasks

Several neurons tested during the Constant-gaze Reaching Task (Fig. 2A) were also evaluated using variants of the task named Foveal Reaching Task (Fig. 2B) and Constant-reach Reaching Task (Fig. 2C). In the Foveal Reaching Task, the fixation target was always coincident with the reaching target. Therefore, reaching movements were always directed toward foveated targets, and the eye-centered coordinates of the reaching target (eye-target coordinates) remained constant throughout the task. Reaching/fixation targets were located straight ahead of the monkey, or 15.4° to the right or left.

Figure 2.

Geometry of tasks used to study the reference frames of reaching in V6A. (A) Constant-Gaze Reaching Task. Reaching movements were performed toward 1 of the 3 targets located on a panel in front of the animal (open circles). The spatial position of the target changed, but that of the gaze was kept constant, resulting in changes of eye/target configuration. Eye/target configurations are indicated from left to right as LT, TE, and RT, respectively. Details as in Figure 1. (B) Foveal Reaching Task. Reaching movements were performed toward 1 of the 3 targets located on the panel in front of the animal. During the task, the monkey had to fixate the reaching target (cross on the panel). The relative position of eye and target remained constant. Eye/target configurations are indicated from left to right as LTE, TE, and RTE, respectively. (C) Constant-reach Reaching Task. Reaching movements were performed toward the target located always straight-ahead on the panel. During the execution of the task, the monkey had to fixate a LED on the panel, which could be in 1 of 3 different positions (cross on the panel), resulting in always different eye/target configurations. Eye/target configurations are indicated from left to right as LE, TE, and RE, respectively. Rectangle circumscribes eye/target combination where target position varied; dashed rectangle indicates eye/target combinations where eye position changed.

Whenever possible, cells were also evaluated in the Constant-reach Reaching Task (Fig. 2C). In this task, the reaching target was held constant, whereas the fixation point could be to the left, centered, or to the right relative to the reaching target (the same positions as in the 2 previous tasks). The time sequence, LED position, task control, and other parameters were exactly as described for the Constant-gaze Reaching Task. Keeping the reaching target constant allowed for a constant direction of reaching movements, and thus precluded cell modulation resulting from the direction of arm movement. This allowed us to manipulate the eye-target coordinates (retinotopic coordinates) of the target, while keeping the spatial position of the reaching target constant.

Time sequences of the Foveal Reaching Task and the Constant-reach Reaching Task were as described for the Constant-gaze Reaching Task (Fig. 1B).

Eye/target configurations corresponding to all possible combinations tested in the 3 tasks are described as follows (Fig. 2): reach target to the left or to the right of eye position (LT or RT; Constant-gaze Reaching Task), reach target and eye positions superimposed at the left or at the right (LTE or RTE; Foveal Reaching Task), eye to the left or to the right of reach target (LE or RE; Constant-reach Reaching Task), reach target and eye position superimposed at the central position (TE; Constant-gaze Reaching Task, Foveal Reaching Task, and Constant-reach Reaching Task).
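For clarity, these configuration labels can be summarized as target and gaze positions relative to straight ahead. The mapping below is an illustrative Python summary (the dictionary and its name are our own); positions are in degrees of visual angle, negative to the left.

```python
# Hypothetical summary of the eye/target configuration labels used in the text,
# as (reaching-target position, gaze position) in degrees relative to straight ahead.
CONFIGURATIONS = {
    "LT":  (-15.4,   0.0),  # Constant-gaze task: target left of central gaze
    "RT":  (+15.4,   0.0),  # Constant-gaze task: target right of central gaze
    "LTE": (-15.4, -15.4),  # Foveal task: target and gaze both left
    "RTE": (+15.4, +15.4),  # Foveal task: target and gaze both right
    "LE":  (  0.0, -15.4),  # Constant-reach task: gaze left of central target
    "RE":  (  0.0, +15.4),  # Constant-reach task: gaze right of central target
    "TE":  (  0.0,   0.0),  # all tasks: target and gaze at the central position
}
```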

No results from the Constant-gaze Reaching Task have been described previously. We compared results from this task with the activity of neurons recorded in previous experiments; neurons already described in those studies accounted for fewer than 10% of the neurons included here (Fattori et al. 2005: 24 of 337, 7%; Marzocchi et al. 2008: 29 of 337, 9%).

Data Analysis

Data analysis was performed trial by trial. For each trial, we screened for a correlation between the neural discharge and the eye position and/or the target/arm-movement position. Each epoch was computed trial by trial using the yellow cue, the go-signal, and the button and target presses and releases as time markers. The VIS and MEM epochs are referred to as "premovement epochs," whereas MOV and HOLD are "action epochs." As shown in the bottom part of Figure 1B, the "functional" epochs defined in the present work were the following (a schematic computation of these epoch windows is sketched after the list):

  • FREE: from the beginning of the trial to the illumination of the fixation LED (this epoch was used as a reference period);

  • VIS: period in which possible transient visual responses to the target illumination could be evoked (from 40 to 150 ms after yellow cue offset);

  • MEM: delay period between the target illumination and the go signal (from 300 ms after yellow cue offset to the go signal for the reaching movement);

  • MOV: period in which activity evoked by reaching execution could be detected (from 200 ms before home button release to the end of the movement to the target);

  • HOLD: period of static hand position on the reached target (from the end of the forward reach (button press) to the offset of the fixation target).
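A minimal sketch of how these epoch windows could be derived from the per-trial event markers is shown below. The function, dictionary keys, and Python implementation are assumptions made for illustration; the original analyses used custom Matlab scripts.

```python
def epoch_windows(ev):
    """Return {epoch: (start_ms, end_ms)} for one trial, given event times in ms.
    `ev` is a hypothetical dict with keys: 'trial_start' (home-button press),
    'led_on' (fixation LED onset), 'cue_off' (yellow cue offset), 'go' (fixation
    LED turns red), 'hb_release' (home-button release), 'tb_press' (target press),
    and 'led_off' (fixation LED offset). Windows follow the definitions above."""
    return {
        # reference period: from trial start to fixation-LED onset
        "FREE": (ev["trial_start"], ev["led_on"]),
        # possible transient visual response to the cue
        "VIS":  (ev["cue_off"] + 40, ev["cue_off"] + 150),
        # delay between target illumination and the go signal
        "MEM":  (ev["cue_off"] + 300, ev["go"]),
        # reach execution: 200 ms before home-button release to target press
        "MOV":  (ev["hb_release"] - 200, ev["tb_press"]),
        # static hand on the target until fixation-LED offset
        "HOLD": (ev["tb_press"], ev["led_off"]),
    }
```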

Only those neurons for which the quantitative analysis could be performed on at least 7 trials for each target position were included in the final analysis, as proposed by Snedecor and Cochran (1989) and detailed in Kutz et al. (2003). For cells tested with different tasks, we used the interspike interval plot to ensure that changes in neuronal modulation were not caused simply by losing isolation of the neuron. Additionally, to ensure that the recording situation did not change across the 3 different tasks, we compared activity during epoch FREE in the central position tested in the 3 tasks by applying Student's t-test (P < 0.05). Only cells conforming to all of the above criteria were included in the analysis.

One-way ANOVA (factor: target position; dependent variable: activity during VIS, MEM, MOV, and HOLD) was used to compare neural activity within epochs in the Constant-gaze Reaching Task (except where noted, the significance level for all statistical tests was α = 0.05).

For neurons significantly influenced by target position during individual epochs (VIS, MEM, MOV, and HOLD), we assessed neural spatial preferences with a Bonferroni post hoc correction. As we recorded data in both hemispheres, target positions are represented in terms of ipsi- and contralateral hemifield with respect to the recording side. The incidence of cells preferring a specific spatial sector was compared using the χ2 test. For cells with a significant spatial preference, we evaluated whether spatial tuning was preserved as the task evolved, that is, whether the target position evoking the largest neural response remained the same across consecutive epochs.
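As a rough illustration of this pipeline (not the authors' Matlab code), the sketch below runs the one-way ANOVA on per-trial firing rates grouped by target position, applies Bonferroni-corrected pairwise comparisons to identify a preferred position, and tests the population-level incidence of preferences with a χ2 test. All function and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def target_tuning(rates_by_target, alpha=0.05):
    """rates_by_target: list of 3 arrays of per-trial firing rates (spikes/s) in a
    given epoch, one array per target position. Returns (is_tuned, preferred_index),
    where preferred_index is None if no single position beats all others."""
    _, p = stats.f_oneway(*rates_by_target)      # one-way ANOVA, factor: target position
    if p >= alpha:
        return False, None
    best = int(np.argmax([np.mean(r) for r in rates_by_target]))
    n_comp = len(rates_by_target) - 1            # Bonferroni correction factor
    for i, r in enumerate(rates_by_target):
        if i == best:
            continue
        _, p_pair = stats.ttest_ind(rates_by_target[best], r)
        if p_pair * n_comp >= alpha:             # best position must differ from each other one
            return True, None
    return True, best

# Population level: compare the incidence of contra- vs. ipsilateral preferences
# against an even split (chi-square goodness-of-fit; counts here are hypothetical).
n_contra, n_ipsi = 40, 20
chi2, p_chi = stats.chisquare([n_contra, n_ipsi])
```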

Effect of Eye/Target Combinations on Spatial Tuning of V6A Neurons

To better understand the tuning properties of V6A cells tested in all 3 tasks shown in Figure 2, we assessed whether the spatial preference for target side, and the specific eye/target combination that evoked the greatest response, were preserved in each neuron across the 3 types of task. We analyzed only the peripheral targets, because the central target was common to all tasks, for a total of 6 conditions for each neuron. Student's t-test was carried out to assess the influence of target side within each pair of conditions (LT vs. RT; LTE vs. RTE; LE vs. RE).
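A short sketch of these side comparisons, assuming per-trial firing rates are available for each eye/target configuration; the dictionary keys match the labels used in the text, but the function and data layout are hypothetical.

```python
from scipy import stats

def side_effect_per_task(rates, alpha=0.05):
    """rates: dict mapping an eye/target configuration label ('LT', 'RT', 'LTE',
    'RTE', 'LE', 'RE') to an array of per-trial firing rates in a given epoch.
    Returns, for each task, whether target side significantly changed the response."""
    pairs = {"constant_gaze":  ("LT", "RT"),
             "foveal":         ("LTE", "RTE"),
             "constant_reach": ("LE", "RE")}
    return {task: stats.ttest_ind(rates[a], rates[b]).pvalue < alpha
            for task, (a, b) in pairs.items()}
```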

Reference Frame Analysis

The availability of different combinations of gaze and target position allowed us to ask whether V6A neural activity encodes reaching movements according to the position of the target in extrinsic space (space-related), or to the relative position of gaze and target (eye/target-related). For those neurons recorded under all 3 task conditions, and for each epoch, we quantified the response of each cell to the location of the reaching target in space, or to the relative position of gaze and target, and then compared the 2 sensitivities. For this comparison, we computed the normalized Euclidean distance between neural responses for task conditions with the same spatial configuration (same spatial location) versus the same eye/target configuration (same relative position of eye and target), as follows:

$$\mathrm{dist} = \sqrt{\frac{\sum_{i=1}^{T} (n_i - m_i)^2}{T}}$$
where $n_i$ and $m_i$ are the normalized neural responses for the 2 configurations, and T is the number of targets. Neural responses were normalized between 0 and 1 by first subtracting the smallest firing rate observed in the 2 responses from all values, and then scaling all firing rates by the reciprocal of the largest firing rate. The result ranges between 0 and 1, where values near 0 indicate that the neural responses of a given cell for the 2 configurations were nearly identical, and values close to 1 indicate that the neural responses for the 2 configurations were maximally different. To compare the Euclidean distance for the same cell under the 2 different configurations, confidence intervals around the values of the distance were estimated using a bootstrap test. Synthetic response profiles were created by drawing N firing rates (with replacement) from the N repetitions of experimentally determined firing rates, and the Euclidean distance was recomputed using these firing rates. Ten thousand iterations were performed, and confidence intervals were estimated as the range that delimited 95% of the computed distances. Using the estimate of measurement noise for the population, we determined a border that encompassed 95% of the noise values. For each epoch, this border is a line of the form x + y = constant (see Fig. 7), where x is the distance between the space-aligned neural responses and y is the distance between the eye-aligned neural responses. Units within this border represent those neurons whose sensitivities to eye and spatial position were not statistically distinguishable from zero (Batista et al. 2007; Marzocchi et al. 2008; Bosco et al. 2010). All analyses were performed using custom scripts in Matlab (MathWorks).
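An illustrative Python sketch of the distance and bootstrap computation described above (the original analyses used custom Matlab scripts; the joint 0–1 normalization below is one reading of the normalization procedure, and all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_distance(n, m):
    """Normalized Euclidean distance between two response profiles n and m
    (mean firing rates for the T targets in the two conditions being compared)."""
    n, m = np.asarray(n, float), np.asarray(m, float)
    lo, hi = min(n.min(), m.min()), max(n.max(), m.max())
    if hi == lo:                                  # flat responses: define distance as 0
        return 0.0
    n, m = (n - lo) / (hi - lo), (m - lo) / (hi - lo)   # joint normalization to [0, 1]
    return float(np.sqrt(np.sum((n - m) ** 2) / n.size))

def bootstrap_ci(trials_a, trials_b, n_iter=10_000, ci=95):
    """trials_a, trials_b: lists (one entry per target) of arrays of per-trial firing
    rates for the two conditions. Trials are resampled with replacement, the distance
    is recomputed on the resampled means, and the CI bounds are returned."""
    dists = np.empty(n_iter)
    for k in range(n_iter):
        a = [rng.choice(t, size=len(t), replace=True).mean() for t in trials_a]
        b = [rng.choice(t, size=len(t), replace=True).mean() for t in trials_b]
        dists[k] = normalized_distance(a, b)
    half = (100 - ci) / 2
    return np.percentile(dists, [half, 100 - half])
```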

Results

We recorded the activity of V6A neurons in 3 animals that performed a Constant-gaze Reaching Task, designed to assess the effect of target position and arm-movement direction while keeping the gaze in a fixed position. A total of 337 tested cells fulfilled the requirements for quantitative analysis (see Materials and Methods).

Quantitative evaluation of neural modulation showed that target position had significant effects on neural activity in all time epochs. We found that firing rates were modulated in 37% of neurons (125 of 337) during the VIS epoch, in 54% of neurons during MEM (183 of 337) and MOV (182 of 337), and 50% of cells (169 of 337) in HOLD (one-way ANOVA and Bonferroni post hoc test, P < 0.05). These percentages include neurons presenting single and multiple modulations during consecutive epochs (Table 1). Examples of responses in the Constant-Gaze Reaching Task are shown in Figure 3. The neuron in Figure 3A displayed a strong visual response when the left target was illuminated, but the visual response disappeared when the target was illuminated in central or right positions. As about 60% of V6A neurons show a visual receptive field (Galletti et al. 1999), this behavior presumably reflects the stimulation of the visual receptive field of the neuron when the target to the left of fixation point was illuminated.

Table 1

Incidence of spatial modulations in 1 and in 2 epochs in V6A neurons (total cells = 337)

                                        VIS         MEM         MOV         HOLD
Single + multiple modulations           125 (37%)   183 (54%)   182 (54%)   169 (50%)
Modulations in one epoch only           21 (6%)     20 (6%)     12 (3%)     26 (8%)
Modulations in 2 consecutive epochs     VIS/MEM: 85 (25%)    MEM/MOV: 133 (39%)    MOV/HOLD: 115 (34%)
Figure 3.

Examples of spatial tuning in different task epochs in V6A. (A), Neuron spatially modulated during the visual epoch (VIS). The experimental condition is shown in the diagram in each panel. Traces indicate neuronal spike density functions (SDFs) with variability band [standard error of the mean (SEM), thickness of trace]; epochs are indicated by abbreviations. Raster displays show impulse activity, while the simultaneously recorded X and Y components of eye position are shown in the traces at the bottom of each panel. Long vertical ticks in raster displays are behavioral or task-related markers. For the complete sequence of task and behavioral markers, see Figure 1B. Neural activity and eye traces have been aligned on the onset of target presentation (yellow cue onset). Vertical scale bars on histograms: 125 spikes/s. Eye traces: 60°/division. (B) Neuron showing a visual response, planning-related activity, and reach execution activity when movements were planned and executed toward the right target. Alignment of SDFs and eye traces on the yellow cue onset and on the Go signal. Vertical scale: 56 spikes/s. (C), Neuron spatially tuned in both MEM and MOV, mostly active when the animal planned and performed movements to the right target position. Alignment on the onset of reaching movement. Vertical scale: 99 spikes/s. (D), Reaching neuron spatially tuned during MOV and HOLD. Neural activity and eye traces have been aligned on the end of the reaching movement. Both MOV and HOLD responses showed a preference for actions toward the right target. Vertical scale: 44 spikes/s. (E), Reaching neuron spatially tuned during the HOLD epoch, with a preference for holding the right target. Neural activity and eye traces have been aligned on the end of the reaching movement. Vertical scale: 60 spikes/s.

The cell in Figure 3B responded across multiple trial intervals. The discharge started at the onset of the cue signal (VIS; Fig. 3B, right panel), and was maintained tonically for the entire duration of the planning phase and reaching movement (MEM and MOV epochs), extending well beyond the duration of the visual stimulus. This cell showed a clear spatial preference, discharging strongly for rightward, but not for leftward, movements.

Neurons modulated during action epochs (MOV and HOLD) exhibited consistent spatial modulation during multiple epochs. For example, the cell in Figure 3C was modulated during planning and execution of reaching movements, with a preference for rightward movement. The planning activity of this neuron followed the same spatial preference as the reaching activity, discharging strongly after cue presentation when the animal was preparing the rightward movement. The neuron in Figure 3D was activated maximally when the animal moved the arm toward the central and right target positions. The same cell discharged tonically during hand holding on the target, in particular for the target located to the right. The neuron in Figure 3E was maximally activated for holding the hand rightward, and was almost silent during the other epochs of the trial.

The majority of neurons studied responded during 2 or more sequential epochs (Table 1). Cells responding in both VIS and MEM represented 25% (85 of 337) of the cell population, whereas 18% (62 of 337) were modulated in VIS, MEM, and MOV (Fig. 3B). Approximately 39% (133 of 337) of the cell population was tuned to both MEM and MOV, with 34% (115 of 337) in both MOV and HOLD. Cells discharging only during the VIS or MEM epochs represented 6% (21 of 337 and 20 of 337, respectively) of the entire V6A population, while those modulated only during MOV represented 3% of cells (12 of 337), and those modulated only during HOLD 8% (26 of 337).

It is evident from the previous description that the majority of V6A neurons encoded the reaching space in a way that required a multimodal integration of visual and motor information. In contrast, only a few cells showed a response restricted to one epoch, and consequently responded to a single type of information.

Figure 4A summarizes the distribution of preferred retinal locations for the reaching target across all the epochs of interest. Neurons were included in the distribution only if their activity in the preferred position was significantly different (Student's t-test with Bonferroni post hoc correction) from the other positions. The upper left part of Figure 4A shows that there was a significant spatial preference during VIS for contralateral positions across all cells (χ2 test, P < 0.05), an expected result given that V6A exhibits a preference for the contralateral visual hemifield (Galletti et al. 1999). No significant laterality effect was observed during the MEM or MOV epochs (χ2 test, n.s.). In contrast, ipsilateral targets were over-represented during HOLD (χ2 test, P < 0.05), with the majority of V6A neurons responding maximally while the hand was on a target in the ipsilateral hemifield. Discharges of V6A neurons during planning and movement epochs were not spatially correlated with visual responses (Fig. 4A), consistent with descriptions of saccade-related activity in LIP neurons (Colby et al. 1996).

Figure 4.

Frequency histograms of neuronal spatial preference in area V6A. (A), Data refer to the reaching target positions at which each neuron responded the most, expressed with respect to the recording side. Each epoch is represented by a separate histogram. Preference was defined by a Bonferroni post hoc test (P < 0.05). The results of the χ2 test are indicated by asterisks (P < 0.05). (B), Percentage of cells that preserved spatial tuning (black), and of those that changed their tuning (white), in pairs of consecutive epochs as the task progressed. Cells preserving spatial preference between VIS and MEM were 62%; between MEM and MOV, 84%; between MOV and HOLD, 59%.

Notably, a common response property during all task phases was the large number of cells preferring peripheral, as opposed to foveal, targets. During the VIS and HOLD epochs, encoding of foveal targets was observed in 15% and 27% of cells, respectively. This was significantly smaller than the fraction of cells encoding the 2 peripheral positions (see Fig. 4A, χ2 test, P < 0.05). In MEM and MOV, the number of cells preferring foveal targets, although somewhat smaller, was not significantly different from the number of cells that preferred peripheral targets (48% and 46%, respectively; see Fig. 4, χ2 test, P > 0.05).

Considering that most V6A cells were modulated across different task epochs, we checked the consistency of spatial tuning across epochs. Figure 4B shows the percentage of cells that maintained (black) or changed (white) spatial preference across consecutive epochs. The majority of V6A cells retained the same spatial preference across epochs, particularly for MEM and MOV, that is, during planning and execution of the arm movement, as shown by the individual examples in Figure 3.

Do V6A Neurons Encode the Position of Target in Space or the Position of Target Relative to the Gaze?

The patterns of firing rate modulation described thus far could be due to the spatial position of the reaching target, or to the relative position of the target with respect to the position of gaze. To distinguish between these alternatives, we compared cells tested with the Constant-gaze Reaching Task, the Foveal Reaching Task, and the Constant-reach Reaching Task (see Materials and Methods; Fig. 2A–C).

The 2 exemplar neurons in Figure 5A, B are representative of 2 neural patterns that largely reflect how the different eye/target combinations influenced neuronal discharges. The neuron shown in Figure 5A was spatially modulated during both premovement and reaching phases, showing the largest responses when the animal performed the 2 non-foveal tasks and reached for the target to the left of the gaze position (top-left and bottom-right panels; conditions LT and RE, respectively). This neuron was representative of the group of cells whose activation strongly depended on a specific eye/target configuration during execution of the tasks. Conversely, the neuron depicted in Figure 5B exhibited similar discharges for left and right eye/target combinations during the MOV epoch within the same task. This cell responded most strongly during reaching movements executed toward the central target, while gaze was fixated either to the left or to the right (bottom-left and bottom-right panels; conditions LE and RE, respectively). In this case, the neural response did not change according to the target side in space or the gaze/target relative position, but was based instead on the nature of the task.

Figure 5.

Spatial preference of V6A neurons for different eye/target combinations. (A), Side-dependent neuron modulated during VIS, MEM, and MOV epochs, showing sustained discharge for the eye/target combination in which the reaching target was to the left of the gaze position. Neural activity and eye traces have been aligned on the onset of reaching movement. Vertical scale: 50 spikes/s. (B), Task-dependent neuron strongly activated during epoch MOV and displaying the highest response when the animal performed movements toward the central position while fixating to the left or to the right. Neural activity and eye traces have been aligned on the onset of reaching movement. Vertical scale: 95 spikes/s. All details as in Figure 3. (C), Percentage of cells that showed spatial preference for a particular eye/target combination (side-dependent cells, black) and of those whose activity was independent of a single eye/target combination (task-dependent cells, white). Cells belonging to the side-dependent group represented 65% in epoch VIS, 70% in MEM, 66% in MOV, and 70% in HOLD.

For those neurons exhibiting significant spatial modulation in at least one of the tasks (N = VIS: 71 of 113; MEM: 87 of 113; MOV: 83 of 113; and HOLD: 79 of 113), we assessed response strength across all 6 conditions (LT, RT, LTE, RTE, LE, and RE). The responses of the entire population could be broadly divided into 2 patterns of activity. The majority of cells exhibited firing rate modulation for a single eye/target combination across all epochs (Fig. 5C, black) (VIS: 46 of 71; MEM: 61 of 87; MOV: 55 of 83; HOLD: 55 of 79). This group included cells where left and right eye/target arrangements evoked different, and in some cases opposite, patterns of neural activation (side-dependent cells).

The second, smaller group (VIS: 25 of 71; MEM: 26 of 87; MOV: 28 of 83; HOLD: 24 of 79) included neurons whose modulation was similar on both the left and right sides (task-dependent cells; Fig. 5C, white). Within the task-dependent cells, we identified a subgroup in which the left and right eye/target configurations elicited virtually identical responses within the same task. To characterize this behavior, we compared the responses of task-dependent cells across the various conditions (LT vs. RT; LTE vs. RTE; LE vs. RE, see Materials and Methods) using a t-test. In several cases, the statistical comparison of each pair of two-sided conditions yielded no significant differences in any of the 3 tasks (VIS: 4 of 25; MEM: 5 of 26; MOV: 5 of 28; HOLD: 7 of 24; t-test, P > 0.05).

These neural behaviors were also examined using Principal Components Analysis, as detailed in Supplementary Material, which yielded similar conclusions.

Different Reference Frames for Reaching

The previous analysis suggested that different reference frames could be employed by V6A neurons. The 2 frames studied in the present work are the eye/target-related and the space-related frame of reference. We use the term “eye/target-related” to describe cells whose discharge depended on the relative eye/target configuration, while “space-related” refers to cells whose discharge depended on target position (or on arm direction/position). In our experimental conditions, “space-related” indicates stability with respect to the initial position of the hand, head, and body. In Figure 6, we show 3 examples of cells highlighting the different spatial and eye/target-related configurations across the 3 tasks.

Figure 6.

Examples of V6A neurons employing different reference frames for reaching. (A–C) From top to bottom: neural activity in the Constant-gaze Reaching Task (LT, TE, and RT eye/target combinations), the Foveal Reaching Task (LTE, TE, and RTE eye/target combinations), and the Constant-reach Reaching Task (LE, TE, and RE eye/target combinations). Alignment on the onset of reaching movement. The color maps indicate the normalized mean activity for each epoch according to the color scale reported at the bottom of the figure. Rectangle, target position variation; dashed rectangle, eye position variation, as in Figure 2. (A) The neuron encodes reaching movements in a spatial frame of reference, showing a preference for the right hemispace regardless of the eye/target configuration. Vertical scale: 70 spikes/s. (B) Neuron encoding the reaching target in an eye/target-related reference frame. The neuron showed strong activation when the animal reached toward targets to the right with respect to the eye position. For all the other arm directions, it did not exhibit significant modulation. Vertical scale: 31 spikes/s. (C) Neuron showing activation during all epochs of interest, regardless of eye/target relative coordinates or extrinsic space coordinates. This response is an example of a mixed reference frame. Vertical scale: 48 spikes/s. All details are as in Figure 3.

The neuron in Figure 6A exhibited stronger discharges in MOV when the target was located at the center or to the right, rather than to the left (see first row of Fig. 6A; TE and RT). However, this pattern raises the question of whether this neuron's preference was for the central and right side of the animal's workspace, or the foveal and the right part of the visual space. Considering the strong responses observed when reaches were directed rightward, but with a different eye/target configuration (Fig. 6A, RTE and RE), it seems that the neuron preferred the right working space. That is, activity during reaching was encoded in a space-related frame of reference. Consistent with this interpretation, the same cell showed weak responses when the target was to the right of the fixation point, but shifted to the left in space-related coordinates (i.e., in the right part of the visual space; Fig. 6A, LE). This, in turn, ruled out the possibility that this cell discharged according to the relative position of gaze and reaching target. Figure 6A (RE) also shows a gaze effect on the reaching discharge, which is a phenomenon particularly relevant in area V6A (Marzocchi et al. 2008). We described this cell as “space-related.” The space-related behavior of this neuron is highlighted in the color maps of normalized mean activity [below each spike density function (SDF) in Fig. 6], which indicate the eye/target combinations that evoked the largest response during the epochs of interest.

The cell shown in Figure 6B was spatially tuned in MEM and MOV, and discharged maximally when the animal planned and reached to the right of the fixation point, that is, to the right-side target in the Constant-gaze Reaching Task (Fig. 6B, RT). Comparison of cell firing in the same spatial configuration (movement to the right, Fig. 6B, RTE) and in the same eye/target-related configuration (movement to the right of the fixation point, Fig. 6B, LE) indicates a preference for executing reaches to targets located to the right with respect to the fixation point, that is, in an eye/target-related reference frame. This neuron showed negligible discharges during both planning and movement phases in all other relative eye/target configurations, and for rightward reaching when the fixation point and reaching target were in register (Fig. 6B, LTE, RTE, and TE).

The neuron in Figure 6C did not exhibit either of the modulation patterns described above. Rather, this cell showed a strong gaze-related discharge that was evident in all the epochs of interest. The only condition that evoked a clear discharge was the one in which the gaze was directed to the left (LTE). In addition, this neuron was also influenced by hand position, as shown by the comparison of gaze-related discharges in the LTE and LE eye/target combinations: when the hand was on the right, the gaze-related discharge was less prominent. Thus, the modulation of this cell is best described as a complex interplay between the target/hand position and the gaze, rather than as a simple dependence on eye/target configuration.

To assess the frame of reference for reaching across the entire V6A neuronal population, we compared the difference (Euclidean distance) between response profiles collected in conditions sharing the same spatial position with that between profiles collected in conditions sharing the same eye/target configuration. In Figure 7, the 2 distances calculated for each neuron are plotted against one another. The Euclidean distance was calculated separately for neural discharges during each of the premovement and action epochs (top and bottom rows, respectively). In Figure 7, neurons that encoded target position in an eye/target-related reference frame are located below the diagonal (e.g., the cell shown in Fig. 6B; filled square). Conversely, neurons that encoded target position in a space-related frame of reference are located above the diagonal (e.g., Fig. 6A; filled triangle). Filled circles indicate cells whose bootstrap-estimated confidence intervals do not cross the diagonal. Empty circles represent neurons whose confidence intervals cross the diagonal, and that are therefore not statistically distinguishable with respect to eye/target- or space-related coordinates. We refer to these neurons as "mixed cells," to reflect that they could employ either reference frame, as in Figure 6C (filled diamond in Fig. 7).
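A hedged sketch of how individual neurons might then be assigned to the 3 categories plotted in Figure 7, given the 2 distances, a bootstrap confidence interval on their difference, and the epoch's noise border. Expressing the "CI crossing the diagonal" test as a CI on the difference is our simplification, and all names are hypothetical.

```python
def classify_reference_frame(d_space, d_eye, ci_diff_low, ci_diff_high, noise_border):
    """d_space: distance between space-aligned response profiles (x axis of Fig. 7);
    d_eye: distance between eye/target-aligned profiles (y axis);
    (ci_diff_low, ci_diff_high): bootstrap CI of (d_eye - d_space);
    noise_border: the value c of the epoch's noise line x + y = c."""
    if d_space + d_eye <= noise_border:
        return "below noise"               # sensitivities not distinguishable from zero
    if ci_diff_low > 0:                    # reliably above the diagonal (d_eye > d_space)
        return "space-related"
    if ci_diff_high < 0:                   # reliably below the diagonal (d_space > d_eye)
        return "eye/target-related"
    return "mixed"                         # confidence interval crosses the diagonal
```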

Figure 7.

Reference frames for reaching in the V6A population. Each point represents a neuron (VIS, N = 71; MEM, N = 87; MOV, N = 83; HOLD, N = 79). Filled circles indicate neurons whose bootstrap-estimated confidence intervals do not cross the diagonal. Empty circles indicate neurons whose bootstrap-estimated confidence intervals cross the diagonal. Neurons that lie above the diagonal were classified as employing space-related coordinates; neurons below the diagonal were classified as employing eye/target-related coordinates. Neurons falling outside the estimate of noise measurement (empty circles) encode reaching actions in mixed eye/target- and space-related coordinates. The error bars indicate the 95% confidence intervals surrounding this measure. Triangle, square, and diamond correspond to the neurons in Figure 6A–C, respectively (filled or empty, depending on significance or not). Small light gray diamonds illustrate noise in the measurement about a true value of 0 for each neuron. Dashed line indicates the border that includes 95% of the empty light gray diamonds. Equations of dashed lines are x + y = 0.63, x + y = 0.45, x + y = 0.5, and x + y = 0.49, for VIS, MEM, MOV, and HOLD epochs, respectively.

Mixed cells predominated in all epochs (Fig. 7), accompanied by several cells classified as eye/target-related (VIS: 10 of 71; MEM: 18 of 87; MOV: 16 of 83; and HOLD: 22 of 79). A few space-related cells were evident in VIS (2 of 71) and HOLD (4 of 79), while a higher incidence of this type was observed in MEM (10 of 87) and MOV (9 of 83). During visual encoding of target position, and during static postures on the reached target, almost all neurons were eye/target-related or showed a mixed reference frame. In contrast, while the reaching movement was being planned and performed, V6A neurons were more evenly distributed among the space-related, mixed, and eye/target-related reference frames.

We reconstructed the anatomical location of the recording sites and examined whether space-related cells, eye/target-related cells and mixed cells were spatially segregated within V6A, but found no evidence of such segregation.

Discussion

It is well known that V6A cells encode gaze direction signals (Galletti et al. 1995; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012) and reach direction signals (Fattori et al. 2005; Hadjidimitrakis et al. 2014b). To date, however, the encoding of reach direction signals has only been demonstrated in experiments where the reaching targets were coupled with gaze (foveal reaching). Here we investigated whether V6A neurons encode arm reach direction, or the spatial position of the reaching target (which co-varies with reach direction in this task), when the gaze is not directed to the target (peripheral reaching).

We examined the encoding of the reaching target during the different temporal stages of the task, including: visual appearance of the target; reach planning and execution; and target holding. During the premovement phase, the brief visual presentation of the reaching target modulated about 37% of V6A neurons. Some cells showed a transient response to target appearance (Fig. 3A). Others (Fig. 3B) responded to target presentation and maintained their output through the remaining task epochs even though the visual stimulus had already disappeared. Spatial tuning of the sustained discharge was highly consistent across epochs (Fig. 4B). Interestingly, we observed preservation of spatial tuning in the vast majority of V6A cells during task progression from movement planning through execution. This suggests an involvement of V6A in the preparation of the motor act, indicating a role for this area in visuomotor processing that extends from the capture of the visual target through the subsequent movement. Movement planning affected the activity of 54% of V6A cells in our sample. Note that in the nearby PRR (mostly corresponding to area MIP), the incidence of reach planning signals was about 70–80% (Chang et al. 2008; Hwang and Andersen 2012), clearly higher than in V6A. This indicates a weaker involvement of V6A in planning reaching movements compared with PRR.

The population analysis of space representation in V6A suggests a quite homogeneous encoding of working space during planning and execution of reaching movements, and an over-representation of contralateral targets during their visual presentation (Fig. 4A). Spatial preferences in the VIS epoch are consistent with the visual properties of V6A neurons, in which the visual receptive fields are mainly located in the contralateral hemifield (Galletti et al. 1999; Gamberini et al. 2011). It is interesting to note that the over-representation of ipsilateral targets in HOLD (Fig. 4A) cannot be explained by the visual stimulation evoked by the hand that holds the reaching target, as the arm is in the ipsilateral visual field, which is poorly represented in V6A. Furthermore, the action-related neuronal activity occurred in complete darkness, precluding the possibility that the spatial modulation observed during the reaching task could depend on visual inputs. It is thus likely associated with other factors, such as corollary discharges from motor centers, for example, premotor cortex, which are known to be directly connected to V6A (Shipp et al. 1998; Gamberini et al. 2009). Another possibility is that spatial modulation is due to proprioceptive information from the arm, which has been demonstrated in V6A (Breveglieri et al. 2002). Consistent with this possibility, optic ataxia patients, who have lesions likely involving the human homolog of macaque V6A (see Battaglini et al. 2002; Karnath and Perenin 2005; Pitzalis et al. 2013), showed deficits in extracting the spatial location of the ataxic (contralesional) hand from multijoint proprioceptive information, and showed pointing errors when proprioceptive pointing movements were performed with the same ataxic hand (Blangero et al. 2007).

In this study, and in previous work from our laboratory (Marzocchi et al. 2008), we have shown that V6A neurons prefer reaches directed to peripheral, rather than foveal, targets (see Fig. 4A). Preference for peripheral reaching has also been reported in a region of the human brain (Prado et al. 2005) that likely includes the human homolog of monkey area V6A (Pitzalis et al. 2013). Additional evidence for this preference comes from the work of Beurze et al. (2010), who demonstrated that in posterior parietal cortex and dorsal premotor cortex, planning peripheral reaches is metabolically more costly than planning foveal reaches.

Frames of Reference for Reaching in Area V6A

In the present work, we investigated whether V6A neural activity encoded the spatial location of the reaching target, or its position with respect to the gaze (i.e., retinotopic, or eye-centered position). Because the spatial positions of the target were always in register with respect to the world, and the monkey's head and body, these 3 frames of reference (world-centered, head-centered, body-centered) could not be studied separately here, and we refer to them collectively as the "space" frame of reference. In addition, in our experimental conditions, we could not distinguish between target position and reach direction, as they co-varied when the monkey performed a given reaching movement from a fixed starting hand position. Although the reaching positions tested in the present work provide an effective estimate of the complex relationship between these spatial variables, we are aware that, given the limited number of positions tested here, further studies will be required to fully characterize the spatial selectivity of V6A neurons in the entire reachable space.

The Constant-gaze Reaching Task alone did not allow us to determine whether the discharge pattern of a cell resulted from different spatial positions of the targets, or from different positions of the reaching target relative to the gaze. This ambiguity arises because each time the reaching target changed position with respect to the gaze, it also changed position in space. However, in the present work, we compared the reaching activity of the same neuron across 3 tasks with different eye/target configurations (Fig. 2). This allowed us to determine whether the neural modulation was due to the change of target position in space, or to the change in the position of the target relative to the gaze.
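
To make the logic of this comparison concrete, the following sketch (illustrative Python only, not the analysis code used in this study; the firing rates and the three-target layout are hypothetical) compares a cell's target tuning across two configurations after aligning the conditions either by target position in space or by target position relative to the eye. Tuning that is preserved under one alignment but not the other points to the corresponding reference frame.

```python
import numpy as np

def tuning_correlation(rates_a, rates_b):
    """Pearson correlation between a cell's target tuning measured in two
    different eye/target configurations."""
    return np.corrcoef(rates_a, rates_b)[0, 1]

# Hypothetical trial-averaged firing rates (spikes/s) of one cell for a set of
# targets in configuration 1, and the same data from configuration 2 aligned in
# two alternative ways:
#   - space-aligned:  entries matched by the target's position in space
#   - retina-aligned: entries matched by the target's position relative to gaze
config1 = np.array([12.0, 30.0, 18.0])
config2_space_aligned = np.array([11.0, 28.0, 20.0])
config2_retina_aligned = np.array([25.0, 14.0, 11.0])

r_space = tuning_correlation(config1, config2_space_aligned)
r_retina = tuning_correlation(config1, config2_retina_aligned)

# Tuning preserved under the spatial alignment but not under the retinal one
# points to a space frame of reference (and vice versa); comparable values in
# both alignments would suggest a mixed cell.
print(f"space-aligned r = {r_space:.2f}, retina-aligned r = {r_retina:.2f}")
```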

The evaluation of the eye/target configuration evoking the best response highlighted 2 patterns of reaching target encoding, and hence we describe 2 types of neurons: side-dependent and task-dependent cells (Fig. 5). Side-dependent cells comprise the majority of the population, exhibiting complex behavior in which the spatial preference changed or inverted across task conditions. The side-dependent group also includes neurons whose discharge depended on the relative position of eye and reaching target, and neurons with intermediate behavior between eye/target-related and space-related encoding of reaching.

Cells in the task-dependent category were far less common in V6A. This group includes cells with similar response profiles during reaching toward the left and right target positions; task-dependent cells were modulated only by the nature of the task. Thus, we can confidently state that these cells encode both peripheral target positions in a balanced way, and we might suppose for these neurons a “straightforward” coding of reaching space in absolute coordinates.

Galletti et al. (1993) reported the presence of “real position” cells in V6A, whose visual receptive fields did not shift with gaze. In that experiment, the animals did not perform any arm movements, and the authors inferred that these neurons directly encoded visual space in craniotopic coordinates (Galletti et al. 1993, 1995). The present data suggest that the “real position” concept could be extended to the motor-related activity of the task-dependent cells described here.

We applied a bootstrap analysis to characterize the frames of reference used by V6A neurons for reaching (Fig. 7). This approach showed that the majority of V6A neurons exhibited “mixed” reference frames, that is, they did not have a unique frame of reference organizing their activity. Cells with this response profile employed both space-related and eye/target-related frames of reference across the different epochs. It is worth noting that purely space-related encoding was rare compared with the other 2 categories (eye/target-related and mixed). It remains to be determined whether the response pattern of “mixed” cells represents an intermediate step between space-related and eye/target-related frames of reference, or a genuinely different way of encoding reaching targets.
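
For readers who wish to reproduce this kind of analysis, the sketch below outlines a generic percentile-bootstrap classification of single cells into space-related, eye/target-related, or mixed categories. It is only a minimal illustration under assumed data structures (arrays of single-trial firing rates per condition); the statistic and the criteria actually used for Figure 7 are those described in Materials and Methods and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(trials_a, trials_b, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the difference in mean firing rate between
    two sets of single-trial responses (conditions a and b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        resample_a = rng.choice(trials_a, size=len(trials_a), replace=True)
        resample_b = rng.choice(trials_b, size=len(trials_b), replace=True)
        diffs[i] = resample_a.mean() - resample_b.mean()
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def excludes_zero(ci):
    lo, hi = ci
    return lo > 0 or hi < 0

def classify_reference_frame(same_space_pair, same_retina_pair):
    """Toy classification of one cell.

    same_space_pair:  trial rates from 2 conditions in which the target keeps
                      the SAME position in space but changes position on the
                      retina (a difference here implies eye/target coding).
    same_retina_pair: trial rates from 2 conditions in which the target keeps
                      the SAME position on the retina but changes position in
                      space (a difference here implies space coding).
    """
    eye_effect = excludes_zero(bootstrap_ci(*same_space_pair))
    space_effect = excludes_zero(bootstrap_ci(*same_retina_pair))
    if space_effect and not eye_effect:
        return "space-related"
    if eye_effect and not space_effect:
        return "eye/target-related"
    if eye_effect and space_effect:
        return "mixed"
    return "unclassified"

# Example with simulated trials (spikes/s):
same_space = (rng.normal(20, 3, 12), rng.normal(32, 3, 12))   # retinal change matters
same_retina = (rng.normal(20, 3, 12), rng.normal(21, 3, 12))  # spatial change does not
print(classify_reference_frame(same_space, same_retina))      # likely "eye/target-related"
```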

Comparison with Nearby Areas and Functional Suggestions

Our results differ in several respects from observations reported for nearby reach-related areas of superior parietal cortex. For instance, reaching activity in V6A is not predominantly organized in an eye-centered frame of reference, in contrast to area PRR (Snyder et al. 1997; Batista et al. 1999), and it is not organized in global tuning fields, as reported for area PEc (Lacquaniti et al. 1995; Battaglia-Mayer et al. 2003). Our data are consistent with recent work showing that neurons in areas 5 and MIP do not fall into a rigid schema of eye- or space-related tuning, but rather incorporate complex and heterogeneous reference frames (McGuire and Sabes 2011). Our data are also in line with a recent description of reference frames for reaching in area PRR, which range from eye-related to hand-related and include many cells exhibiting an intermediate representation (Chang and Snyder 2010). Hybrid representations similar to those in V6A have also been found in areas of the intraparietal sulcus, including LIP and MIP: when monkeys performed visual and auditory saccades from different initial eye positions, both visual and auditory signals reflected a hybrid between head- and eye-centered coordinates during the execution of eye movements (Mullette-Gillman et al. 2009). Mixed reference frames in V6A were also described in recent work in which the initial hand position was varied during foveal reaching toward targets located in different parts of the 3D peripersonal space (Hadjidimitrakis et al. 2014a). In that study, most V6A cells encoded targets in mixed body-centered and hand-centered coordinates, while several cells encoded targets exclusively in body-centered coordinates.

The high incidence of V6A neurons with mixed frames of reference suggests that this area represents a stage in the visuomotor process where spatially integrated information about the reaching target and arm movement parameters could be transferred to premotor areas. Anatomical evidence indicates that V6A is directly connected with the dorsal premotor area 6 (Matelli et al. 1998; Shipp et al. 1998; Gamberini et al. 2009). Moreover, many studies have reported that, during movement planning, neurons in dorsal premotor cortex are influenced by both eye position and the spatial coordinates of reaching targets, as we observed in V6A neurons (Pesaran et al. 2006, 2010; Batista et al. 2007). The interactions between V6A and the dorsal premotor cortex could allow the coordinate transformations required for correct arm movement control.

The presence in V6A of neurons employing different coordinate systems for reaching (eye/target-related coordinates, spatial coordinates, mixed representations) may also reflect the many types of signals (visual, proprioceptive, and motor-related) processed in this cortical area (Breveglieri et al. 2002; Bosco et al. 2010). Different eye/target configurations likely generate different sensory inputs, and the interplay between eye-related, visual, and somatosensory information within the V6A neural population could facilitate a task-dependent reweighting of sensory signals that creates a mixed frame of reference, both while the action is being planned and while it is executed.

The coordinate systems described above are consistent with recent computational models of reach planning, which suggest that reaching movements are planned using a task-dependent, weighted combination of polymodal sensory information (see Ernst and Bulthoff 2004 for a review; Sober and Sabes 2005; McGuire and Sabes 2009). This hypothesis is also supported by the observation that optic ataxia patients are unable to simultaneously represent visual information defined across different frames of reference (Jackson et al. 2009). An intact V6A, the area presumably damaged in optic ataxia patients (Karnath and Perenin 2005), could help in forming such a cross-reference frame representation.
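
To illustrate the kind of weighted combination these models propose, consider the standard minimum-variance integration of a visual estimate $\hat{x}_v$ and a proprioceptive estimate $\hat{x}_p$ of target (or hand) position; this is a textbook formulation in the spirit of Ernst and Bulthoff (2004), not an equation taken from the studies cited above:

\[
\hat{x} = w_v\,\hat{x}_v + w_p\,\hat{x}_p,
\qquad
w_v = \frac{\sigma_p^{2}}{\sigma_v^{2}+\sigma_p^{2}},
\qquad
w_p = \frac{\sigma_v^{2}}{\sigma_v^{2}+\sigma_p^{2}}.
\]

The combined estimate has variance $\sigma_v^{2}\sigma_p^{2}/(\sigma_v^{2}+\sigma_p^{2})$, lower than that of either cue alone; if a coordinate transformation inflates the noise of one cue, its weight decreases accordingly, which captures the task-dependent reweighting discussed above.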

Conclusions

Our data suggest that reach-related neurons in V6A code reaching actions using several different reference frames: space-related, eye/target-related, and mixed. The multiplicity of V6A reference frames could emerge through the concatenation of dynamically specific reference frames used to plan and guide reaching, which in turn could create a hybrid representation suitable for the specific demands of particular tasks (see also Jackson et al. 2009). The mixed reference frames observed in V6A could arise from the different reference frames in which the various sensory signals reaching V6A are encoded (i.e., eye/target-related for visual inputs and space-related for proprioceptive inputs). To use these signals, transformations between reference frames are necessary, but such transformations add bias and variability (Sober and Sabes 2005; Schlicht and Schrater 2007). A mixed representation of target position during reach planning and execution reduces the noise derived from sensory transformations (Deneve et al. 2001; Avillac et al. 2005; McGuire and Sabes 2009, 2011). The use of a control system that simultaneously represents different kinds of reference frames, as we observed in the mixed neurons of V6A, could reduce this variability and support reliable motor performance. The target location can be encoded in the different types of V6A cells by integrating visual information with ongoing skeleto-motor signals concerning the current state of the arm and with corollary discharges of arm movement execution, all signals available in V6A (see also Bosco et al. 2010; Gamberini et al. 2011). This is consistent with the view that V6A is a node of the dorso-medial visual stream involved in the online control of movements.

Funding

This work was supported by EU FP7-IST-217077-EYESHOTS, by Ministero dell'Università e della Ricerca (Italy) and by Fondazione del Monte di Bologna e Ravenna (Italy).

Notes

We thank G. Placenti for setting up the experimental apparatus, E. Chinellato for help with the PCA analysis, M. Reser for proofreading assistance, and M. Gamberini for the histological reconstruction of penetrations. Conflict of Interest: None declared.

References

Avillac M, Deneve S, Olivier E, Pouget A, Duhamel JR. 2005. Reference frames for representing visual and tactile locations in parietal cortex. Nat Neurosci. 12:12.

Batista AP, Buneo CA, Snyder LH, Andersen RA. 1999. Reach plans in eye-centered coordinates. Science. 285:257–260.

Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. 2007. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol. 98:966–983.

Battaglia-Mayer A, Caminiti R, Lacquaniti F, Zago M. 2003. Multiple levels of representation of reaching in the parieto-frontal network. Cereb Cortex. 13:1009–1022.

Battaglini PP, Muzur A, Galletti C, Skrap M, Brovelli A, Fattori P. 2002. Effects of lesions to area V6A in monkeys. Exp Brain Res. 144:419–422.

Beurze SM, Toni I, Pisella L, Medendorp WP. 2010. Reference frames for reach planning in human parietofrontal cortex. J Neurophysiol. 104:1736–1745.

Blangero A, Ota H, Delporte L, Revol P, Vindras P, Rode G, Boisson D, Vighetto A, Rossetti Y, Pisella L. 2007. Optic ataxia is not only ‘optic’: impaired spatial integration of proprioceptive information. Neuroimage. 36:T61–T68.

Bosco A, Breveglieri R, Chinellato E, Galletti C, Fattori P. 2010. Reaching activity in the medial posterior parietal cortex of monkeys is modulated by visual feedback. J Neurosci. 30:14773–14785.

Breveglieri R, Hadjidimitrakis K, Bosco A, Sabatini SP, Galletti C, Fattori P. 2012. Eye position encoding in three-dimensional space: integration of version and vergence signals in the medial posterior parietal cortex. J Neurosci. 2012:159–169.

Breveglieri R, Kutz DF, Fattori P, Gamberini M, Galletti C. 2002. Somatosensory cells in the parieto-occipital area V6A of the macaque. Neuroreport. 13:2113–2116.

Chang SW, Dickinson AR, Snyder LH. 2008. Limb-specific representation for reaching in the posterior parietal cortex. J Neurosci. 28:6128–6140.

Chang SW, Snyder LH. 2010. Idiosyncratic and systematic aspects of spatial representations in the macaque parietal cortex. Proc Natl Acad Sci USA. 107:7951–7956.

Colby CL, Duhamel JR, Goldberg ME. 1996. Visual, presaccadic, and cognitive activation of single neurons in monkey lateral intraparietal area. J Neurophysiol. 76:2841.

Deneve S, Latham PE, Pouget A. 2001. Efficient computation and cue integration with noisy population codes. Nat Neurosci. 4:553–562.

Ernst MO, Bulthoff HH. 2004. Merging the senses into a robust percept. Trends Cogn Sci. 8:162–169.

Fattori P, Kutz DF, Breveglieri R, Marzocchi N, Galletti C. 2005. Spatial tuning of reaching activity in the medial parieto-occipital cortex (area V6A) of macaque monkey. Eur J Neurosci. 22:956–972.

Galletti C, Battaglini PP, Fattori P. 1995. Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur J Neurosci. 7:2486–2501.

Galletti C, Battaglini PP, Fattori P. 1993. Parietal neurons encoding spatial locations in craniotopic coordinates. Exp Brain Res. 96:221–229.

Galletti C, Fattori P, Battaglini PP, Shipp S, Zeki S. 1996. Functional demarcation of a border between areas V6 and V6A in the superior parietal gyrus of the macaque monkey. Eur J Neurosci. 8:30–52.

Galletti C, Fattori P, Kutz DF, Gamberini M. 1999. Brain location and visual topography of cortical area V6A in the macaque monkey. Eur J Neurosci. 11:575–582.

Galletti C, Kutz DF, Gamberini M, Breveglieri R, Fattori P. 2003. Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Exp Brain Res. 153:158–170.

Gamberini M, Galletti C, Bosco A, Breveglieri R, Fattori P. 2011. Is the medial posterior parietal area V6A a single functional area? J Neurosci. 31:5145–5157.

Gamberini M, Passarelli L, Fattori P, Zucchelli M, Bakola S, Luppino G, Galletti C. 2009. Cortical connections of the visuomotor parietooccipital area V6Ad of the macaque monkey. J Comp Neurol. 513:622–642.

Hadjidimitrakis K, Bertozzi F, Breveglieri R, Fattori P, Galletti C. 2014a. Body-centered, mixed, but not hand-centered coding of visual targets in the medial posterior parietal cortex during reaches in 3D space. Cereb Cortex. 24:3209–3220.

Hadjidimitrakis K, Bertozzi F, Breveglieri R, Bosco A, Galletti C, Fattori P. 2014b. Common neural substrate for processing depth and direction signals for reaching in the monkey medial posterior parietal cortex. Cereb Cortex. 24:1645–1657.

Hadjidimitrakis K, Breveglieri R, Placenti G, Bosco A, Sabatini SP, Fattori P. 2011. Fix your eyes in the space you could reach: neurons in the macaque medial parietal cortex prefer gaze positions in peripersonal space. PLoS One. 6:e23335.

Hwang EJ, Andersen RA. 2012. Spiking and LFP activity in PRR during symbolically instructed reaches. J Neurophysiol. 107:836–849.

Jackson SR, Newport R, Husain M, Fowlie JE, O'Donoghue M, Bajaj N. 2009. There may be more to reaching than meets the eye: re-thinking optic ataxia. Neuropsychologia. 47:1397–1408.

Karnath HO, Perenin MT. 2005. Cortical control of visually guided reaching: evidence from patients with optic ataxia. Cereb Cortex. 15:1561–1569.

Kutz DF, Fattori P, Gamberini M, Breveglieri R, Galletti C. 2003. Early- and late-responding cells to saccadic eye movements in the cortical area V6A of macaque monkey. Exp Brain Res. 149:83–95.

Kutz DF, Marzocchi N, Fattori P, Cavalcanti S, Galletti C. 2005. Real-time supervisor system based on trinary logic to control experiments with behaving animals and humans. J Neurophysiol. 93:3674–3686.

Lacquaniti F, Guigon E, Bianchi L, Ferraina S, Caminiti R. 1995. Representing spatial information for limb movement: role of area 5 in the monkey. Cereb Cortex. 5:391–409.

Marzocchi N, Breveglieri R, Galletti C, Fattori P. 2008. Reaching activity in parietal area V6A of macaque: eye influence on arm activity or retinocentric coding of reaching movements? Eur J Neurosci. 27:775–789.

Matelli M, Govoni P, Galletti C, Kutz DF, Luppino G. 1998. Superior area 6 afferents from the superior parietal lobule in the macaque monkey. J Comp Neurol. 402:327–352.

McGuire LM, Sabes PN. 2011. Heterogeneous representations in the superior parietal lobule are common across reaches to visual and proprioceptive targets. J Neurosci. 31:6661–6673.

McGuire LM, Sabes PN. 2009. Sensory transformations and the use of multiple reference frames for reach planning. Nat Neurosci. 12:1056–1061.

Mullette-Gillman OA, Cohen YE, Groh JM. 2009. Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cereb Cortex. 19:1761–1775.

Nakamura K, Chung HH, Graziano MSA, Gross CG. 1999. Dynamic representation of eye position in the parieto-occipital sulcus. J Neurophysiol. 81:2374–2385.

Pesaran B, Nelson MJ, Andersen RA. 2006. Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron. 51:125–134.

Pesaran B, Nelson MJ, Andersen RA. 2010. A relative position code for saccades in dorsal premotor cortex. J Neurosci. 30:6527–6537.

Pitzalis S, Sereno MI, Committeri G, Fattori P, Galati G, Tosoni A, Galletti C. 2013. The human homologue of macaque area V6A. Neuroimage.

Prado J, Clavagnier S, Otzenberger H, Scheiber C, Kennedy H, Perenin MT. 2005. Two cortical systems for reaching in central and peripheral vision. Neuron. 48:849–858.

Schlicht EJ, Schrater PR. 2007. Impact of coordinate transformation uncertainty on human sensorimotor control. J Neurophysiol. 97:4203–4214.

Shipp S, Blanton M, Zeki S. 1998. A visuo-somatomotor pathway through superior parietal cortex in the macaque monkey: cortical connections of areas V6 and V6A. Eur J Neurosci. 10:3171–3193.

Snedecor GW, Cochran WG. 1989. Statistical methods. 8th ed. Ames: Iowa State University Press.

Snyder LH, Batista AP, Andersen RA. 1997. Coding of intention in the posterior parietal cortex. Nature. 386:167–170.

Sober SJ, Sabes PN. 2005. Flexible strategies for sensory integration during motor planning. Nat Neurosci. 8:490–497.