During foveal reaching, the activity of neurons in the macaque medial posterior parietal area V6A is modulated by both gaze and arm direction. In the present work, we dissociated the position of gaze and reaching targets, and studied the neural activity of single V6A cells while the eyes and reaching targets were arranged in different spatial configurations (peripheral and foveal combinations). Target position influenced neural activity in all stages of the task, from visual presentation of target and movement planning, through reach execution and holding time. The majority of neurons preferred reaches directed toward peripheral targets, rather than foveal. Most neurons discharged in both premovement and action epochs. In most cases, reaching activity was tuned coherently across action planning and execution. When reaches were planned and executed in different eye/target configurations, multiple analyses revealed that few neurons coded reaching actions according to the absolute position of target, or to the position of target relative to the eye. The majority of cells responded to a combination of both these factors. These data suggest that V6A contains multiple representations of spatial information for reaching, consistent with a role of this area in forming cross-reference frame representations to be used by premotor cortex.
Area V6A, located in the caudal-most part of the superior parietal lobule (SPL) (Galletti et al. 1996, 1999), is a center of visuomotor integration involved in the control of arm reaching movements (Galletti et al. 2003; Gamberini et al. 2011). Usually, primates perform reaching movements while directing the gaze toward the object to be grasped. Under these conditions (foveal reaching), it was found that both direction of gaze (Galletti et al. 1995; Nakamura et al. 1999; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012) and direction of reaching movements (Fattori et al. 2005; Hadjidimitrakis et al. 2014b) strongly influence the activity of V6A neurons. A study specifically designed to evaluate the influence of gaze on V6A reach-related discharges showed that reaching activity was dramatically influenced by the direction of gaze. Indeed, V6A neurons that discharged vigorously for arm movements with a particular gaze direction could be completely silent during the same arm movement if the gaze was directed elsewhere (Marzocchi et al. 2008). The strong influence of gaze direction on the excitability of V6A neurons places severe constraints on the interpretation of data when changes in gaze direction accompany changes in the direction of arm movement (i.e., in foveal reaching). Dissociation of the 2 parameters is needed to understand the role played by each of them. Thus, to investigate the influence of target position and of arm direction independent of gaze direction, we employed a reaching task in which gaze was held constant during reaching movements toward different spatial locations.
A second aim of this work was to study the reference frame used by V6A reaching cells. In the parietal reach region (PRR), which sits near V6A, a predominantly eye-centered frame of reference was initially described (Snyder et al. 1997; Batista et al. 1999). Recently, Chang and Snyder (2010) reported that the reference frames in PRR are idiosyncratic to each neuron, with many neurons exhibiting representation of reaching targets that are intermediate between eye- and hand-related reference frames. McGuire and Sabes (2011) reported that neurons in areas 5 and MIP (the latter partially overlapping with PRR) use heterogeneous representations that do not belong to one simple and discrete reference frame. Using different combinations of gaze and target positions, we investigated whether V6A encodes the target position in an eye- or space-centered reference frame. Although we found a few neurons that encoded target position exclusively in an eye-centered frame of reference and a few that preferred a space-centered reference frame, most V6A neurons clearly preferred an intermediate frame of reference, consistent with recent findings in other posterior parietal areas.
Materials and Methods
Experiments were performed in accordance with national laws on the care and use of laboratory animals, with the European Council Directive of 24 November 1986 (86/609/EEC), and with the Directive of 22 September 2010 (2010/63/EU). All procedures were approved by the Bioethical Committee of the University of Bologna. Animals were purpose-bred and singly housed in a spacious and enriched environment.
Three trained Macaca fascicularis participated in this study (weighing 3, 3.5, and 4.4 kg, respectively). During the behavioral experiments, monkeys were comfortably restrained in a custom primate chair designed for the performance of controlled arm movements. Single-cell activity was recorded extracellularly from the anterior bank of the parieto-occipital sulcus using glass-coated metal microelectrodes with tip impedances of 0.8–2 MΩ at 1 kHz. Action potentials were sampled at 1 kHz in the first monkey and at 100 kHz in the second. In the third monkey, multiple electrode penetrations were made using a 5-channel multielectrode recording system (Mini Matrix; Thomas Recording). Electrodes were quartz-platinum/tungsten fibers with an impedance of 0.5–2 MΩ at 1 kHz (Thomas Recording). The electrode signals were amplified (gain 10 000) and filtered (bandpass 0.5–5 kHz). Action potentials were isolated using a waveform discriminator (Multi Spike Detector; Alpha Omega Engineering), and spikes were sampled at 100 kHz. Eye movements were simultaneously recorded using an infrared oculometer (Dr Bouis, Germany, for 2 monkeys; ISCAN for the third) and sampled at 100 Hz.
Surgery to implant the recording apparatus and restraint headpost was performed aseptically and under general anesthesia. Animals were deeply anesthetized with sodium thiopental (8 mg/kg/h, i.v.). Heart rate, respiratory rate, and body temperature were monitored. Body temperature was controlled with a heating pad. In all animals, a metal head-holder and a steel recording chamber 20 mm in diameter were implanted on the skull using Refobacin Placos R (Merck) neurosurgical cement. The implanted recording chamber was oriented for access to the posterior part of SPL of both hemispheres. Analgesics were administered postoperatively (ketorolac tromethamine, 1 mg/kg i.m. immediately after surgery, and 1.6 mg/kg i.m. on the following days) to minimize pain or discomfort.
Constant-Gaze Reaching Task
All reaching tasks were performed in darkness to avoid visual feedback from the arm movement, as well as other visual stimulation. A background light was switched on for a few minutes after each block of target-location triplets to avoid dark adaptation. To further minimize the role of vision during reaching, LED brightness was strongly reduced. Arm movement tasks were executed with the contralateral limb, while gaze fixation was maintained on the central, straight-ahead position (Fig. 1A). As shown in Figure 1A, reaching movements started from a fixed position (home button, 2.5 cm in diameter) outside the animal's field of view, 5 cm in front of the chest on the midsagittal plane, and were directed to targets located in different spatial positions on a fronto-parallel panel. Each target was formed by a central green/red light-emitting diode (LED) encircled by a circular ring (12 mm in diameter, 4.8° of visual angle) illuminated by a yellow LED. The target was mounted on a microswitch embedded in the panel. The bicolor LED instructed the animal where to direct and maintain gaze fixation (fixation target), whereas the yellow LED instructed the animal where to reach (reaching target).
The targets and the fixation points were placed on a panel located 14 cm in front of the monkey (Fig. 1A). The panel had 3 reaching targets distributed along a line, each 3.7 cm (15.4°) apart. The fixation point was placed at eye level, directly in front of the animal.
The time sequence of the reaching task is shown in Figure 1B. A trial began when the monkey pressed the home button. At this stage, the animal was free to look around and was not required to perform any eye or arm movement. After 1000 ms, the central LED lit up (green LED), signaling the monkey to gaze at the fixation point and to maintain the button press while awaiting the instructional cue. After a delay of 1000–1500 ms, the yellow LED in one of the 3 target positions was illuminated for 150 ms, cueing the target for the upcoming reaching movement. The monkey then had to wait an additional 1000–1500 ms for a change in color of the fixation LED (green to red) without performing any eye or arm movement. The color change of the fixation target was the go-signal for the monkey to release the home button, perform an arm movement toward the reaching target, and press it. The animal held its hand on the reaching target until the fixation LED switched off (after 800–1200 ms). The offset of the fixation LED instructed the monkey to release the reaching target and press the home button again, to be rewarded and start another trial.
An electronic window control on eye movements forced the monkeys to fixate on the LED from LED onset (i.e., well before the go-signal for the outward reach) until offset (which cued the return reach). If fixation was broken during this interval, trials were interrupted on-line and discarded.
Correct performance of reaching movements was detected via monopolar microswitches (RS Components, UK) mounted under the home button and the reaching targets. Button presses/releases were recorded with 1 ms resolution (Kutz et al. 2005).
The 3 different target positions were tested in a randomized sequence of trials (45 each for 2 animals, and 30 for the third). No other visual stimuli were presented during the premovement, reaching execution or hand holding phases.
Other Reaching Tasks
Several neurons tested during the Constant-gaze Reaching Task (Fig. 2A) were also evaluated using variants of the task named the Foveal Reaching Task (Fig. 2B) and the Constant-reach Reaching Task (Fig. 2C). In the Foveal Reaching Task, the fixation target was always coincident with the reaching target. Therefore, reaching movements were always directed toward foveated targets, and the eye-centered coordinates of the reaching target (eye-target coordinates) remained constant throughout the task. Reaching/fixation targets were located straight ahead of the monkey, or 15.4° to the right or left.
Whenever possible, cells were also evaluated in the Constant-reach Reaching Task (Fig. 2C). In this task, the reaching target was held constant, whereas the fixation point could be to the left of, centered on, or to the right of the reaching target (same positions as in the 2 previous tasks). The time sequence, LED position, task control, and other parameters were exactly as described for the Constant-gaze Reaching Task. Keeping the reaching target constant kept the direction of reaching movements constant, and thus precluded cell modulations resulting from the direction of arm movement. This allowed us to manipulate the eye-target (retinotopic) coordinates of the target, while keeping the spatial position of the reaching target constant.
Time sequences of the Foveal Reaching Task and the Constant-reach Reaching Task were as described for the Constant-gaze Reaching Task (Fig. 1B).
Eye/target configurations corresponding to all possible combinations tested in the 3 tasks are described as follows (Fig. 2): reach target to the left or to the right of eye position (LT or RT; Constant-gaze Reaching Task), reach target and eye positions superimposed at the left or at the right (LTE or RTE; Foveal Reaching Task), eye to the left or to the right of reach target (LE or RE; Constant-reach Reaching Task), reach target and eye position superimposed at the central position (TE; Constant-gaze Reaching Task, Foveal Reaching Task, and Constant-reach Reaching Task).
No results from the Constant-gaze Reaching Task have been described previously. We compared results from this task with the activity of neurons recorded in previous experiments. Neurons from previously described tasks each accounted for fewer than 10% of the neurons included here (Fattori et al. 2005: 24 of 337, 7%; Marzocchi et al. 2008: 29 of 337, 9%).
Data analysis was performed trial-by-trial. For each trial, we examined whether the neural discharge correlated with eye position and/or target/arm-movement position. As shown in the bottom part of Figure 1B, each of the "functional" epochs defined in the present work was computed trial by trial using the yellow cue, the go-signal, and the button and target presses and releases as time markers. The VIS and MEM epochs are referred to as "premovement epochs," whereas MOV and HOLD are "action epochs." The epochs were:
- FREE: from the beginning of the trial to the illumination of the LED (this epoch was used as a reference period);
- VIS: period where possible transient visual responses to the target illumination could be evoked (from 40 to 150 ms after yellow cue offset);
- MEM: delay period between the target illumination and the go signal (from 300 ms after yellow cue offset to the go signal for reaching movement);
- MOV: period where activity evoked by reaching execution could be detected (from 200 ms before home button release to the end of movement to the target);
- HOLD: period of static hand position on the reached target (from the end of the forward reach (button press) to the offset of the fixation target).
Only those neurons in which we could perform the quantitative analysis in at least 7 trials for each target position were included in the final analysis, as proposed by Snedecor and Cochran (1989) and detailed in Kutz et al. (2003). For cells tested with different tasks, we used the interspike interval plot to ensure that changes in neuronal modulations were not caused by simply losing isolation of the neuron. Additionally, to ensure that the recording situation did not change across the 3 different tasks, we compared activity during epoch FREE in the central position tested in the 3 tasks by applying Student's t-test (P < 0.05). Only cells conforming to all of the above criteria were included in the analysis.
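As a minimal sketch of these inclusion criteria (at least 7 trials per target position, and a baseline that does not change significantly across tasks), the following Python fragment illustrates the logic; the function name, threshold constant, and toy data are ours, not the authors' analysis code.

```python
# Hypothetical sketch of the cell-inclusion criteria described above:
# at least 7 trials per target position, and FREE-epoch (baseline) activity
# that does not differ significantly across tasks (Student's t-test, P < 0.05).
import numpy as np
from scipy import stats

MIN_TRIALS = 7  # minimum trials per target position (Snedecor and Cochran 1989)

def include_cell(trials_per_target, free_epoch_task_a, free_epoch_task_b):
    """Return True if the cell meets both inclusion criteria."""
    if any(len(t) < MIN_TRIALS for t in trials_per_target):
        return False  # too few trials for a reliable estimate
    # baseline stability: no significant change in FREE-epoch firing across tasks
    _, p = stats.ttest_ind(free_epoch_task_a, free_epoch_task_b)
    return bool(p >= 0.05)

# a cell with only 3 trials per target is rejected regardless of baseline
rejected = include_cell([np.ones(3)] * 3, np.ones(10), np.ones(10))
```

The interspike-interval check for isolation stability described above is a visual screening step and is not modeled here.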
One-way ANOVA (factor: target position; dependent variable: activity during VIS, MEM, MOV, and HOLD) was used to compare neural activity within epochs in the Constant-gaze Reaching Task (except where noted, the significance level for all statistical tests was α = 0.05).
For neurons significantly influenced by target position during individual epochs (VIS, MEM, MOV, and HOLD), we assessed neural spatial preferences with a Bonferroni post hoc correction. As we recorded data in both hemispheres, target positions are represented in terms of ipsi- and contralateral hemifield with respect to the recording side. The incidence of cells preferring a specific spatial sector was compared using the χ2 test. For those cells with significant spatial preference, we evaluated whether each neuron preserved its spatial tuning as the task evolved, that is, whether the target position evoking the largest neural response was consistent across consecutive epochs.
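The per-epoch tuning statistics can be sketched in Python; the firing rates below are simulated and the variable names are ours — only the test sequence (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) follows the text.

```python
# Sketch of the per-epoch analysis: one-way ANOVA across the 3 target
# positions, then Bonferroni-corrected pairwise comparisons for modulated
# cells. Data are simulated; this is not the authors' code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# trial-by-trial firing rates (spikes/s) in one epoch, one array per target
rates = {
    "left": rng.normal(12, 3, 15),
    "center": rng.normal(13, 3, 15),
    "right": rng.normal(22, 3, 15),  # simulated contralateral preference
}

f_stat, p_anova = stats.f_oneway(*rates.values())
modulated = p_anova < 0.05  # factor: target position

if modulated:
    pairs = [("left", "center"), ("left", "right"), ("center", "right")]
    alpha_corr = 0.05 / len(pairs)  # Bonferroni correction
    for a, b in pairs:
        _, p = stats.ttest_ind(rates[a], rates[b])
        print(f"{a} vs {b}: p = {p:.4f}, significant = {p < alpha_corr}")
```

The χ2 comparison of preference incidences across the population would follow the same pattern with `stats.chisquare` on the cell counts per spatial sector.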
Effect of Eye/Target Combinations on Spatial Tuning of V6A Neurons
To better understand the tuning properties of V6A cells tested in all 3 tasks shown in Figure 2, we assessed whether or not spatial preference for target side, and the specific eye/target combination that evoked the greatest response, was preserved in each neuron across the 3 types of task. We analyzed peripheral targets, because the central targets were common to all tasks, for a total of 6 variables for each neuron. Student's t-test was carried out to assess the influence of target side between each pair of conditions (LT vs. RT; LTE vs. RTE; LE vs. RE).
Reference Frame Analysis
The opportunity to have different combinations of gaze and target position led us to consider whether V6A neural activity encodes reaching movements according to the position of the target in extrinsic space (space-related), or to the relative position of gaze and target (eye/target-related). For those neurons recorded under all 3 task conditions, and for each epoch, we quantified the response of each cell to the location of the reaching target in space, and to the relative position of gaze and target, and then compared the 2 sensitivities. For this comparison, we computed the normalized Euclidean distance between neural responses for task conditions with the same spatial configuration (same spatial location) versus those with the same eye/target configuration (same relative position of eye and target).
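The equation itself is not reproduced in this excerpt. As a hedged sketch of the comparison, assuming mean-rate response vectors per condition and a sum-of-norms normalization of our own choosing (the paper's exact normalization may differ):

```python
# Hedged sketch of the reference-frame comparison: Euclidean distance between
# mean responses in conditions sharing the same spatial target location versus
# conditions sharing the same eye/target (retinotopic) configuration.
import numpy as np

def normalized_distance(resp_a: np.ndarray, resp_b: np.ndarray) -> float:
    """Euclidean distance between two response vectors, scaled by summed norms."""
    d = np.linalg.norm(resp_a - resp_b)
    scale = np.linalg.norm(resp_a) + np.linalg.norm(resp_b)
    return float(d / scale) if scale > 0 else 0.0

# toy per-epoch mean responses (spikes/s):
# RT and RTE share the spatial target location (right side of the workspace);
# RT and LE share the eye/target relation (target right of the fovea).
resp = {"RT": np.array([20.0]), "RTE": np.array([19.0]), "LE": np.array([6.0])}

d_same_space = normalized_distance(resp["RT"], resp["RTE"])
d_same_retino = normalized_distance(resp["RT"], resp["LE"])

# a smaller distance across same-space conditions suggests a space-related
# cell; a smaller distance across same-eye/target conditions, an
# eye/target-related cell
label = "space-related" if d_same_space < d_same_retino else "eye/target-related"
```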
Results

We recorded the activity of V6A neurons in 3 animals that performed a Constant-gaze Reaching Task, designed to assess the effect of target position and arm-movement direction while keeping the gaze in a fixed position. A total of 337 tested cells fulfilled the requirements for quantitative analysis (see Materials and Methods).
Quantitative evaluation of neural modulation showed that target position had significant effects on neural activity in all time epochs. Firing rates were modulated in 37% of neurons (125 of 337) during the VIS epoch, in 54% of neurons during MEM (183 of 337) and MOV (182 of 337), and in 50% of cells (169 of 337) during HOLD (one-way ANOVA and Bonferroni post hoc test, P < 0.05). These percentages include neurons presenting single and multiple modulations during consecutive epochs (Table 1). Examples of responses in the Constant-gaze Reaching Task are shown in Figure 3. The neuron in Figure 3A displayed a strong visual response when the left target was illuminated, but this response disappeared when the target was illuminated in the central or right positions. As about 60% of V6A neurons show a visual receptive field (Galletti et al. 1999), this behavior presumably reflects stimulation of the visual receptive field of the neuron when the target to the left of the fixation point was illuminated.
| Modulation pattern | VIS | MEM | MOV | HOLD |
| --- | --- | --- | --- | --- |
| Single + multiple modulations | 125 (37%) | 183 (54%) | 182 (54%) | 169 (50%) |
| Modulations in one epoch only | 21 (6%) | 20 (6%) | 12 (3%) | 26 (8%) |
| Modulations in 2 consecutive epochs | VIS/MEM | | | |
The cell in Figure 3B responded across multiple trial intervals. The discharge started at the onset of the cue signal (VIS; Fig. 3B, right panel), and was maintained tonically for the entire duration of the planning phase and reaching movement (MEM and MOV epochs), extending well beyond the duration of the visual stimulus. This cell showed a clear spatial preference, discharging strongly for rightward, but not leftward, movements.
Neurons modulated during action epochs (MOV and HOLD) exhibited consistent spatial modulation during multiple epochs. For example, the cell in Figure 3C was modulated during planning and execution of reaching movements, with a preference for rightward movement. The planning activity of this neuron followed the same spatial preference as the reaching activity, discharging strongly after cue presentation when the animal was preparing the rightward movement. The neuron in Figure 3D was activated maximally when the animal moved the arm toward the central and right target positions. The same cell discharged tonically during hand holding on the target, in particular for the target located to the right. The neuron in Figure 3E was maximally activated for holding the hand rightward, and was almost silent during the other epochs of the trial.
The majority of neurons studied responded during 2 or more sequential epochs (Table 1). Cells responding in both VIS and MEM represented 25% (85 of 337) of the cell population, whereas 18% (62 of 337) were modulated in VIS, MEM, and MOV (Fig. 3B). Approximately 39% (133 of 337) of the cell population was tuned in both MEM and MOV, and 34% (115 of 337) in both MOV and HOLD. Cells discharging only during the VIS or MEM epochs each represented 6% (21 of 337 and 20 of 337, respectively) of the entire V6A population, while those modulated only during MOV represented 3% of cells (12 of 337), and those only during HOLD 8% (26 of 337).
It is evident from the previous description that the majority of V6A neurons encoded the reaching space in a way that required a multimodal integration of visual and motor information. In contrast, only a few cells showed a response restricted to one epoch, and consequently responded to a single type of information.
Figure 4A summarizes the distribution of preferred retinal locations for the reaching target across all the epochs of interest. Neurons were included in the distribution only if their activity in the preferred position was significantly different (Student's t-test with Bonferroni post hoc correction) from the other positions. The upper left part of Figure 4A shows that there was a significant spatial preference during VIS for contralateral positions across all cells (χ2 test, P < 0.05), an expected result given that V6A exhibits a preference for the contralateral visual hemifield (Galletti et al. 1999). No significant laterality effect was observed during the MEM or MOV epochs (χ2 test, n.s.). In contrast, ipsilateral targets were over-represented during HOLD (χ2 test, P < 0.05), with the majority of V6A neurons responding maximally while the hand was on a target in the ipsilateral hemifield. Discharges of V6A neurons during planning and movement epochs were not spatially correlated with visual responses (Fig. 4A), consistent with descriptions of saccade-related activity in LIP neurons (Colby et al. 1996).
Notably, a common response property during all task phases was the large number of cells preferring peripheral, as opposed to foveal, targets. During the VIS and HOLD epochs, encoding of foveal targets was observed in 15% and 27% of cells, respectively. This was significantly smaller than the fraction of cells encoding the 2 peripheral positions (see Fig. 4A, χ2 test, P < 0.05). In MEM and MOV, the number of cells preferring foveal targets, although somewhat smaller, was not significantly different from the number of cells that preferred peripheral targets (48% and 46%, respectively; see Fig. 4, χ2 test, P > 0.05).
Considering that most V6A cells were modulated across different task epochs, we checked the consistency of spatial tuning across epochs. Figure 4B shows the percentage of cells that maintained (black) or changed (white) spatial preference across consecutive epochs. The majority of V6A cells retained the same spatial preference across epochs, particularly for MEM and MOV, that is, during planning and execution of the arm movement, as shown by the individual examples in Figure 3.
Do V6A Neurons Encode the Position of Target in Space or the Position of Target Relative to the Gaze?
The patterns of firing rate modulation described thus far could be due to the spatial position of the reaching target, or to the relative position of target with respect to the position of gaze. To help resolve between these alternatives, we compared cells tested with the Constant-gaze Reaching Task, the Foveal Reaching Task, and the Constant-reach Reaching Task (see Materials and Methods; Fig. 2A–C).
The 2 exemplar neurons in Figure 5A,B are representative of 2 neural patterns that largely reflect how the different eye/target combinations influenced neuronal discharges. The neuron shown in Figure 5A was spatially modulated during both premovement and reaching phases, showing the largest responses when the animal performed the 2 non-foveal tasks and reached for the target to the left of the gaze position (top-left and bottom-right panels; conditions LT and RE, respectively). This neuron was representative of the group of cells whose activation strongly depended on a specific eye/target arrangement during execution of the tasks. Conversely, the neuron depicted in Figure 5B exhibited similar discharges for left and right eye/target combinations during the MOV epoch within the same task. This cell responded most strongly during reaching movements executed toward the central target, while gaze was fixated either to the left or to the right (bottom-left and bottom-right panels; conditions LE and RE, respectively). In this case, the neural response did not change according to the target's spatial side or the gaze/target relative position, but was instead based on the nature of the task.
For those neurons exhibiting significant spatial modulation in at least one of the tasks (N = VIS: 71 of 113; MEM: 87 of 113; MOV: 83 of 113; and HOLD: 79 of 113), we assessed response strength across all 6 conditions (LT, RT, LTE, RTE, LE, and RE). The responses of the entire population could be broadly divided into 2 patterns of activity. The majority of cells exhibited firing rate modulation for a single eye/target combination across all epochs (Fig. 5C, black) (VIS: 46 of 71; MEM: 61 of 87; MOV: 55 of 83; HOLD: 55 of 79). This group included cells where left and right eye/target arrangements evoked different, and in some cases opposite, patterns of neural activation (side-dependent cells).
The second, smaller group (VIS: 25 of 71; MEM: 26 of 87; MOV: 28 of 83; HOLD: 24 of 79) included neurons whose modulation was similar on both the left and right sides (task-dependent cells; Fig. 5C, white). Within the task-dependent cells, we identified a subgroup in which eye/target configurations on either side of the target elicited identical responses within the same task. To characterize this behavior, we compared responses of task-dependent cells across the various conditions (LT vs. RT; LTE vs. RTE; LE vs. RE; see Materials and Methods) using a t-test. In several cases, statistical comparison of each pair of two-sided conditions yielded no significant differences across all 3 situations (VIS: 4 of 25; MEM: 5 of 26; MOV: 5 of 28; HOLD: 7 of 24; t-test, P > 0.05).
These neural behaviors were also examined using Principal Components Analysis, as detailed in Supplementary Material, which yielded similar conclusions.
Different Reference Frames for Reaching
The previous analysis suggested that different reference frames could be employed by V6A neurons. The 2 frames studied in the present work are the eye/target-related and the space-related frame of reference. We use the term “eye/target-related” to describe cells whose discharge depended on the relative eye/target configuration, while “space-related” refers to cells whose discharge depended on target position (or on arm direction/position). In our experimental conditions, “space-related” indicates stability with respect to the initial position of the hand, head, and body. In Figure 6, we show 3 examples of cells highlighting the different spatial and eye/target-related configurations across the 3 tasks.
The neuron in Figure 6A exhibited stronger discharges in MOV when the target was located at the center or to the right, rather than to the left (see first row of Fig. 6A; TE and RT). However, this pattern raises the question of whether this neuron's preference was for the central and right side of the animal's workspace, or the foveal and the right part of the visual space. Considering the strong responses observed when reaches were directed rightward, but with a different eye/target configuration (Fig. 6A, RTE and RE), it seems that the neuron preferred the right working space. That is, activity during reaching was encoded in a space-related frame of reference. Consistent with this interpretation, the same cell showed weak responses when the target was to the right of the fixation point, but shifted to the left in space-related coordinates (i.e., in the right part of the visual space; Fig. 6A, LE). This, in turn, ruled out the possibility that this cell discharged according to the relative position of gaze and reaching target. Figure 6A (RE) also shows a gaze effect on the reaching discharge, which is a phenomenon particularly relevant in area V6A (Marzocchi et al. 2008). We described this cell as “space-related.” The space-related behavior of this neuron is highlighted in the color maps of normalized mean activity [below each spike density function (SDF) in Fig. 6], which indicate the eye/target combinations that evoked the largest response during the epochs of interest.
The cell shown in Figure 6B was spatially tuned in MEM and MOV, and discharged maximally when the animal planned and reached to the right of the fixation point, that is, to the right-side target in the Constant-gaze Reaching Task (Fig. 6B, RT). Comparison of cell firing in the same spatial configuration (movement to the right, Fig. 6B, RTE) and in the same eye/target-related configuration (movement to the right of the fixation point, Fig. 6B, LE) indicates a preference for executing reaches to targets located to the right with respect to the fixation point, that is, in an eye/target-related reference frame. This neuron showed negligible discharges during both planning and movement phases in all other relative eye/target configurations, and for rightward reaching when the fixation point and reaching target were in register (Fig. 6B, LTE, RTE, and TE).
The neuron in Figure 6C did not exhibit either of the modulation patterns described above. Rather, this cell showed a strong gaze-related discharge that was evident in all the epochs of interest. The only condition that evoked any discharge was direction of the gaze to the left (LTE). In addition, this neuron was also influenced by hand position, as shown by the comparison of gaze-related discharges in the LTE and LE eye/target combinations. When the hand was on the right, gaze-related discharge was less prominent. Thus, the modulation of this cell has to be described as a complex interplay between the target/hand position and the gaze, rather than a simple eye/target configuration.
We compared the difference (Euclidean distance) between response profiles collected in either the constant space position or constant eye/target configuration, in order to assess the frame of reference for reaching across the entire V6A neuronal population. In Figure 7, the 2 distances calculated for each neuron are plotted against one another. The Euclidean distance was separately calculated for neural discharges during each of the premovement and action epochs (top and bottom rows, respectively). In Figure 7, neurons that encoded target position in an eye/target-related reference frame are located below the diagonal (e.g., the cell shown in Fig. 6B; filled square). Conversely, neurons that encoded target position in space-related frame of reference are located above the diagonal (e.g., Fig. 6A; filled triangle). Filled circles indicate cells whose bootstrap-estimated confidence intervals do not cross the diagonal. Empty circles represent those neurons whose confidence intervals cross the diagonal, and therefore are not statistically distinguishable with respect to eye/target- or space-related coordinates. We refer to these neurons as “mixed cells,” to reflect that they could employ either reference frame, as in Figure 6C (filled diamond in Fig. 7).
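A minimal sketch of this bootstrap classification, under the assumption that the two distances are recomputed on trials resampled with replacement (all names, parameters, and data are illustrative, not the authors' code):

```python
# Sketch of the bootstrap classification into space-related, eye/target-related,
# and mixed cells: resample trials with replacement, recompute the two distances,
# and ask whether the confidence interval of their difference excludes zero
# (i.e., whether the cell sits reliably off the diagonal).
import numpy as np

rng = np.random.default_rng(1)

def distance_difference(trials_same_space_a, trials_same_space_b, trials_same_retino):
    """d(same spatial location) - d(same eye/target relation), from trial rates."""
    d_space = abs(np.mean(trials_same_space_a) - np.mean(trials_same_space_b))
    d_retino = abs(np.mean(trials_same_space_a) - np.mean(trials_same_retino))
    return d_space - d_retino

# toy single-trial firing rates (spikes/s) in one epoch for 3 conditions
rt = rng.normal(20, 2, 30)   # shares spatial location with rte
rte = rng.normal(19, 2, 30)
le = rng.normal(6, 2, 30)    # shares eye/target relation with rt

boot = [distance_difference(rng.choice(rt, rt.size),
                            rng.choice(rte, rte.size),
                            rng.choice(le, le.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])

if hi < 0:            # reliably smaller distance across same-space conditions
    cell_class = "space-related"
elif lo > 0:          # reliably smaller distance across same-eye/target conditions
    cell_class = "eye/target-related"
else:                 # confidence interval crosses the diagonal
    cell_class = "mixed"
```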
Mixed cells predominated in all epochs (Fig. 7), accompanied by several cells classified as eye/target-related (VIS: 10 of 71; MEM: 18 of 87; MOV: 16 of 83; and HOLD: 22 of 79). A few space-related cells were evident in VIS (2 of 71) and HOLD (4 of 79), while a higher incidence of this type was observed in MEM (10 of 87) and MOV (9 of 83). During visual encoding of target position, and during static postures on the reached target, almost all neurons were eye/target-related or showed a mixed reference frame. In contrast, while the reaching movement was being planned and performed, V6A neurons were more evenly distributed among the space-related, mixed, and eye/target-related reference frames.
We reconstructed the anatomical location of the recording sites and examined whether space-related cells, eye/target-related cells and mixed cells were spatially segregated within V6A, but found no evidence of such segregation.
It is well known that V6A cells compute gaze direction signals (Galletti et al. 1995; Hadjidimitrakis et al. 2011; Breveglieri et al. 2012) and reach direction signals (Fattori et al. 2005; Hadjidimitrakis et al. 2014b). To date, however, the computation of reach direction signals has only been demonstrated in experiments where the reaching targets were coupled with gaze (foveal reaching). Here we investigated whether V6A neurons encode arm reach direction, or the spatial position of the reaching target (which co-varies with reach direction in this task), when the gaze is not directed to the target (peripheral reaching).
We examined the encoding of the reaching target during the different temporal stages of the task, including: visual appearance of the target; reach planning and execution; and target holding. During the premovement phase, the brief visual presentation of the reaching target modulated about 37% of V6A neurons. Some cells showed a transient response to target appearance (Fig. 3A). Others (Fig. 3B) responded to target presentation and maintained their output through the remaining task epochs, even after the visual stimulus had disappeared. Spatial tuning of the sustained discharge was highly consistent across epochs (Fig. 4B). Interestingly, we observed preservation of spatial tuning in the vast majority of V6A cells during task progression from movement planning through execution. This suggests an involvement of V6A in preparing the motor act, indicating a role for this area in visuomotor processes extending from the capture of the visual target through the subsequent movement. Movement planning affected the activity of 54% of V6A cells in our sample. Note that in the nearby PRR (mostly corresponding to area MIP), the incidence of reach planning signals was about 70–80% (Chang et al. 2008; Hwang and Andersen 2012), clearly higher than in V6A. This indicates a weaker involvement of V6A in planning reaching movements relative to PRR.
The population analysis of space representation in V6A suggests a quite homogeneous encoding of working space during planning and execution of reaching movements, and an over-representation of contralateral targets during their visual presentation (Fig. 4A). Spatial preferences in the VIS epoch are consistent with the visual properties of V6A neurons, whose visual receptive fields are mainly located in the contralateral hemifield (Galletti et al. 1999; Gamberini et al. 2011). It is interesting to note that the over-representation of ipsilateral targets in HOLD (Fig. 4A) cannot be explained by the visual stimulation evoked by the hand that holds the reaching target, as the arm is in the ipsilateral visual field, which is poorly represented in V6A. Furthermore, the action-related neuronal activity occurred in complete darkness, precluding the possibility that the spatial modulation observed during the reaching task could depend on visual inputs. It is thus likely associated with other factors, such as corollary discharges from motor centers, for example, premotor cortex, which are known to be directly connected to V6A (Shipp et al. 1998; Gamberini et al. 2009). Another possibility is that spatial modulation is due to proprioceptive information from the arm, which has been demonstrated in V6A (Breveglieri et al. 2002). Consistent with this possibility, optic ataxia patients, who have lesions likely involving the human homolog of macaque V6A (see Battaglini et al. 2002; Karnath and Perenin 2005; Pitzalis et al. 2013), showed deficits in extracting the spatial location of the ataxic (contralesional) hand from multijoint proprioceptive information, and showed pointing errors when proprioceptive pointing movements were performed with the same ataxic hand (Blangero et al. 2007).
In this study, and in previous work from our laboratory (Marzocchi et al. 2008), we have shown that V6A neurons prefer reaches directed to peripheral, rather than foveal, targets (see Fig. 4A). Preference for peripheral reaching has also been reported in a region of the human brain (Prado et al. 2005) that likely includes the human homolog of monkey area V6A (Pitzalis et al. 2013). Additional evidence for this preference comes from the work of Beurze et al. (2010), who demonstrated that in posterior parietal cortex and dorsal premotor cortex, planning peripheral reaches is metabolically more costly than planning foveal reaches.
Frames of Reference for Reaching in Area V6A
In the present work, we investigated whether V6A neural activity encoded the spatial location of the reaching target, or its position with respect to the gaze (i.e., retinotopic, or eye-centered position). Because the spatial positions of the target were always in register with respect to the world, and the monkey's head and body, these 3 frames of reference (world-centered, head-centered, body-centered) could not be studied separately here, and we refer to them collectively as the “space” frame of reference. In addition, in our experimental conditions, we could not distinguish between target position and reach direction, as they co-varied when the monkey performed a given reaching movement from a fixed starting hand position. Although the reaching positions tested in the present work provide an effective estimate of the complex relationship between these spatial variables, we are aware that, given the limited number of positions tested here, further studies will be required to fully characterize the spatial selectivity of V6A neurons across the entire reachable space.
The Constant-gaze Reaching Task alone does not allow us to determine whether the discharge pattern of a cell was the result of different spatial positions of targets in space, or of different relative positions of the reaching target with respect to the gaze. This ambiguity results from the fact that, each time the reaching target changed position with respect to the gaze, it also changed position in space. However, in the present work, we compared reaching activity in the same neuron across 3 tasks with different eye/target configurations (Fig. 2). This allowed us to determine whether the neural modulation was due to the change of target position in space, or to the change in the relative position of the target with respect to gaze.
The evaluation of the eye/target configuration evoking the best response highlighted 2 patterns of reaching target encoding, and hence we describe 2 types of neurons: side-dependent and task-dependent cells (Fig. 5). Side-dependent cells comprise the majority of the population, exhibiting complex behavior in which spatial preference changed or inverted across task conditions. The side-dependent group also includes neurons whose discharge depended on the relative position of eye and reaching target, and neurons with neural behaviors intermediate between eye/target- and space-related encoding of reaching.
Cells in the task-dependent category were far less common in V6A. This group includes cells with similar response profiles during reaching toward the left or right target positions. Task-dependent cells were modulated only by the nature of the task. Thus, we can assert with confidence that these cells present a balanced encoding of both peripheral target positions. These neurons may therefore implement a “straightforward” coding of reaching space in absolute coordinates.
Galletti et al. (1993) reported the presence of “real position” cells in V6A, in which visual receptive field did not shift with gaze. In that experiment, animals did not perform any arm movements and the authors inferred that these neurons directly encoded visual space in craniotopic coordinates (Galletti et al. 1993, 1995). The present data suggest that V6A could extend the “real position” concept to the motor-related activity of these task-dependent cells.
We applied a bootstrap analysis to characterize the frames of reference used by V6A neurons for reaching (Fig. 7). This approach showed that the majority of V6A neurons exhibited “mixed” reference frames, that is, they did not have a unique frame of reference organizing their activity. Cells with this response profile employed both space-related and eye/target-related frames of reference over the different epochs. It is worth noting that pure space-related encoding was rare relative to the other 2 categories (eye/target-related and mixed). It remains to be determined whether the response pattern of “mixed” cells represents an intermediate step between space- and eye/target-related frames of reference, or a genuinely different way of encoding reaching targets.
Comparison with Nearby Areas and Functional Suggestions
Our results differ in several respects from observations reported for nearby reach-related areas of superior parietal cortex. For instance, reaching activity in V6A is not predominantly organized in the eye-centered frame of reference, in contrast to area PRR (Snyder et al. 1997; Batista et al. 1999), and it is not organized in global tuning fields as reported for area PEc (Lacquaniti et al. 1995; Battaglia-Mayer et al. 2003). Our data are consistent with recent work showing that neurons in area 5 and MIP do not fall into the rigid schema of eye- or space-related tuning, but rather incorporate complex and heterogeneous reference frames (McGuire and Sabes 2011). Our data are also in line with a very recent description of reference frames for reaching in area PRR, which range from eye-related to hand-related, and include many cells exhibiting an intermediate representation (Chang and Snyder 2010). Hybrid representations similar to those in V6A are also found in areas of the intraparietal sulcus, including LIP and MIP (Mullette-Gillman et al. 2009): when monkeys performed visual and auditory saccades from different initial eye positions, both visual and auditory signals in these areas reflected a hybrid between head- and eye-centered coordinates during the execution of eye movements. Mixed reference frames in V6A were also described in recent work where the initial hand position was varied in foveal reaching toward targets located in different parts of the 3D peripersonal space (Hadjidimitrakis et al. 2014a). In that study, most V6A cells encoded targets in mixed body-centered and hand-centered coordinates, while several cells encoded targets in exclusively body-centered coordinates.
The high incidence of V6A neurons with mixed frames of reference suggests that this area represents a stage in the visuomotor process where spatially integrated information about the reaching target and arm movement parameters could be transferred to premotor areas. Anatomical evidence indicates that V6A is directly connected with the dorsal premotor area 6 (Matelli et al. 1998; Shipp et al. 1998; Gamberini et al. 2009). Moreover, many studies have reported that, during movement planning, neurons in dorsal premotor cortex are influenced by both eye-position and spatial coordinates of reaching targets, as we observed in V6A neurons (Pesaran et al. 2006, 2010; Batista et al. 2007). The interactions between V6A and the dorsal premotor cortex could allow coordinate transformations for correct arm movement control.
The presence in V6A of neurons employing different coordinate systems for reaching (eye/target-related coordinates, spatial coordinates, mixed representations) may also reflect the many types of signals (visual, proprioceptive, and motor-related) processed in this cortical area (Breveglieri et al. 2002; Bosco et al. 2010). Different eye/target configurations likely generate different sensory inputs, and the interplay between eye-related, visual, and somatosensory information within the V6A neural population could facilitate task-dependent reweighting of sensory signals to create a mixed frame of reference, both as the action is planned and while it is executed.
The coordinate systems described above are consistent with recent computational models of reach planning, which suggest that reaching movements are planned using a task-dependent, weighted combination of polymodal sensory information (see Ernst and Bulthoff 2004 for a review; Sober and Sabes 2005; McGuire and Sabes 2009). This hypothesis is also supported by the observation that optic ataxia patients are unable to simultaneously represent visual information defined across different frames of reference (Jackson et al. 2009). An intact V6A, the area presumably damaged in optic ataxia patients (Karnath and Perenin 2005), could help in forming such a cross-reference frame representation.
Our data suggest that reach-related neurons in V6A code reaching actions using several different reference frames: space-related, eye/target-related, and mixed. The multiplicity of V6A reference frames could emerge through concatenation of dynamically specific reference frames used to plan and guide reaching, which in turn could create a hybrid representation suitable for the specific demands of particular tasks (see also Jackson et al. 2009). The mixed reference frames observed in V6A could be due to the different reference frames in which the various sensory signals that reach V6A are encoded (i.e., eye/target-related for visual inputs and space-related for proprioceptive inputs). To use these signals, transformations between reference frames are necessary, but this adds bias and variability (Sober and Sabes 2005; Schlicht and Schrater 2007). The presence of a mixed representation of target position during reach planning and execution reduces noise derived from sensory transformations (Deneve et al. 2001; Avillac et al. 2005; McGuire and Sabes 2009, 2011). The use of a control system that simultaneously represents different kinds of reference frames, as we observed in mixed neurons in V6A, could reduce such variability and facilitate reliable motor performance. The target location can be encoded in different types of V6A cells by integrating visual information with ongoing skeleto-motor signals concerning the current state of the arm, and corollary discharges of arm movement execution, all of which are available in V6A (see also Bosco et al. 2010; Gamberini et al. 2011). This is consistent with the view that V6A is a node of the dorso-medial visual stream involved in online control of movements.
This work was supported by EU FP7-IST-217077-EYESHOTS, by Ministero dell'Università e della Ricerca (Italy) and by Fondazione del Monte di Bologna e Ravenna (Italy).
We thank G. Placenti for setting up the experimental apparatus, E. Chinellato for help with the PCA analysis, M. Reser for proofreading assistance, and M. Gamberini for histological reconstructions of penetrations. Conflict of Interest: None declared.