Neurons in the posterior parietal cortex respond selectively to spatial parameters of planned goal-directed movements. Yet, it is still unclear which aspect of the movement the neurons encode: the spatial parameters of the upcoming physical movement (physical goal), or those of the upcoming visual limb movement (visual goal). To test this, we recorded neuronal activity from the parietal reach region while monkeys planned reaches under either normal or prism-reversed viewing conditions. We found predominant encoding of physical goals during planning, while fewer neurons were selective for visual goals. In contrast, local field potentials recorded in the same brain region exhibited predominant visual goal encoding, similar to previous imaging data from humans. The visual goal encoding in individual neurons was related neither to immediate visual input nor to visual memory, but to the future visual movement. Our findings suggest that action planning in parietal cortex is not exclusively a precursor of impending physical movements, as reflected by the predominant physical goal encoding, but also contains spatial kinematic parameters of the upcoming visual movement, as reflected by co-existing visual goal encoding in neuronal spiking. The co-existence of visual and physical goals adds a complementary perspective to the current understanding of parietal spatial computations in primates.
The posterior parietal cortex (PPC) is situated between sensory and motor cortices. It is implicated in a number of higher cognitive functions, such as spatial attention (Bisley and Goldberg 2010), decision making (Gold and Shadlen 2007), and planning of visually guided movements (Andersen and Buneo 2002; Battaglia-Mayer et al. 2003; Crawford et al. 2011). Neurophysiology research on action planning in PPC has mostly focused on the integration of sensory input from different modalities for defining action goals (Snyder et al. 1997; Battaglia-Mayer et al. 2000, 2001; Buneo et al. 2002; Medendorp et al. 2005; Fattori et al. 2012; Bosco et al. 2015). Over the years, the view has emerged that PPC encodes movement goals in different spatial frames of reference (Batista et al. 1999; Buneo et al. 2002; Chang and Snyder 2010; McGuire and Sabes 2011; Hadjidimitrakis et al. 2013). Yet, from these previous studies, it remains ambiguous whether movement goal signals in individual PPC neurons reflect the physical aspect of the planned movement (physical goal), or the visual aspect of the planned movement (visual goal).
Physical goal encoding would imply that motor goal signals are signatures of physical motor preparation. In contrast, visual goal encoding would imply that action planning does not reflect the future physical motor execution per se, but rather the visual aspect of the upcoming movement kinematics. While physical and visual goals mark two distinct aspects of action planning, the visual and physical movement parameters in an everyday scenario are normally congruent. They can be experimentally dissociated by manipulating visual cursor feedback about the movement (Shen and Alexander 1997; Eskandar and Assad 1999; Ochiai et al. 2002; Schwartz et al. 2004), or by having subjects watch their own movement through optical reversing prisms (Fernandez-Ruiz et al. 2007). Previous studies provide evidence that premotor and parietal areas are selective for visual parameters that correlate with planned or executed movements. For example, neural activity in ventral premotor cortex was shown to reflect the visualized trajectory of circular arm movements, while primary motor cortex reflected the actual physical arm trajectories (Schwartz et al. 2004). In monkey single-unit recordings, visual movement encoding has so far been shown in premotor cortex during motor planning (Shen and Alexander 1997; Ochiai et al. 2002) and during motor execution (Schwartz et al. 2004) while, in parietal cortex, it was only shown during motor execution (Eskandar and Assad 1999). During motor planning, hemodynamic responses in human PPC were shown to correlate better with the visual rather than with the physical goals (Fernandez-Ruiz et al. 2007). However, these previous studies did not spatially dissociate the visual instruction stimulus from the visual goal representation. For example, in a typical delayed or memory-guided reach task, used to analyze movement planning activity, the reach or cursor target is indicated to the subjects by a visual cue preceding the movement planning period.
Thus, there is the possibility that the observed spatial encoding during planning was influenced by the visual memorization of the preceding instruction stimulus, rather than reflecting visual motor goals proper. Therefore, the question remains whether motor planning in PPC single-unit activity is associated with encoding of visual goals or physical goals, independent of visual memory. The answer to this question is not obvious, since the existing data on visual goal encoding in PPC is based on human fMRI (Fernandez-Ruiz et al. 2007). First, neural mass signals and, even more so, hemodynamic signals suffer from averaging effects over the large numbers of potentially idiosyncratic single-neuron coding properties, and the relationship between neural spiking activity and hemodynamic responses is particularly complex (Logothetis and Wandell 2004; Logothetis 2008). Second, the low temporal resolution of hemodynamic responses makes it particularly difficult to dissociate memory related activity from preceding visual stimulus related activation.
We directly tested whether the sustained single-unit activity in the parietal reach region (PRR) contains information about visual and/or physical aspects of a planned action. To this end, we used a reversing-prism task that dissociated physical goals from visual goals in a center-out reach paradigm. We additionally confirmed that the observed motor goal representations could be explained neither by visual memory encoding nor by selectivity for direct visual input. For this, we combined the reversing-prism task with an anti-reach task in which the monkey reached either toward the memorized visual location or to the location symmetrically opposite to it, under either the normal or the prism view of his fingertips. Our reversing-prism study at the single-neuron level adds a complementary perspective on PPC encoding, which is independent of previously described gaze-centered or body-centered spatial reference frames. Also, by analyzing local field potentials (LFPs) from the same recordings as the single-unit data, we can narrow the gap between existing findings from human imaging and monkey neurophysiology.
Materials and Methods
The technical details of the apparatus and procedures were described previously (Westendorff et al. 2010), except for the reversing-prism optics and the optical motion tracking which will be described below. All experimental procedures were conducted in accordance with European Directive 2010/63/EU, the corresponding German law governing animal welfare, and institutional guidelines.
Two rhesus monkeys participated in the main experiment, and one of them participated in an additional control experiment. Both monkeys performed memory-guided center-out reaches with their preferred (left) hand on a fronto-parallel touch-screen, under monocular reversed vision (Dove prism PS992, Thorlabs, Germany) in a dimly lit room. The monkeys looked through a square aperture with their left eye (Fig. 1A), while the view from the right eye was occluded by a piece of black cardboard. The viewing distance of the screen was about 40 cm. The large distance helped to maximize the visual workspace on the screen seen through the aperture (10 × 10 cm square) and to minimize the variability in arm postures across trials and task conditions (see below). Additionally, we mounted an opaque board horizontally above the monkey's arm, which blocked the view of most of the lower visual field, except for the space near the touch screen immediately below its horizontal midline. The monkeys conducted their reach movements below this board, and could not see their reaching arm except for the most distal part of the fingers (1–2 cm) during the times when they extended beyond the board to touch the screen. All reach targets were located along the horizontal midline on the screen. The aperture contained either no prism (normal trials) or the Dove prism (prism trials), but provided the same field of view in either case. The mirroring axis of the prism was aligned with the center of the screen and thus with the ocular fixation spot (see below). Thereby, we ensured the precise alignment of the visual field with and without prism by controlling that the gaze direction during fixation stayed constant across both conditions.
Gaze direction was monitored via eye-tracking of the right eye (224 Hz infrared charge coupled device camera, ET-49B, Thomas Recording, Germany). A light-emitting diode was attached to the monkeys' fingertips (middle finger), to allow recording of reach movement trajectories. Reach trajectories were recorded as 3D position data at 200 Hz sampling frequency by an infrared optical motion tracking system (VZ 4000, PTI, Canada). In the two datasets in which we recorded the prism task without the pro/anti comparison (see below), we could only register reach endpoints. For this we used the position data from the transparent touch sensitive panel (IntelliTouch, ELO Systems, CA, USA) which was mounted in front of the display screen (LCD VX922, ViewSonic) in all experiments. The spatial resolution of both hand position recording systems is in the submillimeter range.
We conducted two experiments. In the first experiment (reversing-prism task), two monkeys (F & S) performed center-out reaches under either a normal or a prism viewing context. This task allowed us to dissociate the visual movement goal from the physical movement goal (Fig. 1A). In the second experiment (reversing-prism anti-reach task), we combined the reversing-prism task with an anti-reach task to confirm that the visual goal encoding that we observed in the first experiment (see Results) was not confounded by visual memory encoding (Fig. 1B). The combined reversing-prism anti-reach task [2 contexts (prism/normal) × 2 rules (pro/anti)] was performed by one monkey (S). Monkey F had to be excluded from the study for reasons unrelated to the experiment. In total, we recorded three independent datasets from two monkeys in this study.
Each trial started with a variable-length fixation period, followed by 0.2 s of visual cue presentation (reversing-prism task: spatial cue; reversing-prism anti-reach task: spatial cue and rule cue), and then a 1–2 s variable delay period during which the visual cues were absent (Fig. 1C). During these time periods, the monkeys had to keep both eye and hand fixation at the center of the screen (tolerance of 2 cm, or 2.9° of visual angle). Center-out reaches were made to peripheral targets with an eccentricity of 5 cm (7.1° visual angle, tolerance of 2 cm) in response to the disappearance of the central hand fixation spot (“go” signal). The monkeys received liquid reward for correct trials. Fingertip movements were continuously optically tracked to rule out on-line movement reversals (see also Supplementary S2). The monkeys had to keep ocular fixation on a small central spot throughout the trial.
The pro/anti task rules, relevant only in the reversing-prism anti-reach task, were indicated to the monkey by the color of a central frame surrounding the eye and hand fixation spots (green: pro rule; blue: anti rule). The pro rule required the monkey to reach toward the memorized visual cue position; the anti rule required him to reach to the side opposite the visual cue location. The prism (vs. normal) trials, relevant in both experiments, were not specifically instructed. However, the monkeys could in principle distinguish the prism from the normal viewing context by visually noticing when the prism was in the aperture. The reversed visual feedback about their hand movements during acquisition of the hand fixation point at the beginning of a prism trial could also have served as a cue that the prism was in place.
Importantly, despite the reversed vision of the fingertips, the monkeys had the same arm postures during the planning phase between the prism and the normal viewing contexts. This was encouraged by a large screen-body distance so that monkeys had to keep the arm stretched to reach the screen (i.e., little freedom on the elbow angles to allow different arm postures). To confirm this, we video-recorded the arm of the monkey over several sessions. We determined the position of the lower arm (near elbow) by comparing it with a measuring tape in the video image. The standard deviation of the arm position in the horizontal left/right direction (parallel to the touch screen) was 0.24 cm during prism-reaches, and 0.17 cm during normal reaches. The observed difference of 0.1 cm between the mean positions in normal and prism trials was not significant (P = 0.24, t-test).
The reach task was defined in the perceived visual space in all task conditions. For instance, in the prism context with a perceived right-side visual cue, the monkeys would need to physically reach to the left in order to visually bring the hand toward the memorized visual cue location (prism pro condition, lower left panel in Fig. 1B). In the prism anti trials (lower right panel in Fig. 1B), a perceived right-side visual cue would be associated with a physical rightward movement in order to visually bring the hand to the left (away from the perceived visual cue).
Left and right cues and pro and anti trials were randomly interleaved from trial to trial. Normal and prism trials were alternated in blocks of 40 trials by manually switching between the prism and the empty box in the aperture. Most recording sessions had four blocks, with two blocks in each viewing context. There were only two possible visual cue locations either to the left or to the right of the central fixation spots, at constant positions over all experimental sessions. As a consequence, the reach targets were not always centered on the response fields of the recorded neurons, since simultaneously recorded neurons often have different response fields.
Neural Data Acquisition
We simultaneously recorded extracellular spikes and LFPs with a five-channel micro-drive (“mini-matrix”; Thomas Recording, Germany) from the PRR of two rhesus monkeys. Pre-surgical structural magnetic resonance imaging (MRI) guided chamber placement on the right hemisphere, contra-lateral to the handedness of each monkey (chamber center at bottom surface of chamber in monkey F: 7 mm lateral, 13 mm posterior; S: 6 mm lateral, 10 mm posterior). The selection of recording locations within the chamber was guided by a postsurgical structural MRI scan, as illustrated for both monkeys in Figure 2. The MR images are aligned to the orientation of the implanted chamber. Recording locations are superimposed onto the transverse section. Chambers were implanted surface-normal to the skull, not aligned with the stereotactic vertical axis (monkey F: A-P tilt 28° and M-L tilt 3° right-lateral; monkey S: A-P tilt 17° posterior, M-L tilt 2° right-lateral). Note that the selected layers in Figure 2 approximate the depth of the different recording sites, at around 4 mm below dura.
Our recordings were located within the functionally defined region “PRR” occupying the medial wall along the intraparietal sulcus, with an anterior/posterior span of the recording sites of approximately 3–5 mm and relatively limited variations in the medial/lateral dimension (within 1 mm). In monkey S, sites were located in the rostral portion of medial intraparietal sulcus (MIP), potentially overlapping with the intraparietal parts of area PE (PEip: Galletti et al. 1999). In monkey F, recording sites were located in the more caudal portion of MIP. Note that due to the tilted viewing angle, the distance of the recording sites from the occipitoparietal sulcus is difficult to estimate from Figure 2, and the chambers in two monkeys were tilted by different angles. Results from the two monkeys were quantitatively very similar (see Results), and we do not have an indication that neurons with specific encoding properties, e.g., visual vs. physical goal, would have clustered spatially within the range that we sampled. We therefore consider the data from both animals as coming from equivalent functional regions and most likely from MIP.
The raw signals were pre-amplified (20×; Thomas Recording), band-pass filtered into broadband data (154 Hz to 8.8 kHz) and LFPs (0.7–300 Hz). The band-pass filtered LFPs were digitized and sampled at 1000 Hz. Broadband signals were further amplified (400–800×; Plexon, Dallas, TX, USA), before online spike-sorting was conducted (Sort Client; Plexon). Additional to spike times, the spike waveforms were recorded, sampled at 40 kHz, and later subjected to offline sorting for the control of sorting quality (Offline Sorter; Plexon). All recorded and sufficiently well-isolated single spiking units, regardless of task-relatedness or directional selectivity, were included in the neural data analyses. Only LFPs from those channels which also contained isolated single-unit data were included in the LFP analysis.
Spiking Data Analysis
We quantified neural spiking responses by averaging spike rates across trials in different phases of the trial, but primarily in the last 800 ms before the "go" signal (the late delay period) to capture the sustained planning activity. We defined a direction selectivity index (DSI) as a contrast between the average spike rates (r) in left (L)- and right (R)-side cued trials: DSI = (rL − rR)/(rL + rR).
The cue position was defined in the subject's perceived visual space (i.e., viewed through the prism if present). The left–right direction selectivity was considered significant at P < 0.05 (t-test between rL and rR).
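As a minimal sketch, this index and its significance test could be computed as follows (assuming the standard contrast form (rL − rR)/(rL + rR); the function and variable names are ours, not from the original analysis code):

```python
import numpy as np
from scipy import stats

def dsi(rates_left, rates_right, alpha=0.05):
    """Signed left-right direction selectivity index with a t-test.

    Assumes the standard contrast form (rL - rR) / (rL + rR); inputs are
    per-trial spike rates from left- and right-cued trials, with left and
    right defined in the subject's perceived visual space.
    """
    r_l, r_r = np.mean(rates_left), np.mean(rates_right)
    index = (r_l - r_r) / (r_l + r_r)
    _, p = stats.ttest_ind(rates_left, rates_right)  # two-sided t-test
    return index, p < alpha
```

Under this convention a positive index indicates higher firing for left-side cues; only the sign and its consistency across conditions matter for the neuron classification.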
We classified the neurons in the following ways: In the reversing-prism task, we compared the directional selectivity between normal and prism trials (prism dissociation). If directional selectivity did not reach significance in both the normal and the prism trials, the neuron was defined as unclassifiable. When directional selectivity was significant in both normal and prism trials, the DSIs could have either the same signs (classified as visual goal neuron) or opposite signs (physical goal neuron).

In the reversing-prism anti-reach task, we first compared the directional selectivity between pro and anti trials (anti dissociation) and selected those neurons which showed motor goal encoding during the planning period according to this anti dissociation. In a second step, we then applied the same classification procedure as described above to these motor goal neurons.

In detail, the procedure for the reversing-prism anti-reach task was as follows (see also diagram in Fig. 4B): First, we compared the directional selectivity between pro and anti trials for each neuron. If directional selectivity did not reach significance for both the pro and anti trials, the neuron was defined as unclassifiable. If directional selectivity reached significance for both pro and anti trials, the DSIs could have either the same signs (visual memory neuron) or opposite signs (motor goal neuron). Note that this anti dissociation, as a conservative approach, was computed independently in the normal and the prism context. Yet, we obtained consistent results between the normal and the prism trials: except for one neuron, all classifiable neurons showed the same motor goal selectivity in normal and in prism trials during the late memory period (see Results). Second, we computed the prism dissociation as in the reversing-prism task, independently in the pro and the anti trials. Again, neuron classifications according to the prism dissociation did not yield contradictory results when computed for pro and for anti trials. For this reason, we report the population statistics of the prism dissociation conjointly for pro and anti trials in the reversing-prism anti-reach task (Fig. 4C).
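The sign logic of the prism dissociation can be sketched as follows (an illustrative simplification with hypothetical names, built on per-context DSIs and their significance tests):

```python
def classify_prism(dsi_normal, sig_normal, dsi_prism, sig_prism):
    """Classify a neuron by the prism dissociation (illustrative sketch).

    DSIs are defined in perceived visual space, so same signs across
    viewing contexts imply visual goal encoding, and opposite signs
    imply physical goal encoding.
    """
    if not (sig_normal and sig_prism):
        return "unclassifiable"
    return "visual goal" if dsi_normal * dsi_prism > 0 else "physical goal"
```

The anti dissociation follows the same sign logic with pro/anti trials in place of normal/prism trials, yielding visual memory neurons (same signs) versus motor goal neurons (opposite signs).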
The |…| symbol denotes the absolute values. We computed the ratios because we wanted to primarily compare the DSI consistency in sign and strength (tuning similarities or reversals) across task conditions, independently of how strong the directional selectivity was in individual conditions. For example, DSI ratios of 1 and −1 indicate that the DSIs in the corresponding comparison had the same selectivity strength, with, respectively, the same and the opposite left–right selectivity.
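The ratio formula itself is not reproduced in this excerpt; one hypothetical form consistent with the stated properties (values of 1 and −1 for equal-strength DSIs with, respectively, the same and opposite signs, bounded via absolute values) would be:

```python
import numpy as np

def dsi_ratio(dsi_a, dsi_b):
    """Hypothetical reconstruction, not the published formula: signed
    ratio of the weaker to the stronger |DSI|, carrying the sign of
    their product. Equals 1 for identical DSIs and -1 for
    equal-strength, sign-reversed DSIs.
    """
    return (np.sign(dsi_a * dsi_b)
            * min(abs(dsi_a), abs(dsi_b)) / max(abs(dsi_a), abs(dsi_b)))
```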
To quantify neural selectivity for individual experimental parameters and interactions between the different factors, we also employed an ANOVA, treating viewing context, pro/anti rule, and cue direction as independent factors (see Supplementary S1). Note, though, that while ANOVA tests can tell whether there is, in general, a main effect of direction (or context, or rule) and interactions between factors, they are not informative about how the directional selectivity varies specifically across task conditions, e.g., between the two viewing contexts. Yet, the core hypothesis emerging from our research question makes a specific prediction about the inversion of the spatial selectivity between the two viewing contexts. For this reason, we used the DSI as our main analysis for quantifying neurons. To complement these "categorical" DSI analyses, we also examined the general neural response profiles and spatial selectivity strength in each task condition, to see whether and how these intermediate neural properties were modulated by the viewing context or task rules, and how they might be related to kinematic behavioral parameters.
Finally, we also characterized the directional selectivity of each neuron in different task periods of the trial. The same data analysis was conducted for neuronal activities during the cue period (200 ms during the visual cue presentation), during the early delay period (from 100 to 900 ms after visual cue offset), during the late delay period (last 800 ms before the “go” cue), and during the reach movement period (200 ms before reach target acquisition). Different window lengths were used because the visual cue and movement periods were relatively brief compared with the delay period. The movement period was chosen such that the average 120 ms movement time plus an estimated 80 ms delay from muscle activation onset to measurable lifting of the finger was covered.
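Such window-based rate estimates might be computed as in the following sketch (the window names, event alignment, and helper function are ours, not from the original analysis):

```python
import numpy as np

# Hypothetical analysis windows: (alignment event, start, stop) in seconds
WINDOWS = {
    "cue":         ("cue_onset", 0.0, 0.2),
    "early_delay": ("cue_offset", 0.1, 0.9),
    "late_delay":  ("go_signal", -0.8, 0.0),
    "movement":    ("target_acquired", -0.2, 0.0),
}

def rate_in_window(spike_times, event_time, start, stop):
    """Mean firing rate (Hz) of one trial in [event+start, event+stop)."""
    t = np.asarray(spike_times)
    lo, hi = event_time + start, event_time + stop
    return np.sum((t >= lo) & (t < hi)) / (hi - lo)
```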
To test the statistical significance of the observed visual goal neurons and physical goal neurons, we applied permutation tests at different levels. The first permutation served as an alternative to the t-test for the directional selectivity of individual neurons. For this, we shuffled trials across all task conditions (left/right × normal/prism × pro/anti) within each neuron. Directional selectivity was considered significant if the original DSI value fell outside the 95% confidence interval of the shuffled DSIs (comparable with the P < 0.05 criterion used in the t-test).
With the second permutation, we asked the more specific question: Provided that we preserve each neuron's general left/right selectivity, and meanwhile provided that all neurons were selective for the physical goal, how likely would neurons then be falsely classified as visual goal neurons, due to the remaining uncertainty in spike rate estimation? To test this, we grouped the task conditions across which a physical goal neuron is expected to be invariant. In the reversing-prism task, normal-left and prism-right trials fell into one group while normal-right and prism-left trials fell into the other group. In the reversing-prism anti-reach task with 2 × 4 conditions, normal-pro-left, prism-pro-right, normal-anti-right, and prism-anti-left trials fell into one group while normal-pro-right, prism-pro-left, normal-anti-left, and prism-anti-right fell in the other group. For each neuron, we then shuffled trials within each of these two groups (N = 1000 permutations) and re-computed DSIs in each task condition. The dashed ellipses in Figures 3B and 4C mark the 99% confidence intervals estimated from the resulting shuffle distribution of the DSIs combined from all neurons. Neurons that comply with the physical goal hypothesis should fall into this range. Neurons that fall outside this range and have a distance to the boundary of the ellipse that is large compared with the corresponding diameter of the ellipse, are extremely unlikely to be explained by random fluctuations under the assumption that the physical goal hypothesis is exclusively true.
The same analysis could be performed and the same conclusion be reached by starting from the null hypothesis that all neurons are tuned for the visual goal. In this case, the permutation would be conducted within groups of conditions which share the same visual goal. The confidence ellipse in Figures 3B and 4C would be oriented orthogonally and all neurons falling outside the ellipse would have to be considered unlikely to be explained by random fluctuations under the assumption that the visual goal hypothesis is exclusively true (not shown).
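The group-wise shuffle under the physical-goal null hypothesis can be sketched as follows (trial rates are exchangeable only within groups of conditions sharing the same physical goal; the function and condition labels are ours):

```python
import numpy as np

def shuffle_within_groups(rates, conds, groups, rng):
    """Permute trial rates only among trials whose condition label falls
    in the same group; under the physical-goal null hypothesis, rates
    within each group are exchangeable."""
    rates = np.asarray(rates, dtype=float)
    conds = np.asarray(conds)
    out = rates.copy()
    for group in groups:
        idx = np.flatnonzero(np.isin(conds, list(group)))
        out[idx] = rng.permutation(rates[idx])
    return out

# Reversing-prism task: physical-goal-invariant condition groups
GROUPS = [{"normal-L", "prism-R"}, {"normal-R", "prism-L"}]
```

Re-computing the per-condition DSIs on each of the N = 1000 shuffles yields the null distribution from which the confidence ellipses in Figures 3B and 4C are derived.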
LFP Data Analysis
LFP spectrograms (time-resolved amplitude density spectra) were estimated via a discrete Fourier transformation of the LFPs using a sliding window of 384 ms length (tapered with a Hamming window) and a step size of one-quarter of the window length (96 ms). For each LFP site, raw spectrograms were computed separately for each reach direction and in each task condition. We then computed a left versus right difference of spectrograms in each task condition before averaging across all LFP sites. Directional selectivity in each time–frequency bin was considered statistically significant if the Bonferroni-corrected P-value of a t-test was <0.05. The alpha criterion was set to 0.05 divided by the total number of time–frequency bins in each spectrogram. This marks a very conservative correction, since the spectral density in neighboring time bins is not fully independent due to the sliding-window approach.
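A single-channel version of this estimate can be sketched as follows (the exact FFT normalization and taper conventions of the original analysis are assumptions):

```python
import numpy as np

def lfp_spectrogram(lfp, fs=1000, win_ms=384, step_div=4):
    """Sliding-window amplitude spectrogram of one LFP trace.

    384 ms Hamming-tapered windows with 96 ms steps (one-quarter of the
    window length), matching the parameters in the text.
    """
    n = int(fs * win_ms / 1000)            # 384 samples at 1 kHz
    step = n // step_div                   # 96 samples
    taper = np.hamming(n)
    starts = range(0, len(lfp) - n + 1, step)
    spec = np.array([np.abs(np.fft.rfft(lfp[s:s + n] * taper))
                     for s in starts])
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    times = np.array([s + n / 2 for s in starts]) / fs  # window centers (s)
    return spec, freqs, times
```

The Bonferroni-corrected criterion then amounts to alpha = 0.05 / spec.size per time–frequency bin of one spectrogram.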
Results

Both monkeys were well acquainted with the two viewing contexts and performed the reaching task at high proficiency. The overall success rates, after subtracting trials with ocular or hand fixation breaks, late responses, and erroneous choices, were 79% for monkey F and 76.3% for monkey S in the normal viewing context, and 79% (F) and 78.5% (S) in the prism viewing context. There was no significant difference in the success rates between the two viewing contexts for monkey F (P = 0.95, paired t-test across N = 110 recording sessions), and a minor difference for monkey S (Δ = 2.2%, P = 0.004, N = 107). Most error trials were attributable to terminations early in the trial (ocular or hand fixation breaks before the "go" signal), rather than to confusion of the reach directions. The percentages of correct reach choices in non-aborted trials were 99.3% (F) and 98.8% (S) in the normal viewing context, and 99.2% (F) and 98.6% (S) in the prism viewing context. For both monkeys, there were no significant differences in choice performance between contexts (P = 0.92 for F and P = 0.71 for S, paired t-test).
The monkeys were also well accustomed to conducting the movements under reversed vision. Once the monkeys initiated a movement, there were no indications of changes-of-mind during action execution due to the reversed visual hand feedback. Reach trajectories were recorded for monkey S in the combined reversing-prism anti-reach task to quantitatively test this behavior. Trajectories and velocity profiles of the index fingertip from an example session demonstrate stereotyped smooth movements with unimodal velocity profiles within each task condition and in each direction (Supplementary S2). Reach amplitudes showed mixed results across datasets and across monkeys. Monkey S systematically undershot the targets in each task condition, but significantly more so when the prism was present in the reversing-prism anti-reach task, while monkey F overshot the targets, but only in the normal viewing context. In other words, significantly shorter reach amplitudes under prism vision were observed for the monkey F prism task and the monkey S reversing-prism anti-reach task, but not for the monkey S reversing-prism task (Supplementary S3).
Coexistence of Visual Goal and Physical Goal Selective Neurons
To test our main hypothesis, we asked if the delay period activity in PRR reflected the spatial parameters of the physical goal or the visual goal. For this, we tested the correlation between the neuronal activity of each neuron during the instructed delay and the direction of the impending physical reach (physical goal), or the direction of the impending visual movement (visual goal).
Figure 3A shows the peristimulus time histograms (PSTHs) of two example neurons. During the delay period, the directional selectivity of the example neuron in the left panel correlated with the physical goal direction: the neuron had higher firing rates for rightward physical goals, regardless of visual goal directions during the sustained delay period. We will call this type of response pattern “physical goal encoding”; it is defined by a significant spatial selectivity during the delay period in both the normal and prism viewing trials, with the preferred directions (PDs) in the two viewing contexts being opposite when calculated with respect to the visual cue direction, that is, being the same when calculated with respect to the physical goal direction (see Materials and Methods). Notably, besides a modulation by physical goal directions, the neuron's activity during the late delay period diverged between normal and prism conditions (continuous red vs. continuous green). Since this separation was not present during the early delay period, but became even more pronounced during the movement period, we consider this neuron to be additionally influenced by the impending visual movement parameters.
In contrast, the example neuron in the right panel of Figure 3A was characterized by being more active in the other two of the four task conditions. These two conditions shared the same visual goal to the right, but had opposite-side physical goal directions. Thus, this neuron's responses correlated best with the visual goals. We will call this type of response pattern "visual goal encoding"; it is defined by a significant spatial selectivity during the delay period in both the normal and prism viewing trials, with the PDs in the two viewing contexts being the same when calculated with respect to the visual cue direction (which is the same as the visual goal direction in this task). In addition to a modulation by the visual goal direction, this neuron also showed significant and complex modulation by the viewing context. Indeed, during the delay period the planning activity for the PD was suppressed in prism trials compared with normal trials. During the movement period, the response to the non-preferred direction (ND) was increased in prism trials compared with normal trials, which means that the relative response strength in different task conditions was not the same during the delay and the movement periods.
In the following population analysis, we will mainly focus on the delay period activity to test our specific hypothesis. We will first report results on the main modulation of visual goal versus physical goal representations, and then describe the more quantitative analysis on the context modulation of neural activity. Statistics on the frequency of interaction between directional and contextual modulations can be found in Supplementary S1. Finally, we will present data on spatial selectivity during visual cue presentation and during movement.
For the population statistics, we first compared the directional selectivity of each neuron between the two viewing contexts and quantified it with a signed left–right directional selectivity index (DSI) during the last 800 ms of the delay period (see Materials and Methods). Left and right were defined in the visual space, that is, as seen by the subjects through the aperture with or without the prism. We plotted the DSIs obtained in the normal viewing context against those from the prism viewing context separately for all neurons from each monkey (Fig. 3B). The fact that we quantified our results via such a classification approach does not imply that the underlying distribution of neural selectivity is necessarily categorical, and we do not imply that PRR contains distinct “classes” of neurons (see Discussion). In fact, the distributions formed a continuum from visual goal selectivity (first and third quadrants) to physical goal selectivity (second and fourth quadrants). Since both monkeys showed very similar percentages for the spatial selectivity classification of neurons, we report percentages in the pooled data as well as in the individual animal's data. Our datasets comprised 362 recorded PRR neurons (monkey F: 199; S: 163). Of those, 75% were task-related (significantly left–right selective in at least one of the viewing contexts—F: 151 [73%]; S: 119 [76%]). Of these task-related neurons, 78% were significantly spatially selective in the normal viewing context (F: 118 [78%]; S: 93 [78%]), 59% in the prism viewing context (F: 86 [57%]; S: 73 [61%]), and 37% in both the normal and prism viewing context (F: 53 [35%]; S: 47 [39%]). This latter fraction was eligible for further classifications according to our specific research question (see Materials and Methods).
This comparatively small fraction of significantly selective neurons should not be surprising given the fact that the left–right task design does not match the PD of many neurons, which might be aligned to the vertical axis. Of the total 100 eligible neurons, 76% were classified as physical goal neurons (F: 39 [74%]; S: 37 [78%]), while 24% were classified as visual goal neurons (F: 14 [26%]; S: 10 [21%]) (Fig. 3B).
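The eligibility criterion and the sign-based classification described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (hypothetical firing-rate arrays, a plain two-sample t-test, and a rate-normalized DSI); the original analysis pipeline may differ in detail:

```python
import numpy as np
from scipy import stats

def signed_dsi(rates_left, rates_right):
    """Signed left-right directional selectivity index:
    positive -> rightward preference, negative -> leftward (in visual space)."""
    l, r = np.mean(rates_left), np.mean(rates_right)
    return (r - l) / (r + l)

def classify_goal_encoding(normal_left, normal_right,
                           prism_left, prism_right, alpha=0.05):
    """Classify a neuron as 'visual' or 'physical' goal encoding.
    Left/right are defined in visual space, as seen through the aperture.
    Returns None if the neuron is not significantly left-right selective
    in BOTH viewing contexts (the eligibility criterion)."""
    p_norm = stats.ttest_ind(normal_left, normal_right).pvalue
    p_prism = stats.ttest_ind(prism_left, prism_right).pvalue
    if p_norm >= alpha or p_prism >= alpha:
        return None  # not eligible for classification
    dsi_norm = signed_dsi(normal_left, normal_right)
    dsi_prism = signed_dsi(prism_left, prism_right)
    # Same-signed DSIs w.r.t. the visual cue direction -> visual goal
    # encoding; opposite-signed DSIs -> physical goal encoding.
    return 'visual' if dsi_norm * dsi_prism > 0 else 'physical'
```

The sign product captures the logic of Figure 3B: same-signed DSI pairs fall into the first and third quadrants (visual goal), opposite-signed pairs into the second and fourth quadrants (physical goal).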
We did not observe a preferred ipsi- or contra-laterality in either neuron type. Of the 76 physical goal neurons (F: 39; S: 37), 47% preferred ipsi-lateral goal locations (F: 17 [44%]; S: 19 [51%]) and 53% preferred contra-lateral goal locations (F: 22 [56%]; S: 18 [49%]). Of the 24 visual goal neurons (F: 14; S: 10), 42% preferred ipsi-lateral space (F: 6 [43%]; S: 4 [40%]) and 58% preferred contra-lateral space (F: 8 [57%]; S: 6 [60%]). A χ² test on the number of ipsi- and contra-preferring neurons among the physical and visual goal neurons showed no significant difference in the laterality (P = 0.63). This means that visual and physical goal neurons showed a balanced left/right preference in spatial motor goal encoding.
The left–right selectivity patterns of individual neurons that led to their classification as visual goal or physical goal neurons did not just reflect random variation of directional selectivity. We confirmed this in three independent ways. First, we used conservative criteria when we classified the individual neurons as being visual or physical goal selective: we required that the left–right difference in firing rate was statistically significant in both viewing contexts (normal and prism). To this end, we tested each firing rate difference with a t-test at P = 0.05, which means that for a random process with independent spike rate fluctuations in each task condition the chance level for a neuron to become significant for both conditions was 0.05² = 0.0025. Second, we conducted an additional non-parametric permutation test at the level of individual neurons. The null hypothesis for this permutation test was the absence of spatial selectivity (see Materials and Methods). This test confirmed our original single-neuron classification based on the t-tested pair-wise spike rate differences. Third, we used a different permutation test which tested against the null hypothesis that all task-related (and hence directionally selective) neurons are physical goal selective. In other words, we asked if it was possible that all task-related neurons were actually encoding the physical goals, but had been misclassified as visual goal neurons due to random fluctuations and measurement uncertainty. For this, we shuffled the task conditions so that the directional selectivity was preserved, but so that the resulting surrogate data complied with the null hypothesis of pure physical goal encoding (see Materials and Methods). The dashed ellipses in Figure 3B show the 99% confidence limits for these shuffled values. The experimentally observed DSI values of many neurons fell outside the 99% confidence limits of the shuffled values (Fig. 3B).
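The chance-level arithmetic and the third shuffling control can be made concrete with a short sketch. This is our own simplified re-implementation, not the original code: surrogate DSI pairs are generated under the null hypothesis of pure physical goal encoding by shuffling the viewing-context labels within each physical goal direction, which preserves directional selectivity while enforcing the null:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent tests at alpha = 0.05 have a joint false-positive
# rate of alpha squared (0.0025).
alpha = 0.05
joint_chance = alpha ** 2

def signed_dsi(left, right):
    """Signed left-right selectivity index from trial firing rates."""
    l, r = np.mean(left), np.mean(right)
    return (r - l) / (r + l)

def shuffled_dsi_pairs(trials, n_shuffles=1000):
    """trials: dict mapping (context, physical_goal_dir) -> firing-rate array.
    Under pure physical goal encoding, the context label ('normal'/'prism')
    is exchangeable within each physical goal direction, so shuffling it
    yields surrogate data that comply with the null hypothesis."""
    pairs = []
    for _ in range(n_shuffles):
        surr = {}
        for d in ('left', 'right'):
            pooled = np.concatenate([trials[('normal', d)], trials[('prism', d)]])
            rng.shuffle(pooled)
            n = len(trials[('normal', d)])
            surr[('normal', d)], surr[('prism', d)] = pooled[:n], pooled[n:]
        pairs.append((signed_dsi(surr[('normal', 'left')], surr[('normal', 'right')]),
                      signed_dsi(surr[('prism', 'left')], surr[('prism', 'right')])))
    return np.array(pairs)
```

A 99% confidence region fitted to these surrogate DSI pairs then plays the role of the dashed ellipses in Figure 3B; observed DSIs falling outside it are inconsistent with pure physical goal encoding.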
The result of this third test strongly indicates that the existence of visual goal neurons cannot be explained by inherent random variability of left–right directional selectivity under the null assumption of pure physical goal encoding.
In summary, even though the total number of visual goal neurons which complied with the set of strict classification criteria was small, their existence marked a highly significant deviation from what one would have to expect from random variations or pure physical goal encoding.
Visual Goal Neurons Are Not Related to Visual Memory Encoding
In the reversing-prism task, the location of the visual goal was congruent with the location of the preceding visual cue. Therefore, the visual encoding during movement planning in our main dataset could in principle reflect visual memory encoding. To rule out this possibility, we conducted a second experiment in which we combined the reversing-prism task with an anti-reach task (Crammond and Kalaska 1994; Zhang and Barash 2000; Gail and Andersen 2006; Westendorff et al. 2010). For reasons unrelated to this study, this experiment could only be conducted in one of the monkeys (S). Monkey S was trained to move its visually perceived hand either toward the visually cued location (pro trial) or to the location symmetrically opposite to the cue (anti trial). Pro and anti trials were carried out under either a normal viewing context or a prism viewing context (Fig. 1B).
The combined reversing-prism anti-reach task design spatially dissociated the visual memory, the physical goals, and the visual goals (Fig. 1B). The pro versus anti comparisons dissociated the visual spatial memory of the cue from the spatial motor goals, since the same cue position instructed opposite-side movement goals in pro and anti reaches. For example, the same right-side cue was associated with either a left-side (anti) or right-side (pro) motor goal (Fig. 1B, top 2 panels), and vice versa. Hence, the pro/anti comparison (hereafter referred to as “anti dissociation”) is well suited to identify spatial neural representation of an intended motor goal, when compared with representations of the visual memory of the cue. Yet, this anti dissociation alone does not allow one to decide if such motor goal representations indicate the preparation of the intended physical hand movement (physical goal, black hand symbol) or the future visual movement (visual goal, orange hand symbol). Therefore, to dissociate physical goals from visual goals, we asked the monkey to conduct the pro and anti reaches in the two different viewing contexts: the normal viewing context (top panels) and the prism context (bottom panels), as in experiment 1. Most importantly, this combined prism and anti task created two pairs of conditions where the preceding sensory cue and the impending physical reach directions were identical, but the impending visual movements were opposite (both diagonals in Fig. 1B). For example, if one compares the two diagonal conditions “normal pro” and “prism anti”, then the monkey received the same right-side visual instruction and had to conduct the same right-side physical movement, but with opposite visual movement of the hand. Therefore, by combining the anti dissociation and the prism dissociation, one is able to fully separate visual goal signals from visual memory and from physical goal signals.
The example neuron in Figure 4A shows representative response patterns in the combined reversing-prism anti-reach task. This neuron was selective for the visual goal as opposed to the physical goal or the visual memory during the delay period. First, in the trials with the pro task rule, both a right-side physical goal during normal viewing (dark green) and a left-side physical goal in the prism viewing condition (dark red) elicited neural responses above baseline level (Fig. 4A, left panel). The visual goal directions were identical in these two conditions (to the right direction). However, neither of the other two conditions with left-side visual goal (but with opposite-side physical goals) elicited a response. This indicated that the example neuron was selective for the visual goal, not the physical goal. When we compared the neural responses between the normal and the prism conditions in the anti reach trials for the same neuron, the same pattern of visual goal encoding emerged during the late delay period (Fig. 4A, right panel). Second, the directional selectivity reversed between pro and anti trials, and this was true in both the normal (green vs. cyan lines) and the prism viewing conditions (red vs. blue lines). A right-side cue in pro reaches elicited similarly strong responses as a left-side cue in anti reaches, and vice versa (Fig. 4A, contrast across left and right panels). This reversal of the selectivity in the anti dissociation shows that this visual goal neuron was selective for the direction of the intended movement (motor goal) rather than the visual memory.
We classified all task-related neurons (significant DSIs in at least one of the four task conditions; 71/81 = 88% of all recorded neurons in the combined task) according to their directional selectivity in both the anti dissociation and the prism dissociation (see Fig. 4B and Materials and Methods). In brief, visual memory neurons would have significant left–right selectivity with same-signed DSIs in the pro and anti trials. Motor goal neurons were characterized by significant but opposite-signed DSIs in the pro and anti trials. These motor goal neurons, in a second step, were further classified as visual goal or physical goal neurons. Visual goal neurons were identified by same-signed DSIs in the normal and prism trials. Physical goal neurons were identified by significant opposite-signed DSIs in the normal and prism trials.
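The two-step decision rule for the combined task can be written out compactly. This is a hypothetical sketch with signed DSIs defined in visual space (i.e., relative to the cue direction); the significance testing of each DSI, required in the actual classification, is omitted for brevity:

```python
def classify_combined(dsi):
    """dsi: dict keyed by (rule, context), with rule in {'pro', 'anti'} and
    context in {'normal', 'prism'}, holding signed left-right DSIs
    computed relative to the visual cue direction."""
    # Step 1, anti dissociation: same-signed DSIs across pro and anti
    # trials follow the cue location -> visual memory encoding.
    if dsi[('pro', 'normal')] * dsi[('anti', 'normal')] > 0:
        return 'visual memory'
    # Step 2, prism dissociation (among motor goal neurons): same-signed
    # DSIs across the two viewing contexts indicate visual goal encoding,
    # opposite-signed DSIs indicate physical goal encoding.
    if dsi[('pro', 'normal')] * dsi[('pro', 'prism')] > 0:
        return 'visual goal'
    return 'physical goal'
```

For instance, a neuron preferring rightward physical movements yields a positive DSI in normal-pro trials but negative DSIs in anti and prism-pro trials, and is classified as a physical goal neuron.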
To illustrate the spatial selectivity of each neuron across all conditions, that is, in the anti dissociation and in the prism dissociation in combination, we plotted their DSI ratio from the anti dissociations (average across normal and prism contexts) against the DSI ratio from the prism dissociations (average across pro and anti rules) (Fig. 4C). Computing the average across the viewing contexts in the anti dissociation and across the pro/anti rule in the prism dissociation was justified since the separate analysis in each viewing context and each task rule led to consistent results (see also Materials and Methods). As a result, in this ratio plot, only two of four quadrants were populated with neurons which were significantly selective in two or more task conditions. These are the two left quadrants, which correspond to motor goal encoding (negative DSI ratio for anti dissociation). The ratio plot shows that the selectivity spreads widely between visual goal selectivity (positive DSI ratio in prism dissociation, top left quadrant) and physical goal selectivity (negative DSI ratio in prism dissociation, bottom left quadrant), with a predominance of physical goal encoding. Importantly, the upper left quadrant, which corresponds to visual goal encoding, contains a substantial fraction of neurons. We analyzed DSI ratios rather than DSI differences since the distinction between visual goal and physical goal encoding needs to be independent of the strength of the left–right selectivity (absolute DSI value). Rather, the question was whether the signs of the left–right selectivity (the DSI) were preserved or inverted across task conditions.
Consistent Encoding across Tasks and Monkeys
The anti dissociation in the reversing-prism anti-reach task was essential to rule out the possibility that visual goal neurons reflect visual memory (Fig. 1B). As a result, the anti dissociation almost exclusively revealed motor related neural selectivity during the late delay period (75% motor selective neurons vs. 1% visual memory selective neurons; Fig. 4C). In retrospect, this indicates that we can classify the individual neurons as visual goal or physical goal neurons based on the prism dissociation alone (experiment 1). This is because sustained visual memory encoding during the delay period was absent and hence could not constitute a confounding factor (as shown in experiment 2). To confirm this, we conducted a control analysis in which we used the data of monkey S during the combined reversing-prism anti-reach task and ignored the anti dissociation. We then classified the neurons as we did in experiment 1. When we did so, the group of neurons which were classified as visual goal selective was exactly the same group of neurons as in the complete analysis of experiment 2. The same was true for the physical goal neurons, except for only one additional neuron, which was classified as a physical goal neuron. This means that, for the single-neuron data in our task design, we can test the visual versus physical goal encoding directly with the reversing-prism task. This allowed us to compare three independent datasets across both monkeys.
The percentages of visual goal and physical goal neurons were comparable across monkeys and datasets. All datasets contained close to 10% visual goal neurons and 26–41% physical goal neurons when we conservatively calculated the percentages relative to the total number of neurons (Fig. 5A). This is a very conservative (under)estimation in two senses. First, a large fraction of neurons in the pure reversing-prism task dropped out of the analysis because their preferred directional selectivity would not match our left–right task design. When we computed the percentages relative to the number of left–right direction selective neurons, then between 19% and 26% of neurons showed visual goal selectivity during the late delay period (14/53 = 26% prism task [F], 10/47 = 21% prism task [S], 7/36 = 19% combined prism-anti task [S]), whereas between 74% and 81% of neurons showed physical goal selectivity (39/53 = 74% prism task [F], 37/47 = 79% prism task [S], 29/36 = 81% combined prism-anti task [S]) (Fig. 5A). Second, the fraction of visual goal neurons depended on the choice of the time window within the delay period (Fig. 5B). So far, we had focused on the late delay period, but the fraction of visual goal neurons during the early delay period was higher than during the late delay period. During the early delay, 17% (prism task, F), 16% (prism task, S), and 12% (combined prism-anti task, S) of all task-related neurons were visual goal neurons. In comparison, in the late delay period, 9%, 8%, and 10% of all neurons were visual goal selective (Fig. 5A,B). Note that differences between early and late delay epochs might be underestimated due to partial overlap of the time windows in trials with random short memory lengths.
Visual Goal Neurons Are Not Selective for the Visual Input
We tested if the neurons that were classified as visual goal neurons during the delay period would also be selective for actual visual input. For this we tested the influence of the visual cue on the neural selectivity during the cue period, as well as the influence of the actual visual hand feedback during the movement period.
A previous study showed that a subset of PRR neurons—which developed motor goal selectivity during the delay period—encoded the position of the visual instruction stimulus during the preceding cue period (visuomotor neurons). Other neurons in the same experiment were only directionally selective after the cue presentation, when the monkeys already knew the pending motor goal (motor goal neurons) (Gail and Andersen 2006). If the visual goal neurons in our study were identical to the neurons which are selective for the actual visual input, then the visual goal neurons should largely overlap with the visuomotor neurons. We tested this possibility by analyzing the spatial selectivity of visual goal neurons during the cue period for the dataset with the combined reversing-prism anti-reach task. The other two datasets were not eligible for this analysis, since the definition of visuomotor tuning depends on the anti dissociation (Gail and Andersen 2006).
Overall, directional selectivity was much less frequent in the cue period than in the delay period. 37% (30/81) of all recorded neurons showed significant directional selectivity in any of the four task conditions during the cue period, compared with 88% (71/81) during the late delay period. Specifically, of the 7 visual goal neurons, only one (1/7, 14%) showed visually related tuning during the cue period (Fig. 6, dashed arrows). On the one hand, this low percentage matches the expectation from our previous studies, in which only approximately 10%–15% of neurons showed a visually selective response to the briefly flashed cue in anti-reaches. On the other hand, the cue period response was most likely not optimal for each neuron since we used a relatively small visual eccentricity (5 cm, 7.1° visual angle) due to spatial constraints of the prism device and only two fixed cue directions (left or right), often not matching the optimal direction of recorded neurons. Importantly, most visual goal neurons (6/7, 86%) were not directionally selective during the cue period, as can be seen from the example neurons in Figure 3A (right panel) and Figure 4A. Consequently, visual goal neurons are not identical to visuomotor neurons.
We further asked if visual goal neurons would be selective for the actual visual feedback during the movement period. Indeed, the visual goal neurons were classified during the late delay period, that is, when no spatial visual input that could directly elicit a directionally selective response in the neurons was available. However, during the movement period, the spatial visual input about the hand is available. If visual goal neurons were visually responsive, the visual input during movement should then induce a visual selectivity. Therefore, we examined the spatial selectivity of visual goal neurons during the movement period in all three datasets.
The average discharge rate during the movement period was significantly different from the baseline firing rate in 98% of the PRR neurons in the reversing-prism task (F: 196 of 199; monkey S: 159 of 163). Of the total population, 71% exhibited significant spatial selectivity in the normal viewing context (F: 146 [73%]; S: 110 [67%]), 54% in the prism viewing context (F: 102 [51%]; S: 92 [56%]), and 40% in both viewing contexts (F: 78 [39%]; S: 68 [42%]). Visual goal neurons overlapped poorly with the group of neurons that showed visual movement related tuning during the reach movement period. Of the 31 visual goal neurons, only 16% (5/31) showed significant directional tuning that was selective for the visual movement during movement execution, while others were either not significantly direction tuned (55%, 17/31) or significantly direction tuned but selective for the physical movement direction (29%, 9/31) during the reach movement period (Fig. 6, solid arrows). This indicates that visual goal neurons are not necessarily selective for the visual input during the movement when visual hand feedback is available.
In summary, there was little overlap between the visual goal encoding during the delay period and the visual input encoding during either the cue period or the reach period. This shows that visual goal neurons do not represent direct bottom-up sensory or perceptual parameters, but rather show activity related to motor planning which correlates with visual sensory aspects of the planned movement.
Modulation of Spatial Selectivity Strength
Our main research question mostly addressed the directionality of the selectivity in individual neurons. In addition, we observed substantial modulations in the strength of the directional selectivity. Figure 7 shows population PSTHs for the PD and nonpreferred direction (ND) in each task condition and separately for the three datasets from both monkeys. The preferred direction was defined based on the late delay period activity and computed separately for each task condition and each neuron before computing the average PSTH across neurons within each condition. As such, differences in average firing rate between conditions cannot be explained by a mixture of trials with preferred and nonpreferred task parameters within one condition.
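The PD/ND sorting procedure described above can be sketched as follows. This is an illustrative re-implementation with hypothetical array shapes and names; the PD is chosen per neuron and per condition from the late-delay window of that same condition before averaging across neurons:

```python
import numpy as np

def population_psth(psths_left, psths_right, delay_window):
    """psths_left/right: arrays of shape (n_neurons, n_timebins) with each
    neuron's trial-averaged PSTH for leftward and rightward trials of ONE
    task condition. delay_window: (start, stop) bin indices of the late
    delay period. Returns the population-average PD and ND PSTHs."""
    t0, t1 = delay_window
    # Per-neuron PD from the late-delay activity of this same condition.
    prefer_right = (psths_right[:, t0:t1].mean(axis=1)
                    > psths_left[:, t0:t1].mean(axis=1))
    # Relabel each neuron's two PSTHs as PD and ND before averaging, so the
    # population average is not a mixture of preferred/nonpreferred trials.
    pd = np.where(prefer_right[:, None], psths_right, psths_left)
    nd = np.where(prefer_right[:, None], psths_left, psths_right)
    return pd.mean(axis=0), nd.mean(axis=0)
```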
During the delay period, the difference in average neural response strength between PD and ND was reduced in prism trials compared with normal trials for monkey F (prism-only task, Fig. 7A) and monkey S (combined prism-anti task; Fig. 7C). The same monkey S in the prism-only task did not show differences in average response strengths between viewing contexts (Fig. 7B). Figure 7D confirms this observation quantitatively based on the individual neurons' directional selectivity index. Absolute DSIs across neurons were smaller in the prism than the normal viewing context in the monkey F prism-only dataset (P < 0.001, paired t-test) and the monkey S combined prism-anti dataset (P < 0.001, two-way repeated-measures ANOVA with factors rule × context). This was not the case in the monkey S prism-only dataset (P = 0.19). Additionally, in the combined prism-anti task, no effect of task rule on the directional selectivity strength and no interaction between rule and context were found (Fig. 7C,D; P = 0.96). During the movement period, the spatial selectivity strengths were modulated by the viewing context in a manner analogous to the delay period (Fig. 7E).
Taken together, these results indicate that, besides its effect on the directionality of spatial selectivity, the viewing context also had a noticeable impact on the strengths of spatial selectivity. The two datasets in which the strength of selectivity in prism trials was reduced coincided with the datasets in which we also observed reduced average reach amplitudes (see above and also Supplementary S3).
Different Spatial Encoding in Simultaneously Recorded Spiking and LFPs
At the level of individual neurons, the sustained activity during motor planning was related to the intended movement for virtually all neurons which were selective in the reversing-prism task. This made the pro/anti comparison redundant in retrospect. Yet, providing evidence for visual goal encoding with the reversing-prism task alone is not necessarily a valid approach for other signal types (Fernandez-Ruiz et al. 2007) or brain areas (Shen and Alexander 1997; Ochiai et al. 2002), unless one has explicitly demonstrated that the very same signal is motor goal related, as achieved here by combining the prism dissociation with the anti dissociation.
In a previous fMRI study, Fernandez-Ruiz et al. (2007) showed with a reversing-prism task that fMRI signals in PPC correlated best with the visual location of a pointing target, rather than the physical direction of the movement. While we also found significant visual motor goal encoding in individual neurons, more neurons in PRR were best correlated with the physical goal. To resolve the seeming discrepancy, we characterized the spatial encoding properties in simultaneously recorded LFP signals, which are thought to capture subthreshold synaptic population activity, and are correlated more strongly to fMRI signals than to spiking data (Logothetis and Wandell 2004; Logothetis 2008).
Figure 8 shows the population averaged spatial selectivity of 62 LFP channels in the combined reversing-prism anti-reach task from monkey S. Each panel shows the time–frequency diagram (spectrogram) of the difference in LFP amplitude density (color-coded) between left-cued and right-cued trials. We used the left–right difference equivalently to what is typically used in EEG- or fMRI-based imaging data, for easier comparison with the human data. Sorting the LFP channels according to the preferred versus nonpreferred direction (as in the single-neuron data) would not change our conclusions, since preferred directions across the LFP channels from the same area and hemisphere were very similar in our data, even more so than in previous studies (Scherberger et al. 2005; Hwang and Andersen 2012). We can only compare the LFP spectrograms between conditions in time–frequency ranges (regions of interest, ROI) for which the LFP amplitude density was significantly direction selective. In the normal viewing context (upper two panels), LFPs were direction selective in similar ROIs in pro and anti trials. This common ROI ranged approximately from 15 to 20 Hz, started approximately 600 ms after spatial cue onset, and continued to the onset of movement. Furthermore, this direction selectivity was of opposite sign in pro and anti reaches, which means that LFPs within this overlapping (and only this) time–frequency domain were motor goal related, not related to visual memory. Similarly, in the prism viewing context (lower two panels), the LFP spectrograms revealed overlapping ROIs which were direction selective and also motor goal related. These overlapping ROIs ranged approximately from 15 to 25 Hz, started at approximately 250 ms latency, and terminated after 900 ms. Overall, the pro/anti dissociation indicates motor goal selectivity in LFP signals.
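The left-minus-right LFP amplitude spectrograms underlying this comparison can be computed along these lines. This is a simplified sketch; the windowing parameters, sampling rate, and variable names are placeholders of ours, not the values used in the original analysis:

```python
import numpy as np
from scipy.signal import spectrogram

def lr_difference_spectrogram(lfp_left, lfp_right, fs=1000.0):
    """lfp_left/right: arrays of shape (n_trials, n_samples) holding LFP
    traces from left-cued and right-cued trials of one task condition.
    Returns the frequency axis, time axis, and the trial-averaged
    left-minus-right difference in amplitude density."""
    # Magnitude spectrograms per trial, then average across trials.
    f, t, s_l = spectrogram(lfp_left, fs=fs, nperseg=256,
                            mode='magnitude', axis=-1)
    _, _, s_r = spectrogram(lfp_right, fs=fs, nperseg=256,
                            mode='magnitude', axis=-1)
    return f, t, s_l.mean(axis=0) - s_r.mean(axis=0)
```

The significance of the difference in each time–frequency bin, which defines the ROIs, would then be assessed across trials, for example with a permutation test.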
Yet, the comparison of the observed selectivity between prism and normal trials does not allow deciding clearly if this motor goal tuning in LFP encodes the visual goal or the physical goal. This is because the motor goal selective ROIs for prism pro and normal pro trials (left 2 panels) only barely overlap. In prism-pro trials, significant directional selectivity started at approximately 250 ms and ended at approximately 900 ms latency, whereas, in the normal pro condition, selectivity did not emerge before 600 ms latency. Hence, the LFPs in these nonoverlapping time–frequency ranges could neither be classified as visual goal nor as physical goal signals. Only LFPs in a small time–frequency range at a latency of around 600–900 ms were motor goal selective in both the prism and the normal viewing context, and those were selective for the visual goal, not the physical goal (same sign selectivity). Additionally, during cue presentation, LFPs in an additional time–frequency range (namely at frequencies below 15 Hz) were direction selective. Yet, since this was only the case in prism trials, LFPs in this low-frequency range did not qualify for the visual versus physical goal classification.
In summary, LFP signals did not show a result equivalent to the majority of our single-unit spiking data, but rather were selective for the visual goal in a narrowly confined time–frequency range.
We combined a reversing-prism task with an anti-reach task to dissociate the impending visual movement (visual goal) from the planned physical movement (physical goal) and from the preceding visual instruction stimulus. Individual neurons from the PRR in rhesus monkeys showed a broad spectrum of spatial selectivity ranging from visual goal to physical goal representations, with the latter being the predominant encoding. In contrast, LFP signals recorded from the same brain region exhibited purely visual goal encoding. We further unveiled that the observed visual goal encoding was not related to visual memory, but related to movement planning, and that the neurons with visual goal encoding were not in general selective for direct visual input. These findings suggest that motor goal encoding in PPC should neither be seen as a pure precursor of an impending physical motor command, nor as a pure representation of kinematic parameters of the planned movement in the visual space.
Dominant Physical Goal Encoding in Spiking and Visual Goal Encoding in LFP
The current study showed that the majority of PRR neurons encoded the physical motor goals, while a small portion of individual neurons encoded the intended visual motor goal during planning. We did not expect this predominance of physical goal encoding from known results on PRR encoding. Specifically, two lines of research on sustained PPC activity addressed related questions on motor planning, each covering an individual aspect of the current task design. In anti-reach experiments, we had shown previously that PRR activity during the memory period is selective for the future movement parameters, not the visual memory of the instruction (Gail and Andersen 2006; Gail et al. 2009; Westendorff et al. 2010). In a reversing-prism experiment, Fernandez-Ruiz et al. (2007) had shown selectivity for the visual spatial aspects of the pointing task in PPC. Combining the conclusions from these two previous studies, one would have expected predominant visual motor goal representation in PRR neurons. In contrast, our data revealed that more neurons in the current study were actually incompatible with this prediction, while a smaller fraction complied with it. Yet, the previous studies differed in terms of brain signals (fMRI vs. single unit), species (human vs. monkey, with uncertain homologies (Culham et al. 2006; Vesia and Crawford 2012)), type of motor actions involved (hand reaching vs. finger pointing), and training intensities.
In this sense, our data illustrate that caution needs to be exercised when merging findings from human fMRI and monkey single-unit recordings. To narrow the range of possible reasons for the seeming discrepancy between our results and previous fMRI findings (Fernandez-Ruiz et al. 2007), we analyzed LFPs. Since our LFP signals originated from simultaneous recordings from the same electrodes as the single-neuron data, all other parameters (species, brain area, task design, behavior, etc.) were matched. Our LFP results demonstrate that, like fMRI, electrophysiological mass signals cannot capture the important dichotomy seen in the different characteristics of individual neurons (see Fig. 8). More importantly, the LFP results cannot even be seen as a mass average of the single-unit encoding in our data, since the majority of individual neurons was selective for the physical goal, while the LFP was selective for the visual goal. These results support the notion that LFP signals contain independent information which does not necessarily reflect the functional computation based on the spiking activity of a given brain area (Logothetis and Wandell 2004; Logothetis 2008).
Old or New Reference Frame?
The dissociation between visual and physical goal encoding cannot be derived from previous knowledge about spatial reference frames in PPC. Previous reference frame studies described how spatial sensory inputs from different modalities are integrated in a feed-forward manner to form motor goal representations. These studies have shown that PRR encodes the target location or movement endpoint in a predominantly gaze-centered reference frame (Batista et al. 1999; Buneo et al. 2002), or, more precisely, in a compound frame of reference with predominantly gaze- rather than hand-centered encoding (Chang and Snyder 2010; McGuire and Sabes 2011). Moreover, spatial selectivity evolves in response to visual or proprioceptive target instructions (McGuire and Sabes 2011) and independent of immediate visual input at the target location (Gail and Andersen 2006; Hwang and Andersen 2012). The typical reference frame tasks and our reversing-prism task varied independent task dimensions: our task did not vary relative initial eye and hand positions, while reference frame tasks did not dissociate visual and proprioceptive feedback. Therefore, it is unclear whether our visual and physical goal neurons correspond to neurons which encode motor goals in a specific previously established spatial reference frame.
For example, it might seem intuitive at first glance that neurons which show gaze-centered reach goal encoding, that is, a relatively visually oriented representation, should be visual goal neurons rather than physical goal neurons. But a neuron which encodes a movement endpoint in a gaze-centered reference frame, and which is hence not selective for the hand movement vector, does not have to be selective for the visual goal in our task. The reason is that visual goal encoding in our task could still relate to either the desired visual endpoint of the movement relative to the gaze direction (gaze-centered reference frame), or to the desired visual movement vector relative to the hand (hand-centered reference frame). In fact, neurons in ventral premotor cortex were shown to be spatially selective for the visual (rather than physical) movement while, at the same time, these neurons encoded the movement in a hand-centered reference frame (Ochiai et al. 2005), not a gaze-centered reference frame. Correspondingly, encoding in a hand-centered reference frame does not imply physical goal encoding, since physical goal encoding can also relate to either the endpoint of the movement relative to gaze (gaze-centered reference frame), or the vector of the physical movement relative to the starting position of the hand (hand-centered reference frame).
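The orthogonality of the two distinctions can be made concrete with a toy example. The following is a purely illustrative 1-D sketch (not the authors' analysis); it assumes the reversing prism mirrors visual positions about the gaze direction. It shows that the visual-versus-physical distinction (a sign flip of the encoded variable under reversed viewing) arises in both a gaze-centered and a hand-centered frame, so knowing a neuron's reference frame does not determine whether it is a visual or a physical goal neuron:

```python
# Toy 1-D illustration: candidate spatial variables a neuron might encode
# when planning a reach under prism-reversed viewing.
# Assumption (for illustration only): the prism mirrors visual positions
# about the gaze direction, v(x) = 2*gaze - x.

def candidate_variables(target, hand, gaze):
    """Four candidate encoded variables: {visual, physical} x {gaze-, hand-centered}."""
    mirror = lambda x: 2 * gaze - x  # seen position under prism reversal
    return {
        "physical_gaze_centered": target - gaze,                   # endpoint re: gaze
        "physical_hand_centered": target - hand,                   # movement vector
        "visual_gaze_centered":   mirror(target) - gaze,           # seen endpoint re: gaze
        "visual_hand_centered":   mirror(target) - mirror(hand),   # seen movement vector
    }

v = candidate_variables(target=10.0, hand=-5.0, gaze=0.0)

# Under reversal, each visual variable is the sign-flip of its physical
# counterpart, in BOTH reference frames:
assert v["visual_gaze_centered"] == -v["physical_gaze_centered"]
assert v["visual_hand_centered"] == -v["physical_hand_centered"]
```

In this sketch, a "visual goal neuron" is one whose tuning follows the mirrored variables (its preferred physical direction flips under the prism), and this property is independent of whether the neuron's frame is gaze- or hand-centered.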
In summary, our reversing-prism task revealed a continuous spectrum of neural spatial selectivity patterns which complements previously observed forms of spatial encoding in PPC. We speculate that such mixed spatial encoding in PPC is specific to the task context and might serve as a flexible computational (not representational) basis for mediating between different frames of reference, namely those reference frames which are relevant to cope with the current spatial behavioral demands of a task.
Modulation of Spatial Selectivity Strength Under Reversed Vision
Besides the effect of the reversed viewing context on the directionality of the spatial selectivity in individual neurons, the viewing context also had a noticeable impact on the average strength of spatial selectivity (Fig. 7). Reduced spatial selectivity in PPC has been reported as a consequence of spatial incongruence between visual target and reach goal (PRR: Gail et al. 2009), absence of visual feedback (V6A: Bosco et al. 2010; PRR: Hwang and Andersen 2011), and dissociation of visual and physical task space (PEc: Hawkins et al. 2013). Also, reduced spatial selectivity could in principle be related to the behaviorally observed reduced levels of online motor control under reversed vision (Gritsenko and Kalaska 2010; Kuang and Gail 2014).
Yet, our results do not suggest that the reversed visual feedback or viewing context per se reduced neural selectivity in PRR. Instead, we attribute the reduced spatial selectivity to the reduced reach movement amplitudes in the prism viewing context, since movement amplitude correlated well with differences in neural selectivity across the three datasets (Fig. 7 and Supplementary Fig. S3). While amplitude encoding has not been systematically addressed in single-neuron data of PRR, corresponding data from frontal lobe sensorimotor areas suggest at least partial amplitude encoding (Riehle and Requin 1989; Fu et al. 1993; Kurata 1993; Messier and Kalaska 2000). Also, a human fMRI study suggests that directionally selective PPC activity is sensitive to the planned reach amplitude (Fabbri et al. 2012). Finally, inactivating part of PRR in macaques caused mis-reaches to peripheral targets with reduced reach amplitude, a hallmark of optic ataxia (Hwang et al. 2012). Thus, we conclude that the observed systematic average reduction in selectivity strength in PRR during reach planning in the current study is most parsimoniously explained by the difference in reach amplitude across viewing contexts.
Without a confounding effect of reach amplitude, the dissociation of visual and physical goal encoding might have been clearer for many neurons in our datasets. However, the reach amplitude difference cannot explain the main result of visual versus physical goal representations, which was based on the analysis of neural directional selectivity, a measure that was not modulated by reach amplitude. Directional selectivity analyses showed highly consistent results for visual versus physical goal encoding across both monkeys and all datasets (Fig. 5) despite the differing patterns of reach amplitude (Supplementary Fig. S3). Therefore, we can rule out reach amplitude as a potential confound for the visual versus physical goal encoding in our data.
Putative Functional Role and Related Concepts
A speculative implication from the co-encoding of prospective visual and physical movement parameters is that action planning might imply an anticipation of the impending visual and physical sensory feedback of the planned movement. In this view, a bidirectional link between planned action and anticipated visual sensory effect is acquired automatically during early behavioral training (Prinz 1987; Shin et al. 2010). An intended action plan is then selected based on the desired sensory effects of an action, and is initiated by invoking the sensory “images” of the selected action, an idea that dates back to the 19th century (Lotze 1852; James 1890). To fit with this picture, the observed visual goal neurons in PPC would have to be interpreted as neurons encoding the expected visual sensory consequences of the planned movement. Anticipating the visual sensory aspects of a movement plays a role in various concepts of motor cognition, like the ideomotor concept (James 1890; Prinz 1987), motor imagery and mental rehearsal (Crammond 1997; Jeannerod 2001), motor awareness (Haggard 2005; Desmurget and Sirigu 2009), and perceptual stability (Duhamel et al. 1992). Also, the idea of representing future sensory parameters of a movement is reminiscent of internal models in optimal motor control, a function that PPC has previously been associated with during action execution (Mulliken et al. 2008; Shadmehr et al. 2010; Franklin and Wolpert 2011). The intriguing finding here would be that PPC co-encodes the intended physical movement and its associated visual sensory effect already during movement planning, which marks a conceptual difference to the sensory forward predictions during motor control. A major functional relevance of anticipatory encoding of sensory action effects during motor planning lies in its potential to contribute to action selection based on the expected sensory outcome (James 1890; Prinz 1987).
Our results add a new complementary perspective to the current understanding of spatial representations in the PPC of primates. The visual goal neurons observed in our experiment highlight the visual sensory aspect of planned hand movements, independent of preceding visual cue instruction or visual memory. They shed new light on the concept of the motor goal, suggesting that the formation of a motor goal implies not just the preparation of a proper physical motor command and its representation in different spatial reference frames, but also the anticipation of the visuospatial aspects of the future movement. Besides, our finding of distinct encoding properties in single-unit activity and LFP data, recorded simultaneously within the same brain area under the same task conditions, calls for caution when comparing single-neuron spiking activity with electrophysiological or hemodynamic mass signals.
This work was supported by the Federal Ministry for Education and Research (BMBF, Germany, grants 01GQ0814 and 01GQ1005C), the German Research Foundation (DFG, grant SFB-889, and DFG Research Unit GA1475-B2), and the State of Lower Saxony (grant VWZN2563). S.K. acknowledges the Scientific Foundation of the Institute of Psychology, Chinese Academy of Sciences (No. Y3CX112005).
We thank Sina Plümer, Klaus Heisig, and Dirk Prüße for technical support, and Stephanie Westendorff, Opher Donchin, Suresh Krishna, and Axel Lindner for helpful discussions. Conflict of Interest: None declared.