Abstract

In daily life, hand and eye movements occur in different contexts. Hand movements can be made to a visual target shortly after its presentation, or after a longer delay; alternatively, they can be made to a memorized target location. In both instances, the hand can move in a visually structured scene under normal illumination, which allows visual monitoring of its trajectory, or in darkness. Across these conditions, movement can be directed to points in space already foveated, or to extrafoveal ones, thus requiring different forms of eye–hand coordination. The ability to meet the demands of these different contexts probably resides in the high degree of flexibility of the operations that govern cognitive visuomotor behavior. The neurophysiological substrates of these processes include, among others, the context-dependent nature of neural activity, and a transitory, or task-dependent, affiliation of neurons to the assemblies underlying different forms of sensorimotor behavior. Moreover, the ability to make independent or combined eye and hand movements in the appropriate order and time sequence must reside in a process that encodes retinal-, eye- and hand-related inputs in a spatially congruent fashion. This process requires exact knowledge of where the eye and the hand are at any given time, although we have little or no conscious experience of their positions at any instant. How this information is reflected in the activity of cortical neurons remains a central question for understanding the mechanisms underlying the planning of eye–hand movement in the cerebral cortex. In the last 10 years, psychophysical analyses in humans, as well as neurophysiological studies in monkeys, have provided new insights into the mechanisms of different forms of oculo-manual actions. These studies have also offered preliminary hints as to the cortical substrates of eye–hand coordination.
In this review, we will highlight some of the results obtained as well as some of the questions raised, focusing on the role of eye- and hand-tuning signals in cortical neural activity. This choice rests on the crucial role this information exerts in the specification of movement, and coordinate transformation.

Psychophysical Studies

The Problem of Sensorimotor Transformations for Reaching

In planning how to reach for a visual target, the brain transforms information about target location into commands that specify the patterns of muscle activity necessary to bring the hand to the target. Under stationary conditions, the visual direction of a target is mapped topographically on the retina, whereas its distance is defined by both monocular (accommodation, relative size, intensity, perspective, shading, etc.) and binocular cues (retinal stereodisparity and ocular vergence signals). The vector of desired hand movement (motor error) is defined by the difference between target location and hand location. In theory, the motor error could be derived in retinotopic coordinates, but this is often difficult to accomplish. First, orienting the gaze (eye and head) toward the target at variable times prior to and during reaching changes the retinotopic map of the motor error in complex ways, while at the same time contributing important retinal and extra-retinal information, which depends on both proprioception and efference copy of gaze movement. Moreover, although hand position can also be encoded visually, outside the field of view it is defined by proprioception and by efference copy of the motor commands to arm muscles in their intrinsic coordinates. Therefore, it has been hypothesized that, in the process of translating sensory information about target and hand position into the appropriate motor commands, the final position of reaching may be recoded from retinotopic coordinates to other egocentric or allocentric coordinates (Flanders et al., 1992; Gordon et al., 1994; Lacquaniti, 1997).
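In vector terms, the motor error described above is simply the difference between target position and hand position once both are expressed in a common reference frame. A minimal sketch of this computation (the coordinate frame and numerical values are illustrative, not taken from any cited experiment):

```python
import numpy as np

# Target and current hand position, both expressed in the same
# egocentric frame (here, a hypothetical shoulder-centered Cartesian
# frame, in meters).
target = np.array([0.30, 0.10, 0.25])
hand = np.array([0.10, -0.05, 0.20])

# Motor error: the vector the hand must travel to acquire the target.
motor_error = target - hand

# Extent (amplitude) and unit direction of the required movement.
amplitude = np.linalg.norm(motor_error)
direction = motor_error / amplitude
```

The difficulty discussed in the text is precisely that the brain is given no single frame in which both terms of this subtraction are available directly: target position arrives in retinotopic coordinates, whereas hand position is defined largely in proprioceptive, joint-based coordinates.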

A number of psychophysical studies have suggested that a cascade of sensorimotor transformations remap target location from initial retinotopic coordinates to binocular viewer-centered frames, to body- and hand-centered frames (Soechting and Flanders, 1989b; Gordon et al., 1994; McIntyre et al., 1997, 1998). Remapping depends on the combination of retinal and extra-retinal eye signals with somatic signals about the position of body segments to update target and hand spatial representations as the eyes, head, trunk or limb move. How exactly this remapping occurs is still a matter of controversy, but an emerging view is that the frame of reference used to specify the endpoint is task- and context-dependent. Moreover, different spatial dimensions of the endpoint (e.g. direction and distance) probably are not treated in a unitary manner, but are processed in parallel and largely independent of each other according to principles of modular organization.

The Psychophysical Approach

Experimental approaches often rely on the study of the errors made by subjects who point either to a previously visible and memorized target or to a continuously visible virtual target. In both cases, movement corrections based on tactile feedback are avoided, and the underlying frames of reference putatively used by the brain to specify the endpoint may be revealed by the differences in precision between the neural channels that process spatial information independently. There are three types of errors for repeated trials with the same target location: (i) constant errors, representing the systematic deviation (bias) of the mean end point from the target; (ii) variable errors, representing the dispersion (variance) of the individual responses around the mean; and (iii) local distortions, indicating the fidelity with which the relative spatial configuration of the targets is maintained in the configuration of the endpoints (McIntyre et al., 1997, 1998). A careful analysis of all these errors, combined with their mathematical modeling, may reveal the spatial axes of internal representations and coordinate transformations used to specify hand reaches (McIntyre et al., 2000).
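The first two error types can be computed directly from the endpoints of repeated reaches, and the axes of maximum variability emerge from an eigen-decomposition of the endpoint covariance matrix. A sketch with invented endpoint data (values are illustrative only):

```python
import numpy as np

# Invented endpoints (x, y, z in cm) of repeated reaches to one target.
target = np.array([30.0, 10.0, 20.0])
endpoints = np.array([
    [28.5, 10.2, 19.0],
    [29.0, 9.8, 18.5],
    [28.0, 10.5, 19.5],
    [29.5, 9.9, 18.8],
])

# (i) Constant error: systematic bias of the mean endpoint from the target.
constant_error = endpoints.mean(axis=0) - target

# (ii) Variable error: dispersion of the endpoints around their own mean,
# summarized by the eigenstructure of the covariance matrix.
cov = np.cov(endpoints, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# The eigenvector with the largest eigenvalue is the axis of maximum
# variability; comparing its orientation with the sight-line or with
# body axes is what reveals the putative reference frame.
axis_max_var = eigvecs[:, np.argmax(eigvals)]
```

Local distortions (iii) require comparing the relative spatial configuration of several targets with that of their endpoints, and so cannot be computed from a single target's data.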

Viewer-centered Specification of Endpoint

Reaches to a continuously visible virtual target or to a memorized target, performed with visual feedback of hand position, involve systematic undershoots of the radial distance of the target relative to the eyes (Soechting et al., 1990), variability of the endpoints along the sight-line greater than along all other axes (McIntyre et al., 1997; Carrozzo et al., 1999), and local distortion with a contraction of space orthogonal to the sight-line (Carrozzo et al., 1999) (Fig. 1). These results can be found independently of the specific location of the target within the three-dimensional workspace, the hand (left or right) used to perform the movement, the starting position of the hand, and the orientation of the head and eyes during the task (McIntyre et al., 1997). Therefore, one can conclude that, in these conditions, the specification of the final position of the hand occurs in a viewer-centered reference frame (i.e. in terms of direction and distance relative to the eyes). The specific viewer-centered anisotropy of spatial errors depends on the fusion of eye position signals with retinal disparity signals, and on a coupling of uncompensated eye movements, with a finer control of conjugate versus disjunctive eye movements (McIntyre et al., 1997; Carrozzo et al., 1999). This implies that eye and hand information combine into a common egocentric binocular frame of reference, thus making it possible to compare the two sources.

A viewer-centered representation is also supported by the observation of pointing biases when gaze direction is shifted eccentrically relative to the target, either statically or dynamically [i.e. during the memory delay period (Bock, 1986; Enright, 1995; Henriques et al., 1998)]. Moreover, Vetter et al. showed that, when a discrepancy between the actual and visually perceived finger position is introduced at one location in space, subjects learn a visuo-motor remapping over the entire workspace, and this remapping is best captured by a spherical coordinate system (distance, azimuth and elevation) centered between the two eyes (Vetter et al., 1999).

Body-centered Specification of Endpoint

When vision of the hand is prevented, head- and shoulder-centered distortions reshape endpoint distributions: the axes of maximum variability are no longer viewer-centered, and instead are rotated around the body (McIntyre et al., 1998; Carrozzo et al., 1999). These data support the hypothesis of a multiple-stage transformation, from viewer-centered to head- and shoulder-centered coordinates (Flanders et al., 1992; McIntyre et al., 1998). Shoulder-centered coordinates are also used when pointing to kinesthetically perceived targets (Baud-Bovy and Viviani, 1998).

Additional variability along movement direction suggests that target information is combined with hand information to form a hand-centered vectorial plan of the intended movement trajectory as an extent and direction relative to the starting hand position (Gordon et al., 1994).

Allocentric Specification of Endpoint

The evidence reviewed above indicates that, when targets are situated in an otherwise neutral space, endpoint position is specified in egocentric frames of reference relative to some body parts (eyes, head, body, hand). However, when targets are embedded in a geometrically structured space, the visual context can shape pointing errors, and reveal the use of object-centered or allocentric reference frames to represent endpoint position. Thus, the final position of a pointing movement toward a remembered target is biased by the position of a surrounding frame (Bridgeman et al., 1997).

Pointing performance can be affected not only by the presence of overt features in the visual presentation of the target but also by covert relationships that are established by means of cognitive processing. In a recent study, subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a ‘virtual’ line (Carrozzo et al., 2002). Targets were presented briefly, one at a time, in an empty visual field. After a short delay, subjects were required to point to the target location that they remembered. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. The pattern of variable errors revealed both egocentric and allocentric components (Fig. 2), consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent, coexisting representations.

In sum, the frame of reference used to specify the endpoint for reaching is not fixed, but depends on (i) the available sensory information, (ii) the task constraints, (iii) the visual background and (iv) the cognitive context.

Modular Organization

There is growing evidence that, irrespective of the specific frame of reference used to plan the movement, the different spatial parameters of reaching are not treated in a unitary manner, but are processed in parallel and largely independently of each other (Georgopoulos, 1991). Thus, misreaching in direction is distinct from misreaching in distance, the error in the latter being generally much greater than in the former (Soechting and Flanders, 1989a; Gordon et al., 1994), and the information transmitted by the movement being accordingly higher for direction than for distance (Soechting and Flanders, 1989a). Also, chronometric studies have indicated that the central processing time involved in programming direction is longer than that involved in programming distance (Rosenbaum, 1980).

In darkness, increasing the delay (up to 5 s) of memory storage after target disappearance leads to a decay of the body-centered representation, with a separate storage of distance and direction coordinates: distance information decays faster than directional information (McIntyre et al., 1998). This suggests that direction and distance are stored in separate channels in the buffer of visuospatial working memory.

In the hand-centered vectorial scheme of motor planning, extent and direction relative to the starting hand position are specified independently (Gordon et al., 1994). According to this hypothesis, movement extent is determined by linearly scaling a stereotypical bell-shaped velocity profile, movement direction is determined by establishing a reference axis, and movement duration is set by task context (Ghez et al., 2000). Moreover, the extent and direction of movement adapt differentially during motor learning, as shown by the very different time constants and generalization rules for these two parameters (Krakauer et al., 2000).
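The vectorial scheme above lends itself to a compact formulation: one stereotypical velocity template, linearly scaled in amplitude for extent and in time for duration. A sketch using a minimum-jerk-shaped profile as the template (the particular template shape is an assumption chosen for illustration):

```python
import numpy as np

def bell_velocity(amplitude, duration, n=101):
    """Speed profile of a movement of given extent (m) and duration (s),
    obtained by scaling one stereotypical bell-shaped template -- here
    the minimum-jerk speed profile, whose integral over normalized
    time equals 1."""
    t = np.linspace(0.0, 1.0, n)                   # normalized time
    template = 30 * t**2 - 60 * t**3 + 30 * t**4   # integrates to 1
    return template * amplitude / duration         # rescale to m/s

# Two movements in the same direction with the same duration:
# doubling the extent simply doubles the speed at every instant.
v_short = bell_velocity(amplitude=0.10, duration=0.5)
v_long = bell_velocity(amplitude=0.20, duration=0.5)
```

The same-shaped profile at every extent, with peak speed proportional to extent at fixed duration, is the behavioral signature of this linear scaling scheme.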

Eye–Hand Coordination

In fast reaching, the onset of the electromyographic activation for eye, head and arm muscles tends to be synchronous. However, because the inertia of the head is greater than that of the eye, and the inertia of the arm is greater than that of the head, the onset of eye, head and hand movements is sequential, with the eye moving first, followed by the head, and then the hand (Biguer et al., 1982). Therefore, the initial command sent to arm muscles is based on extra-foveal retinal signals, but by the time of onset of the arm movement, the eye has normally landed on the target, and a foveal signal can be used to update the command for arm movement (Prablanc and Martin, 1992). In addition, eye position signals (proprioceptive or efference copy) contribute important information, as shown by the pointing errors induced by experimental deviations of an occluded non-viewing eye (Gauthier et al., 1990). Furthermore, clinical studies suggest a distinction between central (foveal) reaching and peripheral (extra-foveal) reaching (Sakata and Taira, 1994; Battaglia-Mayer and Caminiti, 2002).

The functional coupling between eye and hand movements is bidirectional, as demonstrated by the observation that, when subjects are asked to make rapid pointing movements to successive visual targets, they cannot initiate a saccade to a second target until the hand has reached the first one (Neggers and Bekkering, 2000); the underlying mechanism of coupling is internal and does not depend on vision, since the same phenomenon is observed when vision of the moving arm is prevented (Neggers and Bekkering, 2001). Also, changing the gain of eye pursuit movement results in a change of the subsequent hand movements (van Donkelaar et al., 1994). Studies in both humans (Epelboim et al., 1997) and monkeys (Snyder et al., 2002) indicate that saccades are faster when they occur within a coordinated eye–arm movement. Eye and arm control systems could in principle be fed by a common input drive. Alternatively, one of these two systems could be the master and the other one the slave (Engel et al., 2000). Thus, there are cases in which the direction to which gaze points indicates the target direction for the hand, and other cases in which hand movements affect eye movements.

Studies of humans performing visuomanual tasks in naturalistic settings confirm the role and importance of eye signals as guiding information for hand movement and, therefore, for eye–hand coordination. An elegant study (Land et al., 1999) of the pattern of fixation during a very ‘British’ form of daily activity, tea making, showed that, despite the constellation of targets in the visual scene, the eyes move almost exclusively to the object that is about to be manipulated, their movement leading that of the hand by about half a second; furthermore, the eyes often move to a new location before the manipulation of the first object is completed, and monitor the state of variables important for the success of the task, such as the level of water in the teapot. In a study (Johansson et al., 2001) of eye–hand coordination during object manipulation, which also required obstacle avoidance, the subjects’ gaze guided hand movements by marking landmarks in the workspace used for directing them, and also contributed to predictive control during manipulation.

Neurophysiological Studies

In recent years, neurophysiological studies in behaving monkeys have changed the way we look at the relationship between cortical neural activity and movement. The introduction of multi-task approaches and powerful analytical tools to the study of single neurons has revealed that, in the parieto-frontal network, individual neurons combine and encode different movement signals and parameters.

The Influence of Eye Signals on Cortical Neural Activity

Evidence concerning the influence of eye signals on neural activity in the cerebral cortex came from studies of the dynamic properties of parietal neurons. In area 7a of the monkey, visual fixation neurons fire maximally for a preferred direction of gaze, with a discharge rate changing regularly as a function of the angle of gaze (Sakata et al., 1980). In areas 7a and LIP (Andersen and Mountcastle, 1983; Andersen et al., 1985), the response of individual neurons to visual stimuli is modulated by the position of the eye in the orbit, a phenomenon rarely observed in the absence of attentive fixation. Since the position of the eye modulates the gain of the visual response, this effect was considered to reflect an eye gain field (Andersen et al., 1985; Salinas and Abbott, 1996), and regarded as a prerequisite for a representation of the spatial location of visual targets in head-centered coordinates. Although individual neurons in these areas do not encode target locations in explicit head-centered coordinates, the results of a network simulation (Zipser and Andersen, 1988) showed that such a code could be the result of a population activity. Some cells in areas 7a and LIP have both eye and head gain fields (Brotchie et al., 1995), which are in register. Altogether, these data were regarded as suggesting that visual activity in these parietal areas is modulated by the angle of gaze, and that a population mechanism encodes target location in head-centered coordinates (Andersen et al., 1997). Effects of eye position on visual activity have been observed in other parts of the parietal lobe, such as the parieto-occipital cortex (Galletti et al., 1995).
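The gain-field idea can be stated compactly: the retinotopic tuning of the response keeps its shape, while eye position multiplicatively scales its amplitude, often modeled as a planar (linear) function of horizontal and vertical eye position. A toy model of one such neuron (all parameter values are illustrative, not fitted to recorded cells):

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos):
    """Toy parietal neuron: Gaussian retinotopic tuning of fixed shape,
    multiplicatively scaled by a planar gain in eye-in-orbit position.
    All parameter values below are illustrative."""
    pref = np.array([5.0, 0.0])          # preferred retinal location (deg)
    sigma = 10.0                         # tuning width (deg)
    gain_slope = np.array([0.02, 0.01])  # planar gain field (per deg)

    d2 = np.sum((np.asarray(retinal_pos, float) - pref) ** 2)
    tuning = np.exp(-d2 / (2 * sigma**2))
    gain = 1.0 + float(np.dot(gain_slope, eye_pos))
    return tuning * max(gain, 0.0)       # firing rate cannot go negative

# The same retinal stimulus at two different eye positions: the response
# amplitude changes although the retinal input is identical.
r_left = gain_field_response([5.0, 0.0], [-20.0, 0.0])
r_right = gain_field_response([5.0, 0.0], [20.0, 0.0])
```

It is from populations of such units, with diverse retinal preferences and gain slopes, that a head-centered code can be read out, as in the network simulation cited above.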
A coding scheme in head-centered coordinates, but not dependent on gain mechanisms, is supported by other observations from parieto-occipital cortex (Galletti et al., 1993) and ventral intraparietal area VIP (Duhamel et al., 1997), where selected populations of cells encode the target location in a way that is independent of both the retinal position of the stimulus and the position of the eye in the orbit. The observation that in the superior colliculus (SC), a target of LIP efferent messages, there probably exists a mechanism for the control of both eye and head movements (Freedman et al., 1996; Freedman and Sparks, 1997) highlights the potential relevance of a coding mechanism in head-centered coordinates in area LIP.

Coding of the distance of visual targets mostly depends on binocular cues, such as retinal disparity and vergence. The influence of disparity on neural activity in the cerebral cortex has recently been reviewed in depth by Cumming and DeAngelis (Cumming and DeAngelis, 2001), and will not be treated here. It is worth remembering that neural activity in parietal cortex (area LIP) is modulated by disparity signals (Gnadt and Mays, 1995; Gnadt and Beyer, 1998; Ferraina et al., 2002), which are addressed both to the frontal eye fields (Ferraina et al., 2000, 2002) and to the intermediate layers of the SC (Gnadt and Beyer, 1998; Wurtz et al., 2001; Ferraina et al., 2002). A recent study (Ferraina and Genovesio, 2001) has shown a gain modulation by vergence of the disparity tuning of neurons in LIP of monkeys trained to make saccades to targets in three-dimensional space, thus offering a neural substrate for an egocentric coding of distance in parietal cortex. This information can be available to the network controlling not only eye but also hand movement, as suggested by the observation (Nakamura et al., 2001) that LIP projects to the anterior intraparietal area (AIP), a region corticocortically connected to ventral premotor cortex (area F5), and involved in the control of visually guided hand grasping [for reviews, see (Sakata et al., 1997; Rizzolatti et al., 1998)].
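The geometric basis of vergence as a distance cue is straightforward: for a fixated target on the midline, the two lines of sight converge at an angle determined by the interocular separation, so distance can in principle be recovered from the vergence signal alone. A sketch of this geometry (symmetric fixation and a typical 6.5 cm interocular distance are assumed):

```python
import math

def distance_from_vergence(vergence_deg, interocular_m=0.065):
    """Egocentric distance of a fixated midline target, recovered from
    the vergence angle under symmetric convergence of the two eyes:
    tan(vergence/2) = (interocular/2) / distance."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (interocular_m / 2.0) / math.tan(half_angle)

# Nearer targets require larger vergence angles.
d_near = distance_from_vergence(7.4)  # large vergence -> near target
d_far = distance_from_vergence(1.9)   # small vergence -> far target
```

Because the function is steep at near distances and nearly flat far away, vergence is an informative distance signal mainly within reaching space, which is where it matters for the visuomanual network discussed here.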

The Influence of Hand Movement and Position on Cortical Neural Activity

Tuning Properties of Hand-related Neural Activity

Studies of the relationships between hand movements and neural activity have proposed a vector code of movement direction in primary motor cortex (M1) (Georgopoulos et al., 1982) and in parietal area 5 (Kalaska et al., 1983). This code was then extended to primary somatosensory cortex (area 2) (Cohen et al., 1994) and dorsal premotor cortex (PMd) (Caminiti et al., 1991; Fu et al., 1993), and generalized to hand movements in three-dimensional space (Schwartz et al., 1988; Caminiti et al., 1990, 1991). In frontal and parietal cortex, neural activity varies in an orderly fashion with the direction of hand movement during reaction time and movement time, as well as during the intervening delay time, when an instructed-delay reaching paradigm is used (Johnson et al., 1996). Cell activity is maximal for a preferred direction, and decreases in an orderly fashion for directions further and further away from the preferred one. Preferred directions of the population of neurons tend to be distributed uniformly throughout space. The broad tuning of cortical cells led to the postulate of a population code of movement direction (Georgopoulos et al., 1983, 1988). This has been formulated in vector terms: each neuron contributes a vector in its preferred direction with amplitude proportional to its level of activity (Georgopoulos et al., 1983, 1988). The evolution in time of the length and direction of the population vectors in M1 parallels the corresponding changes in the vector of tangential velocity during reaching (Georgopoulos et al., 1988) and drawing (Schwartz, 1994). The temporal evolution of the population vector also describes the representation of the movement of a visual stimulus in areas MT/MST and that of the concurrent hand movement in M1, during a visuomanual tracking task (Kruse et al., 2002).
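The population-vector construction can be written in a few lines: each cell contributes its preferred-direction vector weighted by its (baseline-subtracted) firing rate, and the contributions are summed across the population. A sketch with a simulated population of cosine-tuned cells (the tuning model and population size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Preferred directions of a simulated population, roughly uniform in 2D.
n_cells = 200
angles = rng.uniform(0, 2 * np.pi, n_cells)
preferred = np.column_stack([np.cos(angles), np.sin(angles)])

def population_vector(movement_dir):
    """Sum of preferred-direction vectors, each weighted by the cell's
    cosine-tuned, baseline-subtracted firing rate for this movement."""
    rates = preferred @ movement_dir       # cosine tuning, baseline removed
    pv = (rates[:, None] * preferred).sum(axis=0)
    return pv / np.linalg.norm(pv)         # unit-length direction estimate

movement = np.array([np.cos(0.3), np.sin(0.3)])  # actual direction (0.3 rad)
estimate = population_vector(movement)
angular_error = np.arccos(np.clip(estimate @ movement, -1.0, 1.0))
```

With broadly tuned cells and an approximately uniform distribution of preferred directions, the summed vector points close to the true movement direction, which is the core claim of the population code.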
A recent study (Scott et al., 2001) questioning the predictive power of the population vector in describing movement direction remains controversial (Georgopoulos, 2002) because of methodological limitations of its design.

Influence of Hand Position on Neural Activity

Together with a vector code of movement direction, a static spatial effect of hand position on frontal and parietal neural activity was described (Georgopoulos et al., 1984). At the end of the reaching movement, monkeys were trained to maintain the hand immobile on different targets in the workspace. Neural activity in both motor (area 4) and parietal (area 5) cortices varied in an orderly fashion with the position of the hand in space. The influence of hand position on neural activity has also been documented in dorsal premotor cortex (PMd) (Caminiti et al., 1991; Crammond and Kalaska, 1996), and in different parietal areas, such as those of the dorsal bank of the intraparietal sulcus (PEa and MIP) (Lacquaniti et al., 1995; Johnson et al., 1996), the medial wall of the parietal lobe (area 7m) (Ferraina et al., 1997a,b), the parieto-occipital junction (V6A and PEc) (Battaglia-Mayer et al., 2000, 2001; Ferraina et al., 2001), and area 7a (personal observations). The effect of hand position signals on neural activity has been characterized in three-dimensional space as well (Kettner et al., 1988; Caminiti et al., 1990, 1991; Lacquaniti et al., 1995). Thus, hand position exerts a profound influence on the neural activity of all areas of the parieto-frontal network studied so far [for a review, see (Battaglia-Mayer et al., 1998); a conceptual framework has been described by Burnod and co-workers (Burnod et al., 1999)].

Coding of Motor Error

Motor error is defined as the difference vector between target and hand location. In order to know the motor error, the amplitude of the hand movement needs to be specified, as well as its direction (Gordon et al., 1994). Fu and co-workers have addressed the problem of whether or not directional activity carries a signal concerning movement amplitude (Fu et al., 1993, 1995). In M1 and PMd, a significant correlation was found between neural activity and both the direction and amplitude of hand movement. For a given direction, neural discharge changes monotonically with movement amplitude. In most neurons, this modulation is significant for only a limited number of directions among those investigated. The modulation of cell activity due to amplitude is not necessarily associated with movements close to the preferred direction.

Neural Activity is Correlated with Multiple Movement Parameters

Information about target location, in addition to movement direction and amplitude, seems to be encoded in neural activity in premotor and motor cortex (Fu et al., 1993). The best correlations with these parameters tend to be ordered in time (Fu et al., 1995): cell firing related to direction of movement tends to occur first, and leads movement onset; neural activity related to target location and movement amplitude emerges later during the trial. Ashe and Georgopoulos have examined the time-varying correlation of neural activity in primary motor cortex and parietal area 5 with different parameters of movement (Ashe and Georgopoulos, 1994). In decreasing order of magnitude, significant correlations were found with the direction of movement, velocity and position. Although the influence of velocity on activity in motor cortex was double that in parietal cortex, the overall contribution of this parameter, as compared to the others, was modest. These results imply a distributed representation of movement parameters in both motor and parietal cortex. The issue of multiparametric control of movement has recently been reviewed (Johnson et al., 2001), with conclusions similar to those expressed in this manuscript.
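Analyses of this kind typically amount to a multiple linear regression of firing rate on the candidate movement parameters, with the size of each fitted coefficient (or its partial R²) indicating that parameter's contribution. A schematic version on synthetic data (all tuning parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic trial-by-trial movement parameters (arbitrary units).
direction = rng.uniform(0, 2 * np.pi, n)
velocity = rng.uniform(5, 50, n)
position = rng.uniform(-10, 10, n)

# Synthetic firing rate dominated by direction, with smaller velocity
# and position contributions -- mimicking the ordering reported above.
rate = (20 * np.cos(direction - 1.0)
        + 0.2 * velocity
        + 0.5 * position
        + rng.normal(0, 2, n))

# Regression: rate ~ cos(direction) + sin(direction) + velocity + position.
X = np.column_stack([np.ones(n), np.cos(direction), np.sin(direction),
                     velocity, position])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

# The two directional coefficients imply the cell's preferred direction.
preferred_dir = np.arctan2(coef[2], coef[1])
```

Fitting cos and sin terms rather than the angle itself keeps the regression linear while respecting the circular nature of direction; this is the standard trick behind cosine-tuning fits.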

Coding in Body-centered Coordinates

The Influence of Arm Position and Posture on the Directional Tuning of Cortical Cells

In motor (Georgopoulos et al., 1982, 1988), premotor (Caminiti et al., 1991) and parietal (Kalaska et al., 1983; Lacquaniti et al., 1995; Crammond and Kalaska, 1996) cortex, neurons modulated by hand position also fire during planning and execution of hand-reaching movements. Elucidating the way arm position modulates reaching-related activity therefore became a necessary step toward understanding the coding scheme for hand movement in the cerebral cortex. A vector code of either abstract movement direction or motor error implies that neural activity should be the same for the same movement performed along parallel paths but starting from different initial positions. This problem was first addressed in motor (Caminiti et al., 1990) and dorsal premotor (Caminiti et al., 1991) cortex. Monkeys performed arm movements in the same direction starting from three different origins within the workspace. The results showed a profound effect of initial hand position on the directional tuning properties of the neuronal population studied. In fact, in both motor and premotor cortices, the overall orientation of the preferred directions of reaching-related neurons, computed during both reaction and movement time, rotated as a function of hand position in space. However, the orientation of the population vectors remained invariant across the workspace. Thus, hand position did not merely exert a gain effect on the activity of reaching-related neurons, as observed for eye position on parietal visual activity, but changed their directional tuning. This influence was modeled as the result of a multiplicative interaction between visually related signals about target location and somatic information about hand position in space (Burnod et al., 1992).
In the same vein, when hand movements in the same direction were made using different arm postures, the level of discharge prior to, during and/or after movement changed in a majority of cells across motor (Scott and Kalaska, 1997), premotor and parietal cortices (Scott et al., 1997). These cells showed a significant change in their relationship to movement direction, reflecting a change in the sharpness of their tuning and/or a change in their directional preference. A further study of the possible interaction between movement direction and the geometrical configuration of the limb and the participating muscles (Kakei et al., 1999) has shown that the directional tuning of wrist muscles for flexion/extension and abduction/adduction depends on forearm pronation/supination. Likewise, in motor cortex the directional tuning of some neurons changes in parallel with the changes in directional tuning of the muscles, which is compatible with a kinetic code. This representation seems to coexist with others, since the activity of a different population of neurons is influenced by changes in posture, which is compatible with a kinematic code of limb geometrical configuration. Finally, the tuning of still other neurons is affected by neither the pattern of muscle activity nor hand configuration, which is compatible with a kinematic code of abstract movement direction. Interestingly, in ventral premotor cortex, a region providing a direct input to motor cortex, directional coding is not affected by posture, suggesting an abstract coding of movement direction (Kakei et al., 2001). Coexistence of different abstract representations of movement has recently been described also in dorsal premotor cortex, where neural activity, during the delay intervals preceding hand movement in a direction that will be specified later during the trial, can simultaneously encode signals relative to two different potential movement directions (Cisek and Kalaska, 2002a).

Relevant to the issue of internal representations is the observation that, during motor adaptation to external force fields (Ray Li et al., 2001), two different populations of neurons can be distinguished in M1: one adapts to the force fields through changes in the directional tuning properties of its cells, while in the second these properties remain invariant across force fields. In this same area (Gribble and Scott, 2002), the ability to represent contexts reflecting different mechanical loads is based on a neural computation in which the same neuronal population encodes overlapping internal models of the physical environment.

Evidence for Coding in Body-centered Coordinates

The relationships between parietal cell activity and reaching to visual targets have been the subject of intensive analysis [for a review, see (Wise et al., 1997)]. Among these studies, one by Lacquaniti et al. adopted an approach based on the results of psychophysical studies in man (Lacquaniti et al., 1995). Lacquaniti et al. reasoned that coding of hand position and movement in parietal cortex resided in a high-order spatial representation based on sensory information, traditionally known to exert a profound influence on parietal cell activity. A combination of signals about the horizontal rotation of both shoulder and elbow joints may provide an estimate of hand azimuth; hand elevation and distance could be similarly derived from combinations of horizontal and vertical rotations of the same joints. Specification of arm coordinates in geotropic space requires information about the vertical reference axis, which could correspond to the body midline. One may note that several neurons in area 5 have receptive fields encompassing both the shoulder and the body midline (Duffy and Burchfiel, 1971; Sakata et al., 1973; Mountcastle et al., 1975). In addition, a potential source for this information is the parieto-insular vestibular cortex (Grusser et al., 1990), acting directly and/or via area 2.

The task used by Lacquaniti et al. (Lacquaniti et al., 1995) was the same as that used by Caminiti et al. (Caminiti et al., 1990, 1991), in that monkeys made reaches of parallel directions, and maintained static hand postures, in three different parts of a three-dimensional workspace. The results showed that dorsal area 5 (area PE) and area PEa of the monkey can be substrates for egocentric representations of hand position and movement, since their neural activity was monotonically tuned in a body-centered reference frame, whose coordinates defined the azimuth, elevation and distance of the hand (Fig. 3). Both shoulder-centered and eye-centered spherical coordinates fitted the neural data.
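The body-centered coordinates used in this analysis can be made concrete with a small sketch. Assuming a shoulder-centered Cartesian frame (x rightward, y straight ahead, z upward; the axis convention, units and numbers here are illustrative, not those of the original study), azimuth, elevation and distance of the hand are obtained as:

```python
import math

def hand_spherical(x, y, z):
    """Convert a shoulder-centered Cartesian hand position (cm) into
    spherical coordinates: azimuth and elevation in degrees, distance
    in cm. Axis convention (an assumption for illustration): x rightward,
    y straight ahead, z upward."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))           # horizontal angle from straight ahead
    elevation = math.degrees(math.asin(z / distance))  # vertical angle above the horizontal plane
    return azimuth, elevation, distance

# A hand position straight ahead and slightly above shoulder level:
az, el, d = hand_spherical(0.0, 30.0, 10.0)
print(round(az, 1), round(el, 1), round(d, 1))  # 0.0 18.4 31.6
```

A monotonic tuning in this frame, as reported for area 5 cells, would then correspond to cell activity varying monotonically with one of these three coordinates.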

Distribution of Spatial Information in Neuronal Ensembles

The nature of the distributions of the tuning properties of neuronal populations in the different nodes of the parieto-frontal network can be useful for understanding distributed representations of movement. The question to be answered is whether, for each movement parameter encoded, the tuning properties of individual neurons within a population cluster around the cardinal axes of a given coordinate system, or whether their distribution is uniform in space.

In dorsal premotor (Caminiti et al., 1990, 1991; Burnod et al., 1992) and primary motor cortex (Georgopoulos et al., 1988; Caminiti et al., 1990, 1991; Burnod et al., 1992), the distribution of preferred directions tends to be uniform in space, as implied by a population code of movement direction by broadly tuned neurons. In area 5 (Lacquaniti et al., 1995) and primary somatosensory cortex (Helms-Tillery et al., 1996), the tuning functions of individual neurons tend to cluster around azimuth, elevation and distance, i.e. around the axes of a spherical coordinate frame, thus defining a positional code in body-centered coordinates. Thus, different populations of neurons encode each cardinal axis, although with substantial overlap among the different ensembles. Since positive and negative spatial coefficients of azimuth, elevation and distance tend to be evenly distributed (Lacquaniti et al., 1995), the overall information about limb position can emerge from a summation of the individual contributions at the population level. The segregation of information in partially different populations of neurons suggests that these spatial parameters of movement might be processed in parallel in parietal cortex. It remains to be determined where, in the distributed system controlling hand movement, reconstruction of limb position occurs (Mountcastle, 1995).
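The population code by broadly tuned neurons can be illustrated with a minimal population-vector sketch in the spirit of Georgopoulos et al.: cosine tuning with uniformly distributed preferred directions (all parameter values are invented for illustration):

```python
import math

def cosine_rate(pd_deg, move_deg, baseline=20.0, depth=15.0):
    """Broadly (cosine) tuned discharge of one cell, maximal when the
    movement is along the cell's preferred direction (PD)."""
    return baseline + depth * math.cos(math.radians(move_deg - pd_deg))

def population_vector(pds_deg, rates, baseline=20.0):
    """Sum each cell's PD unit vector, weighted by its rate change from
    baseline; with uniformly distributed PDs the resultant points along
    the movement direction."""
    x = sum((r - baseline) * math.cos(math.radians(pd)) for pd, r in zip(pds_deg, rates))
    y = sum((r - baseline) * math.sin(math.radians(pd)) for pd, r in zip(pds_deg, rates))
    return math.degrees(math.atan2(y, x)) % 360.0

pds = list(range(0, 360, 45))                 # eight PDs uniformly distributed, as in M1/PMd
rates = [cosine_rate(pd, 120.0) for pd in pds]
print(round(population_vector(pds, rates), 1))  # 120.0
```

The uniform distribution of preferred directions is precisely what makes this simple weighted vector sum an unbiased estimate of movement direction.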

Segregation of different spatial dimensions has been described at other sites of the nervous system (Simpson, 1984; Helms-Tillery et al., 1996), and may favor the matching process between ongoing motor commands and sensory feedback, a mechanism believed to be essential for sensorimotor transformation. Spatial representation based on hybrid combinations of sensory and motor information could be ideally suited to accomplishing this matching process (Carrozzo and Lacquaniti, 1994; Lacquaniti, 1997).

Irrespective of the nature of their distribution, and of their degree of segregation within different populations of neurons, the spatial tuning properties of cortical neurons and the broad relationships of cell activity to them stress the importance of distributed population codes for different parameters of movement in the cerebral cortex.

The Influence of Eye Position on Reaching-related Activity and of Hand Position on Saccadic Activity

Modulation of Reaching-related Activity by Eye Signals

The influence of eye position on reaching-related activity was first found in ventral (Boussaoud et al., 1993), and then in dorsal (Boussaoud, 1995; Boussaoud et al., 1998), premotor cortex. This influence was described in terms of gain modulation [for a review, see (Boussaoud and Bremmer, 1999) and the references therein] on signal-related activity, which is believed to reflect the process of target localization, as well as on hand preparatory- and movement-related activity. On this basis, Boussaoud and Bremmer (Boussaoud and Bremmer, 1999) have argued that the rotation of hand movement-related preferred direction observed in dorsal premotor cortex when changing the origin of movement (Caminiti et al., 1991) did not depend on the changing hand position, but on eye position, as a consequence of the gaze shifts accompanying the hand at the three different starting points in the workspace. This interpretation does not take into account that an eye influence described as a gain signal can only scale a directional tuning curve, not change its orientation. Thus, in premotor cortex, eye position probably operates as a gain that can change the intensity of the relationship between cell activity and direction of movement, not the nature of this relationship, which instead depends on the position of the hand in space. Furthermore, a recent study (Cisek and Kalaska, 2002b) has shown only modest influences of gaze signals in monkey dorsocaudal premotor cortex, where they account for a small fraction of neural activity, consistent with the observation that microstimulation of this part of premotor cortex elicits only limb, and not eye, movements (Fujii et al., 2000).
Therefore, the claim (Boussaoud et al., 1998; Boussaoud and Bremmer, 1999) that coding of reaching in premotor cortex occurs in an eye-centered frame is not substantiated by convincing experimental results, and one can conclude that in this area eye position could at most exert a modest gain modulation on the arm-centered representation of reaching (Caminiti et al., 1991; Burnod et al., 1992).
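The distinction between a gain effect and a change in the nature of the directional relationship can be made explicit in a toy model (a hedged sketch; the cosine tuning and planar eye-position gain field, and all parameter values, are invented for illustration):

```python
import math

def pm_rate(move_deg, eye_deg, pd_deg=90.0, gain_slope=0.01):
    """Directional cosine tuning multiplied by an eye-position gain
    (an illustrative planar gain field): the gain scales the whole
    tuning curve without displacing its peak."""
    gain = 1.0 + gain_slope * eye_deg   # eye position in degrees
    return gain * (20.0 + 15.0 * math.cos(math.radians(move_deg - pd_deg)))

def preferred_direction(eye_deg):
    """Movement direction (sampled in 5 deg steps) yielding the peak rate."""
    return max(range(0, 360, 5), key=lambda d: pm_rate(d, eye_deg))

# The preferred direction is identical at different eye positions;
# only the discharge intensity changes:
print(preferred_direction(-20), preferred_direction(20))      # 90 90
print(round(pm_rate(90, -20), 1), round(pm_rate(90, 20), 1))  # 28.0 42.0
```

In this scheme an eye-position gain modulates how strongly the cell fires for its preferred movement, but the orientation of the tuning curve, and hence the cell's directional relationship to movement, is untouched.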

The influence of eye position on hand reaching-related activity in parietal cortex has been shown in area 7m (Ferraina et al., 1997a), and then documented in other parietal regions such as PEc (Ferraina et al., 2001), V6A (Battaglia-Mayer et al., 2001), and in a so-called Parietal Reach Region (Batista et al., 1999; Snyder et al., 2000), whose exact location remains undetermined, but which should be coextensive with parts of areas MIP, 7m and V6A (Andersen and Buneo, 2002). Batista et al. trained monkeys to make reaching movements in a delayed memory task aimed at assessing the relative influence of eye and hand position on neural activity (Batista et al., 1999). They found that, during the delay interval, cells in a region estimated to correspond to the caudalmost part of the superior parietal lobule show a better correlation of neural activity in an eye- rather than in a limb-centered reference frame. This hypothesis has been reaffirmed by a recent study (Buneo et al., 2002), which supports the contention that during a delayed-reach task, in dorsal area 5, target location is encoded with respect to both the eye and the hand, and is transformed directly between these reference frames. In the Parietal Reach Region, this transformation would result from a vectorial subtraction of hand from target location, both represented in a common eye-centered reference frame, and would be possible thanks to the gain modulation exerted by initial hand position on the eye-centered representation of target location. There are at least three main problems with this interpretation. First, the basic assumption of this hypothesis is that reach plans in superior parietal cortex occur in eye coordinates (Batista et al., 1999; Buneo et al., 2002).
This conclusion is based on analyses of cell activity mostly confined to one behavioral epoch related to preparation for movement, and not on a quantitative evaluation of the tuning properties of parietal neurons and of their comparison across epochs and task conditions. Studies using multitask approaches (Battaglia-Mayer et al., 2000, 2001) have offered a quantitative assessment of the directional tuning functions of superior parietal neurons across a multiplicity of behavioral epochs, requiring different combinations of retinal, eye and hand signals. They have shown that the main feature of reaching-related cells in the superior parietal lobule is the relative invariance of their preferred directions across task epochs and contexts, a phenomenon expressed by their global tuning field (GTF; see below). This common feature of parietal cells calls for coding schemes that, at present, cannot be reconciled with the hypothesis of an eye-centered representation as the unique mechanism by which parietal cells encode reaching (Batista et al., 1999; Buneo et al., 2002; Cohen and Andersen, 2002). In this context, it is worth stressing that an eye-centered representation is equivalent to an object-centered one (Olson and Gettner, 1995, 1999) when the target is fixated. Coding reaching in allocentric coordinates seems more plausible in superior parietal areas. Second, available evidence (Ferraina and Bianchi, 1994; Lacquaniti et al., 1995), based on studies of arm reaching in three-dimensional space, argues against coding of motor error in areas of the superior parietal lobule comprised within the definition of Parietal Reach Region, and in the frontal areas to which they project, such as dorsal premotor and primary motor cortex (Caminiti et al., 1990, 1991), and, as seen before, offers a different solution to this problem. Thus, the ‘direct’ transformation scheme does not provide evidence on how reach distance is coded in parietal cortex.
Finally, this scheme requires a convergence of inputs from neurons of the Parietal Reach Region (V6A, MIP and 7m) into the rostral part of area 5 (area PE), for which there is no anatomical evidence. Although attractive, the ‘direct methods’ (Andersen and Buneo, 2002; Buneo et al., 2002) remain to be reconciled with a large body of anatomical and physiological data concerning the overall organization of the parieto-frontal system, as well as with most current psychophysical studies.
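For clarity, the ‘direct’ transformation under discussion amounts to a vector subtraction. A minimal sketch (the two-dimensional coordinates and all numbers are arbitrary illustrations, not data from the studies cited):

```python
def direct_transform(target_eye, hand_eye):
    """Vectorial subtraction of hand from target location, both expressed
    in a common eye-centered frame, yielding the reach vector (motor
    error) attributed to the Parietal Reach Region by Buneo et al. (2002)."""
    return tuple(t - h for t, h in zip(target_eye, hand_eye))

# Target 12 deg right / 3 deg down of fixation; hand 4 deg right / 8 deg down:
print(direct_transform((12.0, -3.0), (4.0, -8.0)))  # (8.0, 5.0)
```

In the model, the gain modulation exerted by initial hand position on the eye-centered map of target location is what would allow this difference to be read out downstream.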

Hand Position Influences on Eye-movement-related Activity

Before we make hand movements to visual targets, we often scan the scene with our eyes. It is reasonable to assume that, when a plan for combined eye–hand movement is formed, hand position influences the central mechanism responsible for eye movement. In agreement with this hypothesis, the influence of hand position on saccade-related activity has been found in areas V6A and PEc (Battaglia-Mayer et al., 2000, 2001), by using a multi-task approach. Here, cell activity during eye movement differs depending on whether or not the target of saccadic eye movement, and therefore of future eye position, will also be the target of hand movement, and therefore of future hand position (Fig. 4A). In this case, the context-dependent nature of parietal cell activity reveals a predictive potential, by subserving a mechanism that might be used to detect the probability of a future spatial coincidence between the eye and the hand on the target. The tuning properties of parietal cells remain similar across these task conditions.

The Effect of Spatial Certainty about Target Location on Neural Activity

In experimental protocols, hand reaches are often made towards targets whose locations are not known in advance — in other words, in conditions of spatial uncertainty. At other times, reaches are made to visual targets long after they are localized in space (spatial certainty). A reflection of these different contexts is seen in areas V6A and PEc (Battaglia-Mayer et al., 2000, 2001). Here, neural activity during hand movement time (Fig. 4B) differs significantly depending on whether hand reaches are made within a reaction-time paradigm (spatial uncertainty), or within an instructed-delay task, when a visual cue instructs the animal about the location of the target for the future hand movement (spatial certainty).

The Effect of Vision of the Hand on Reaching-related Activity

Reaching-related neurons in parietal areas 7m (Ferraina et al., 1997a), V6A (Battaglia-Mayer et al., 2000) and PEc (Battaglia-Mayer et al., 2001) discharge differently (Fig. 4C) when hand movements are planned and performed in normal light conditions vs. total darkness, as well as when the hand, held immobile at different target locations, is visible or not. In such circumstances, vision of the hand seems to exert a modulatory effect on cell activity, since the orientation of the preferred directions of most parietal neurons remains unchanged or changes very little with or without visual feedback of hand movement trajectory and position in space. This modulation can carry information relevant not only to visual monitoring of hand position and movement, but also about the structure of the visual space, which, as indicated by psychophysical studies, exerts profound influences on the representation of endpoint position.

Eye and Hand Signals Are Combined in Parietal Neural Activity in a Spatially Congruent Fashion. A Substrate for Eye–Hand Coordination?

Many neurophysiological studies suggest that in both frontal and parietal cortex eye and hand position signals dynamically interact to build up the cortical representation of spatial frames of reference. How this interaction can be achieved is suggested by a common feature of parietal neurons. In the superior parietal areas V6A, PEc and 7m, neurons combine retinal-, eye- and hand-related signals in a spatially congruent fashion (Battaglia-Mayer et al., 2000, 2001). When studied via a multi-task approach requiring different forms of eye–hand coordination, parietal activity in these areas is modulated by signals about target location, eye and hand position, and movement direction. For each neuron, the preferred directions relative to these different sources of information cluster within a limited sector of the workspace, referred to as the global tuning field (GTF; Fig. 5), which is characteristic for each cell. The GTF can be regarded as a spatial frame suitable for dynamically recombining directionally congruent eye- and hand-related information, and therefore as the basis for representations of reaching that are invariant when eye and/or hand position changes.

Anatomical studies (Marconi et al., 2001) suggest that the information encoded in the GTF originates from extrastriate, parietal and frontal areas, and that it can be addressed to other parietal areas, and to premotor cortex as well, by virtue of local intraparietal and long parieto-frontal connections (Fig. 6). The composition of motor plans for coordinated eye–hand actions can undergo further and final shaping thanks to re-entrant signaling operated by the fronto-parietal pathway. Thus, parietal cortex can act as a recurrent network where gain mechanisms might select the relative contribution of directional eye and hand signals to neural activity, by weighting them in a flexible way and on the basis of task demands.

Psychophysics, Behavioral Neurophysiology and Network Modeling in the Specification of the Frames of Reference for Eye and Hand Movement

A central theme of studies on visuomotor control has been the search for the neural correlates of the coordinate frames used for eye and hand movements. In this search, studies of motor behavior have served both as theoretical landmarks, and as sources of inspiration for experimental protocols in behavioral neurophysiology. Neurophysiologists have constantly been inspired by the results of psychophysics, while students of this discipline have regarded cellular neurophysiology as a source for a neural foundation of their observations and theories. This has produced a significant improvement both in the definition of the conceptual problems to address, and in the way to address them. Oversimplifications, however, have abounded. Influenced by the achievements of psychophysics, cellular neurophysiologists have interpreted observations often confined to a single cortical area by ‘neglecting’ the fact that behavioral studies in humans describe the global output and/or the intermediate steps of coordinate transformation of the whole brain; on the other hand, aficionados of psychophysics have regarded results obtained from a single or a limited set of cortical areas as validations of neural processes that instead involve distributed networks. Both viewpoints need to be retuned. The idea of capturing the complexity of the process of coordinate transformation between vision and movement just by scrutinizing a few cortical regions is naïve, since it assumes that the operations of these areas represent those of the distributed network underlying eye–hand movements. Furthermore, cortical areas have often been assigned coding schemes based on studies where neural activity has been analyzed in just a single behavioral task, thus ignoring all the potential implicit in context-dependency, which is one important reflection of synaptic integration. 
Coordinate transformations leading from vision to movement have traditionally been described as one way, top-down, serial mechanisms that transform sensory inputs into motor outputs. This view ignores the fact that ‘downstream’ cortical areas, such as premotor and motor cortex, thanks to their cortico-cortical connections, can, at any given instant, influence the early stages of computation in cortical areas traditionally considered ‘upstream’, such as posterior parietal and parieto-occipital cortex.

Over the last five years, neurophysiological studies have instead converged in suggesting that no rigid assignment of given coordinate frames can be made to any cortical area, since cortical representations are dynamic, task-dependent and multiparametric in nature (Johnson et al., 2001). Studies of neural activity in parietal cortex using multitask approaches reveal the emergence of complex spatial representations, where eye and hand signals are combined on the basis of their directional congruence across task conditions. Common to both parietal and frontal cortex is the existence of representations where eye- and hand-related signals have different weights, depending on the cortical region considered. In the process of coordinate transformation, a gradual transition from one representation to another might occur in the parieto-frontal pathway. The gradient-like architecture of cortico-cortical connections, and the continuum of visual-, eye- and hand-related signals in the network, offer the anatomical and physiological substrates for such a transition (Johnson et al., 1996; Matelli et al., 1998; Fujii et al., 2000; Battaglia-Mayer et al., 2001; Marconi et al., 2001). Selection of one frame vs. another, and evolution between frames, might be possible by differently combining and weighting eye and hand signals, probably through gain mechanisms. The reciprocal fronto-parietal path, as a source of hand position and movement-related signals to ‘early’ parietal areas, can contribute to the shaping of forward models of reaching, which are necessary to predict the sensory consequences of motor commands. Thus, cellular studies support the contention of psychophysics about the hybrid (Lacquaniti, 1997), task-dependent (Carrozzo et al., 1999) and probabilistic (Vetter and Wolpert, 2000) nature of these frames, and about their independent coexistence (Carrozzo et al., 2002).

These observations can be relevant to the current debate (Snyder, 2000) on the neural basis of coordinate transformations, and on its modeling (Pouget and Snyder, 2000). One view has been that multiple implicit coordinate systems may be distributed across populations of polymodal neurons, each processing different combinations of movement input signals in different coordinate systems (Andersen et al., 1997). In traditional approaches to spatial representations, object position is represented in maps using one particular frame of reference (Soechting and Flanders, 1992). Multiple frames of reference require multiple maps, and a neuron can only contribute to one frame of reference, specific to its map. Recent approaches (Xing and Andersen, 2000; Andersen and Buneo, 2002) propose that posterior parietal cortex behaves as a hidden layer performing direct transformations, using different reference frames (eye-centered, head-centered, body-centered, etc.). In these models, the hidden units produce features such as gain fields and receptive field shifts that resemble those observed experimentally. These models use non-local training methods, and provide internal representations of all possible combinations of input and output maps; thus, in their present form, they would fail to reproduce certain features of parietal neurons, such as the clustering of preferred directions that leads to the global tuning field.

Pouget and Sejnowski have proposed that cells in the parietal cortex implement basis functions that combine multiple sources of information into a representation that allows simultaneous readout of several frames of reference (Pouget and Sejnowski, 1997). In a basis function map, each neuron contributes to multiple frames of reference, and the tuning of parietal neurons is regarded as a basis function of the product space of different modalities. To allow for statistical inference among inputs, this approach has been extended (Deneve et al., 1999; Pouget et al., 2002) to include attractor dynamics. In principle, this coding scheme is compatible with the hypothesis we outlined above, that target information for the specification of the endpoint can be defined concurrently in different reference frames, resulting in independent, coexisting representations. However, the alignment of eye and hand preferred directions in the GTF is not easily explained by the basis function approach. Such an alignment would lead to an incomplete basis for the representation, although this apparent contrast may be reconciled by the statistics of alignment. If so, basis function models would not contradict the clustering of preferred directions of superior parietal neurons, but would not account for it.
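A single basis-function unit of the kind proposed by Pouget and Sejnowski can be sketched as the product of a Gaussian retinal tuning and a sigmoid of eye position (a schematic sketch; all parameter values are illustrative):

```python
import math

def basis_unit(retinal_deg, eye_deg, pref_ret=10.0, sigma=15.0,
               eye_thresh=0.0, slope=0.1):
    """One basis-function unit: a Gaussian tuning for retinal target
    position multiplied by a sigmoid of eye position. Parameter values
    are invented for illustration."""
    gauss = math.exp(-((retinal_deg - pref_ret) ** 2) / (2 * sigma ** 2))
    sigmoid = 1.0 / (1.0 + math.exp(-slope * (eye_deg - eye_thresh)))
    return gauss * sigmoid

# The same unit supports readout in two frames: the eye-centered location
# is retinal_deg itself, the head-centered one is retinal_deg + eye_deg.
print(round(basis_unit(10.0, 20.0), 3))  # 0.881
```

Because retinal and eye-position information enter multiplicatively, downstream populations can read out either an eye-centered or a head-centered location from the same map, which is the sense in which one neuron contributes to multiple frames of reference.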

Two different network models (Mascaro et al., 2003) have recently been used to explain the dynamic properties of reaching-related neurons in the superior parietal lobule. Both models are based on a Hebbian paradigm, and perform a cross-modal integration of inputs concerning retinal target location, eye position and hand position. In the first one, the interaction of these signals has been regarded as occurring at the afferent level, in a feed-forward fashion. In the second model, instead, it has been assumed that recurrent interactions are responsible for their combination. In both network models, eye and hand preferred directions were represented in relation to each other in local coordinates. Both models account surprisingly well for the experimentally observed GTF of parietal neurons, suggesting that parietal cortex might indeed operate as a Hebbian network. Beyond the parietal cortex, the Hebbian approach favors rather naturally the formation of a synaptic interplay between parietal and premotor cortical neurons, based on the direct association connections linking them. This synaptic structure could relate the parietal representation of incoming signals of different modalities to the premotor assemblies with similar tuning properties in any one of the relevant modalities, as also indicated by the observation that parietal and frontal regions displaying similar activity types are linked by direct cortico-cortical connections (Caminiti et al., 1996, 1998; Johnson et al., 1996; Wise et al., 1997; Battaglia-Mayer et al., 2001). Furthermore, the Hebbian learning process required for these synapses could use the natural correlations occurring between activities in different modalities.
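The learning rule common to both models is the plain Hebbian correlation rule; a minimal sketch of that rule only (not the actual architecture of Mascaro et al., whose models are considerably more elaborate):

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Plain Hebbian rule: each weight grows in proportion to the
    correlation between its pre- and postsynaptic activities.
    w is a list of rows, one per postsynaptic unit."""
    return [[wij + lr * po * pr for wij, pr in zip(row, pre)]
            for row, po in zip(w, post)]

# Correlated eye- and hand-direction inputs (pre) repeatedly driving the
# same parietal unit (post) strengthen both connections together:
w = [[0.0, 0.0]]
for _ in range(100):
    w = hebbian_update(w, pre=[1.0, 1.0], post=[1.0])
print([round(x, 2) for x in w[0]])  # [1.0, 1.0]
```

This is the intuition behind the aligned preferred directions of the GTF: because directionally congruent eye- and hand-related inputs tend to be active together, Hebbian learning wires them onto the same units.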

Although future experimental and theoretical work will be necessary to shed light on the coding mechanisms and coordinate transformation for reaching in the cerebral cortex, current evidence from neurophysiology, neuroanatomy and psychophysics strongly supports the existence of multiple, independent and coexisting levels of representation for combined eye–hand movement in the parieto-frontal network.

Notes

This study was supported by funds from MIUR, the Italian Ministry of Public Health, the Italian Space Agency (ASI), Telethon-Italy, the Italian National Research Council (CNR) and the Commission of the European Communities (DG XII, contract number QLRT-1999-00448).

Address correspondence to email: alexandra.battagliamayer@uniroma1.it; roberto.caminiti@uniroma1.it; lacquaniti@caspur.it; or m.zago@hsantalucia.it.

Figure 1.

Variable errors for pointing to memorized targets, performed with visual feedback of hand position. Point targets (LED) were presented by a robot one at a time in a directionally neutral configuration consisting of eight targets distributed uniformly on the surface of a sphere of 22 mm radius, with a ninth target located at the center. The target cluster could be presented in one of two different workspace regions: one located 15 cm to the left of the subject’s midline, and another one located 15 cm to the right of the midline. The target LED was lit briefly at the selected position, then extinguished and quickly removed by the robot. After a short memory delay, a tone sounded (go signal), indicating that the subject should point to the remembered location of the target. Blue ellipsoids represent the tolerance regions containing 95% of pointing responses for all targets of each cluster. Segments emanating from each ellipsoid indicate the direction of maximum variability of the responses, surrounded by the corresponding 95% confidence cone (in black). Note that the primary axis of variability lies close to the sight-line for both target clusters. Two different views of the results are presented (from the top in A, from the side in B).


Figure 2.

Allocentric components in variable errors. In this experiment the subject pointed to a remembered target whose position was varied randomly from trial to trial, but always fell along a ‘virtual’ line (A). The inset close to the head indicates that the subject expected that the target fell on a straight line lying in a fronto-parallel plane. In (B), two different views of the results are shown. Variable errors are divided by workspace region. Ninety-five percent tolerance ellipsoids were computed for three clusters of neighboring targets. (Modified with permission from Carrozzo et al., 2002.)


Figure 3.

Egocentric coding of target and hand position in area 5 neurons. (A) Monkeys were trained to point to a visual target (violet) by making movements of constant amplitude from one of three different starting points (yellow) in one of eight possible directions. Targets denoted by pairs of numbers (e.g. 2,11) could be reached from two different starting points. Each neuron is best tuned to the changes in one spatial coordinate of the target relative to the body: in (B) to azimuth (activity increases monotonically from right to left); in (C) to distance (the closer the hand, the greater the activity); in (D) to elevation (increasing with downward movements). The wire frames correspond to the three workspaces depicted in (A), with the corners indicating the position of the wrist relative to the monkey at the end of the movement to the corresponding target. Red bars denote the activity averaged during movement time; green bars, the activity predicted by a linear model of final wrist position in shoulder-centered spherical coordinates. (Modified from Lacquaniti et al., 1995.)


Figure 4.

Context-dependency of parietal cell activity. Monkeys made arm and eye movements to visual targets in eight different directions, starting from a common central origin. Cell activity in the superior parietal lobule varied when the same eye (A) or hand (B) movement was made in different behavioral contexts. In (A), the directional tuning curves refer to the activity of a parietal cell when the same eye movement (eye MT) is made with (Delay Reach, gray curve) and without (Saccade, black curve) a subsequent hand movement to the fixation point. Shaded panels represent the behavioral epochs to which the tuning curves refer. In this and the following graphs, the numbers under the abscissa indicate the orientation of the cell’s preferred directions across task conditions. The directional tuning curves shown in (B) compare the activity of another parietal cell when the same hand movement (hand MT) was performed within a reaction-time (Reach, black curve) and a delayed reach (DR, gray curve) task. In the first (R, upper panels), the target location, and therefore the direction of hand movement, could not be predicted by the monkey; in the second (DR, lower panels), the direction of the future hand movement was known, since target location was pre-cued by an instruction signal. (C,D) Directional tuning curves from the Delay Reach (DR) task compare, across light (l) and dark (d) conditions, cell activity during preparation for hand movement (delay time; C), and static holding of both eye and hand on the target (D).


Figure 5.

Global tuning field of parietal neurons. Macaque monkeys made arm and/or eye movements in eight different directions, starting from a common central origin. Preferred directions (PDs, colored arrows) of cell activity were computed during different epochs of the following tasks: a Reaching task to foveal (red) or extrafoveal (blue) targets, a Saccadic eye movement task (yellow), and a Delay Reaching task, performed both under light conditions (light green) and in total darkness (black). The four circles refer to four typical parietal cells and display the orientation of their PD vectors in different task epochs (see below for acronyms). The length of each PD vector is proportional to the cell’s firing rate in a particular task epoch. The radius of the circle is normalized to the vector of maximum length. For each cell, PD vectors cluster within a restricted part of the workspace, referred to as the global tuning field (modified from Battaglia-Mayer et al., 2000, 2001). The abbreviations rt, mt, dt, tht indicate reaction time, movement time, delay time and target holding time, respectively; subscripts e and h stand for eye and hand, respectively. Each acronym is color-coded (see above), depending on the behavioral task.
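The clustering of PD vectors that defines a global tuning field can be quantified, for instance, by the dispersion of the epoch-wise PDs around their circular mean. A minimal sketch, using hypothetical PD angles for one cell (not recorded data):

```python
import math

def circular_mean(angles_deg):
    """Direction (degrees) of the resultant of unit vectors at the given angles."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360.0

def max_deviation(angles_deg):
    """Largest angular distance (degrees) of any PD from the circular mean."""
    m = circular_mean(angles_deg)
    def dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return max(dist(a, m) for a in angles_deg)

# hypothetical PDs of one cell across task epochs (e.g. rt_e, mt_h, dt_h, tht_h)
pds = [100.0, 120.0, 135.0, 110.0]
spread = max_deviation(pds)  # small value: PDs confined to a narrow sector
```

A cell whose PDs remain within such a narrow sector across epochs and tasks would be described as having a global tuning field oriented toward that part of the workspace.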

Figure 6.

The parieto-frontal network underlying reaching in the monkey. Views of the monkey’s brain showing the pattern of cortico-cortical connections linking parietal and frontal areas (A, C), and the anatomical locations and borders of cortical areas (A–D; modified after Caminiti et al., 1996; Picard and Strick, 1996; Galletti et al., 1996; Matelli et al., 1998; Marconi et al., 2001). (B) The medial aspect of the hemisphere with the location of mesial parietal areas, and the cingulate sulcus opened (gray shading) to indicate the cingulate motor areas. In the brain image of (C), large parts of the parietal and occipital lobes have been removed (Galletti et al., 1996) to show the location of the areas buried in the medial bank of the intraparietal sulcus and in the rostral bank of the parieto-occipital sulcus. (D) An enlargement of the parietal region flanking the intraparietal sulcus (IPS), shown as opened to illustrate the location of the areas buried in its medial and lateral banks (Caminiti et al., 1996). The bottom diagram is a schematic representation of the main association connections between parietal and dorsal premotor areas of the frontal cortex. A three-level color scale indicates the strength of connections. PS, AS, CS, IPS, SF, STS, LS, IOS, POS indicate the principal, arcuate, central, intraparietal, Sylvian, superior temporal, lateral, inferior occipital, and parieto-occipital sulci. M1 indicates primary motor cortex; SI, primary somatosensory cortex. CMAr, CMAd, CMAv indicate the rostral, dorsal and ventral cingulate motor areas; MIP, LIP, VIP and AIP indicate the medial, lateral, ventral and anterior intraparietal areas.

All authors contributed equally to the review.

References

Andersen RA, Mountcastle VB (1983) The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci 3:532–548.
Andersen RA, Buneo CA (2002) Intentional maps in posterior parietal cortex. Annu Rev Neurosci 25:189–220.
Andersen RA, Essick GK, Siegel RM (1985) Encoding of spatial location by posterior parietal neurons. Science 230:456–458.
Andersen RA, Snyder LH, Bradley DC, Xing J (1997) Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci 20:303–330.
Ashe J, Georgopoulos AP (1994) Movement parameters and neural activity in motor cortex and area 5. Cereb Cortex 6:590–600.
Batista AP, Buneo CA, Snyder LH, Andersen RA (1999) Reach plans in eye-centered coordinates. Science 285:257–260.
Battaglia-Mayer A, Caminiti R (2002) Optic ataxia as a result of the breakdown of the global tuning fields of parietal neurones. Brain 125:225–237.
Battaglia-Mayer A, Ferraina S, Marconi B, Bullis JB, Lacquaniti F, Burnod Y, Baraduc P, Caminiti R (1998) Early motor influences on visuomotor transformations: a positive image of optic ataxia. Exp Brain Res 123:172–189.
Battaglia-Mayer A, Ferraina S, Mitsuda T, Marconi B, Genovesio A, Onorati P, Lacquaniti F, Caminiti R (2000) Early coding of reaching in the parieto-occipital cortex. J Neurophysiol 83:2374–2391.
Battaglia-Mayer A, Ferraina S, Genovesio A, Marconi B, Squatrito S, Lacquaniti F, Caminiti R (2001) Eye-hand coordination during reaching. II. An analysis of the relationships between visuomanual signals in parietal cortex and parieto-frontal association projections. Cereb Cortex 11:528–544.
Baud-Bovy G, Viviani P (1998) Pointing to kinematic targets in space. J Neurosci 18:1528–1545.
Biguer B, Jeannerod M, Prablanc C (1982) The coordination of eye, head and arm movements during reaching at a single visual target. Exp Brain Res 46:301–304.
Bock O (1986) Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Exp Brain Res 64:476–482.
Boussaoud D (1995) Primate premotor cortex: modulation of preparatory neuronal activity by gaze angle. J Neurophysiol 73:886–890.
Boussaoud D, Bremmer F (1999) Gaze effects in the cerebral cortex: reference frames for space coding and action. Exp Brain Res 128:170–180.
Boussaoud D, Barth TH, Wise SP (1993) Effect of gaze on apparent visual responses of frontal cortex neurons. Exp Brain Res 93:423–434.
Boussaoud D, Jouffrais C, Bremmer F (1998) Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. J Neurophysiol 80:1132–1150.
Bridgeman B, Peery S, Anand S (1997) Interaction of cognitive and sensorimotor maps of visual space. Percept Psychophys 59:456–469.
Brotchie PR, Andersen RA, Snyder LH, Goodman SJ (1995) Head position signals used by parietal neurons to encode locations of visual stimuli. Nature 375:232–235.
Buneo CA, Jarvis MA, Batista AP, Andersen RA (2002) Direct visuomotor transformations for reaching. Nature 416:632–636.
Burnod Y, Otto I, Grandguillaume P, Ferraina S, Johnson PB, Caminiti R (1992) Visuomotor transformations underlying arm movements toward visual targets: a neural network model of cerebral cortical operations. J Neurosci 12:1435–1453.
Burnod Y, Baraduc P, Battaglia-Mayer A, Guigon E, Koechlin E, Ferraina S, Lacquaniti F, Caminiti R (1999) Parieto-frontal operations underlying arm reaching movement to visual targets: an integrated framework. Exp Brain Res 129:325–346.
Caminiti R, Johnson PB, Urbano A (1990) Making arm movements within different parts of space: dynamic mechanisms in the primate motor cortex. J Neurosci 10:2039–2058.
Caminiti R, Johnson PB, Galli C, Ferraina S, Burnod Y (1991) Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci 11:1182–1197.
Caminiti R, Johnson PB, Ferraina S (1996) The source of visual information to the primate frontal lobe: a novel role for the superior parietal lobule. Cereb Cortex 6:319–328.
Caminiti R, Ferraina S, Battaglia-Mayer A (1998) Visuomotor transformations: early cortical mechanisms of reaching. Curr Opin Neurobiol 8:753–761.
Carrozzo M, Lacquaniti F (1994) A hybrid frame of reference for visuomanual coordination. Neuroreport 5:453–456.
Carrozzo M, McIntyre J, Zago M, Lacquaniti F (1999) Viewer-centered and body-centered frames of reference for direct visuomotor transformations. Exp Brain Res 129:201–210.
Carrozzo M, Stratta F, McIntyre J, Lacquaniti F (2002) Cognitive allocentric representations of visual space shape pointing errors. Exp Brain Res 147:426–436.
Cisek P, Kalaska JF (2002) Simultaneous encoding of multiple potential reach directions in dorsal premotor cortex. J Neurophysiol 87:1149–1154.
Cisek P, Kalaska JF (2002) Modest gaze-related discharge modulation in monkey dorsal premotor cortex during a reaching task performed with free fixation. J Neurophysiol 88:1064–1072.
Cohen DAD, Prud’homme MJL, Kalaska JF (1994) Tactile activity in primate primary somatosensory cortex during active arm movements: correlation with receptive field properties. J Neurophysiol 71:161–172.
Cohen YE, Andersen RA (2002) A common reference frame for movement plans in the posterior parietal cortex. Nat Rev Neurosci 3:553–562.
Crammond DJ, Kalaska JF (1996) Differential relation of discharge in primary motor cortex and premotor cortex to movements versus actively maintained postures during a reaching task. Exp Brain Res 108:45–61.
Cumming BG, De Angelis GC (2001) The physiology of stereopsis. Annu Rev Neurosci 24:203–238.
Deneve S, Latham P, Pouget A (1999) Efficient computation and cue integration with noisy population codes. Nat Neurosci 4:826–831.
Duffy FH, Burchfiel JL (1971) Somatosensory system: organizational hierarchy from single units in monkey area 5. Science 172:273–275.
Duhamel J-R, Bremmer F, BenHamed S, Graf W (1997) Spatial invariance of visual receptive fields in parietal cortex neurons. Nature 389:845–848.
Engel KC, Anderson JH, Soechting JF (2000) Similarity in the response of smooth pursuit and manual tracking to a change in the direction of target motion. J Neurophysiol 84:1149–1156.
Enright JT (1995) The non-visual impact of eye orientation on eye-hand coordination. Vision Res 35:1611–1618.
Epelboim J, Steinman RM, Kowler E, Pizlo Z, Erkelens CJ, Collewijn H (1997) Gaze-shift dynamics in two kinds of sequential looking tasks. Vision Res 37:2597–2607.
Ferraina S, Bianchi L (1994) Posterior parietal cortex: functional properties of neurons in area 5 during an instructed-delay reaching task within different parts of space. Exp Brain Res 99:175–178.
Ferraina S, Genovesio A (2001) Saccades to real targets in 3D space: influences of vergence in the lateral intraparietal area. Soc Neurosci Abstr 27:575.3.
Ferraina S, Garasto MR, Battaglia-Mayer A, Ferraresi P, Johnson PB, Lacquaniti F, Caminiti R (1997) Visual control of hand reaching movement: activity in parietal area 7m. Eur J Neurosci 9:1090–1095.
Ferraina S, Johnson PB, Garasto MR, Battaglia-Mayer A, Ercolani L, Bianchi L, Lacquaniti F, Caminiti R (1997) Combination of hand and gaze signals during reaching: activity in parietal area 7m in the monkey. J Neurophysiol 77:1034–1038.
Ferraina S, Paré M, Wurtz RH (2000) Disparity sensitivity of frontal eye field neurons. J Neurophysiol 83:625–629.
Ferraina S, Battaglia-Mayer A, Genovesio A, Marconi B, Onorati P, Caminiti R (2001) Early coding of visuomanual coordination during reaching in parietal area PEc. J Neurophysiol 85:462–465.
Ferraina S, Paré M, Wurtz RH (2002) Comparison of cortico-cortical and cortico-collicular signals for the generation of saccadic eye movements. J Neurophysiol 87:845–858.
Flanders M, Helms-Tillery SI, Soechting JF (1992) Early stages in a sensorimotor transformation. Behav Brain Sci 15:309–362.
Freedman EG, Sparks DL (1997) Activity of cells in the deeper layers of the superior colliculus of the rhesus monkey: evidence for a gaze displacement command. J Neurophysiol 78:1669–1690.
Freedman EG, Stanford TR, Sparks DL (1996) Combined eye-head gaze shifts produced by electrical stimulation of the superior colliculus. J Neurophysiol 76:927–952.
Fu Q-G, Suarez JI, Ebner TJ (1993) Neural specification of direction and distance during reaching movements in the superior precentral motor area and in primary motor cortex of monkeys. J Neurophysiol 70:2097–2116.
Fu Q-G, Flament D, Coltz JD, Ebner TJ (1995) Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. J Neurophysiol 73:836–854.
Fujii N, Mushiake H, Tanji J (2000) Rostrocaudal distinction of the dorsal premotor area based on oculomotor involvement. J Neurophysiol 83:1764–1769.
Galletti C, Battaglini PP, Fattori P (1993) Parietal neurons encoding spatial locations in craniocentric coordinates. Exp Brain Res 96:221–229.
Galletti C, Battaglini PP, Fattori P (1995) Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur J Neurosci 7:2486–2501.
Galletti C, Fattori P, Battaglini PP, Shipp S, Zeki S (1996) Functional demarcation of a border between areas V6 and V6A in the superior parietal gyrus of the macaque monkey. Eur J Neurosci 8:30–52.
Gauthier GM, Nommay D, Vercher JL (1990) The role of ocular muscle proprioception in visual localization of targets. Science 249:58–61.
Georgopoulos AP (1991) Higher order motor control. Annu Rev Neurosci 14:361–377.
Georgopoulos AP (2002) Cognitive motor control: spatial and temporal aspects. Curr Opin Neurobiol 12:678–683.
Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT (1982) On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2:1527–1537.
Georgopoulos AP, Caminiti R, Kalaska JF, Massey JT (1983) Spatial coding of movement. A hypothesis concerning the coding of movement direction by motor cortical populations. In: Neuronal coding of motor performance (Massion J, Paillard J, Schultz W and Wiesendanger M, eds). Exp Brain Res Suppl 7:327–336.
Georgopoulos AP, Caminiti R, Kalaska JF (1984) Static spatial effect in motor cortex and area 5: quantitative relations in 2-D space. Exp Brain Res 54:446–454.
Georgopoulos AP, Kettner RE, Schwartz AB (1988) Primate motor cortex and free arm movements to visual targets in 3-dimensional space. II. Coding of the direction of movement by a neuronal population. J Neurosci 8:2928–2937.
Ghez C, Krakauer JW, Sainburg R, Ghilardi MF (2000) Spatial representations and internal models of limb dynamics in motor learning. In: The new cognitive neurosciences (Gazzaniga MS, ed.), pp. 501–514. Cambridge, MA: MIT Press.
Gnadt JW, Beyer G (1998) Eye movements in depth: what does the monkey’s parietal cortex tell the superior colliculus? Neuroreport 9:233–238.
Gnadt JW, Mays LE (1995) Neurons in monkey parietal LIP are tuned for eye-movement parameters in three-dimensional space. J Neurophysiol 73:280–297.
Gordon J, Ghilardi MF, Ghez C (1994) Accuracy of planar reaching movements. I. Independence of direction and extent variability. Exp Brain Res 99:97–111.
Gribble PL, Scott SH (2002) Overlap of internal models in motor cortex for mechanical loads during reaching. Nature 417:938–941.
Grusser OJ, Pause M, Schreiter U (1990) Localization and responses in the parieto-insular vestibular cortex of awake monkeys (Macaca fascicularis). J Physiol (Lond) 430:537–557.
Helms-Tillery SI, Soechting JF, Ebner TJ (1996) Somatosensory cortical activity in relation to arm posture: nonuniform spatial tuning. J Neurophysiol 76:2426–2438.
Henriques DP, Klier EM, Smith MA, Lowy D, Crawford JD (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594.
Johansson R, Westling G, Bäckström A, Flanagan JR (2001) Eye–hand coordination in object manipulation. J Neurosci 21:6917–6932.
Johnson MTV, Mason CR, Ebner TJ (2001) Central processes for the multiparametric control of arm movements in primates. Curr Opin Neurobiol 11:684–688.
Johnson PB, Ferraina S, Bianchi L, Caminiti R (1996) Cortical networks for visual reaching. Physiological and anatomical organization of frontal and parietal lobe arm regions. Cereb Cortex 6:102–119.
Kakei S, Hoffman DS, Strick P (1999) Muscle and movement representations in the primary motor cortex. Science 285:2136–2139.
Kakei S, Hoffman DS, Strick P (2001) Direction of action is represented in ventral premotor cortex. Nat Neurosci 10:1020–1025.
Kalaska JF, Caminiti R, Georgopoulos AP (1983) Cortical mechanisms related to the direction of two dimensional arm movements: relations in parietal area 5 and comparison with motor cortex. Exp Brain Res 51:247–260.
Kettner RE, Schwartz AB, Georgopoulos AP (1988) Primate motor cortex and free arm movements to visual targets in 3-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins. J Neurosci 8:2938–2947.
Krakauer JW, Pine ZM, Ghilardi MF, Ghez C (2000) Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci 20:8916–8924.
Kruse W, Dannenberg S, Kleiser R, Hoffmann KP (2002) Temporal relation of the population vector activity in visual areas MT/MST and in primary motor cortex during visually guided tracking movements. Cereb Cortex 12:446–476.
Lacquaniti F (1997) Frames of reference in sensorimotor coordination. In: Handbook of neuropsychology (Boller F and Grafman J, eds), pp. 27–64. Amsterdam: Elsevier.
Lacquaniti F, Guigon E, Bianchi L, Ferraina S, Caminiti R (1995) Representing spatial information for limb movement: the role of area 5 in the monkey. Cereb Cortex 5:391–409.
Land M, Mennie N, Rusted J (1999) The role of vision and eye movement in the control of activities of daily living. Perception 28:1311–1328.
Marconi B, Genovesio A, Battaglia-Mayer A, Ferraina S, Squatrito S, Molinari M, Lacquaniti F, Caminiti R (2001) Eye-hand coordination during reaching. I. Anatomical relationships between parietal and frontal cortex. Cereb Cortex 11:513–527.
Mascaro M, Battaglia-Mayer A, Nasi L, Amit DJ, Caminiti R (2003) The eye and the hand: neural mechanisms and network models for oculomanual coordination in parietal cortex. Cereb Cortex (in press).
Matelli M, Govoni P, Galletti C, Kutz D, Luppino G (1998) Superior area 6 afferents from the superior parietal lobule in the macaque monkey. J Comp Neurol 402:327–352.
McIntyre J, Stratta F, Lacquaniti F (1997) Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. J Neurophysiol 78:1601–1618.
McIntyre J, Stratta F, Lacquaniti F (1998) Short-term memory for reaching to visual targets: psychophysical evidence for body-centered reference frames. J Neurosci 18:8423–8435.
McIntyre J, Stratta F, Droulez J, Lacquaniti F (2000) Analysis of pointing errors reveals properties of data representations and coordinate transformations within the central nervous system. Neural Comp 12:2823–2855.
Mountcastle VB (1995) The parietal system and some higher brain functions. Cereb Cortex 5:377–390.
Mountcastle VB, Lynch JC, Georgopoulos AP, Sakata H, Acuña C (1975) Posterior parietal association cortex of the monkey: command functions for operations within extrapersonal space. J Neurophysiol 38:871–908.
Nakamura H, Kuroda T, Wakita M, Kusunoki M, Kato A, Mikami A, Sakata H, Itoh K (2001) From three-dimensional space vision to prehensile hand movements: the lateral intraparietal area links area V3A and the anterior intraparietal area in macaques. J Neurosci 21:8174–8187.
Neggers SF, Bekkering H (2000) Ocular gaze is anchored to the target of an ongoing pointing movement. J Neurophysiol 83:639–651.
Neggers SF, Bekkering H (2001) Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. J Neurophysiol 86:961–970.
Olson CR, Gettner SN (1995) Object-centered direction selectivity in the macaque supplementary eye field. Science 269:985–988.
Olson CR, Gettner SN (1999) Macaque supplementary eye field neurons encode object-centered directions of eye movement regardless of the visual attributes of instructional cues. J Neurophysiol 81:2340–2346.
Picard N, Strick PL (1996) Motor areas of the medial wall: a review of their location and functional activation. Cereb Cortex 6:342–353.
Pouget A, Sejnowski TJ (1997) Spatial transformation in the parietal cortex using basis functions. J Cogn Neurosci 9:222–237.
Pouget A, Snyder LH (2000) Computational approaches to sensorimotor transformations. Nat Neurosci 3:1192–1198.
Pouget A, Deneve S, Duhamel J-R (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci 3:741–747.
Prablanc C, Martin O (1992) Automatic control during hand reaching at undetected two-dimensional target displacements. J Neurophysiol 67:455–469.
Ray Li C-S, Padoa-Schioppa C, Bizzi E (2001) Neural correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field. Neuron 30:593–607.
Rizzolatti G, Luppino G, Matelli M (1998) The organization of the cortical motor system: new concepts. Electroencephalogr Clin Neurophysiol 106:283–296.
Rosenbaum DA (1980) Movement initiation: specification of arm direction and extent. J Exp Psychol Gen 109:444–474.
Sakata H, Taira M (1994) Parietal control of hand action. Curr Opin Neurobiol 4:847–856.
Sakata H, Takaoka Y, Kawarasaki A, Shibutani H (1973) Somatosensory properties of neurons in the superior parietal cortex (area 5) of the rhesus monkey. Brain Res 64:85–102.
Sakata H, Shibutani H, Kawano J (1980) Spatial properties of visual fixation neurons in posterior parietal association cortex of the monkey. J Neurophysiol 43:1654–1672.
Sakata H, Taira M, Kusunoki M, Murata A, Tanaka Y (1997) The parietal association cortex in depth perception and visual control of hand action. Trends Neurosci 8:350–357.
Salinas E, Abbott LF (1996) A model of multiplicative neural responses in parietal cortex. Proc Natl Acad Sci USA 93:11956–11961.
Schwartz AB (1994) Direct cortical representation of drawing. Science 265:540–542.
Schwartz AB, Kettner RE, Georgopoulos AP (1988) Primate motor cortex and free arm movements to visual targets in 3-dimensional space. I. Relations between single cell discharge and direction of movement. J Neurosci 8:2913–2927.
Scott SH, Kalaska JF (1997) Reaching movements with similar hand paths but different arm orientations. I. Activity of individual cells in motor cortex. J Neurophysiol 77:826–852.
Scott SH, Sergio LE, Kalaska JF (1997) Reaching movements with similar hand paths but different arm orientations. II. Activity of individual cells in dorsal premotor cortex and parietal area 5. J Neurophysiol 78:2413–2426.
Scott SH, Gribble PL, Graham KM, Cabel DW (2001) Dissociation between hand motion and population vectors from neural activity in motor cortex. Nature 413:161–165.
Simpson JI (1984) The accessory optic system. Annu Rev Neurosci 7:13–41.
Snyder LH (2000) Coordinate transformations for eye and arm movements in the brain. Curr Opin Neurobiol 10:747–754.
Snyder LH, Batista AP, Andersen RA (2000) Saccade-related activity in the parietal reach region. J Neurophysiol 83:1099–1102.
Snyder LH, Calton JL, Dickinson AR, Lawrence BM (2002) Eye-hand coordination: saccades are faster when accompanied by a coordinated arm movement. J Neurophysiol 87:2279–2286.
Soechting JF, Flanders M (1989) Errors in pointing are due to approximations in sensorimotor transformations. J Neurophysiol 62:595–608.
Soechting JF, Flanders M (1989) Sensorimotor representation for pointing to targets in three-dimensional space. J Neurophysiol 62:582–594.
Soechting JF, Flanders M (1992) Moving in three-dimensional space: frames of reference, vectors, and coordinate systems. Annu Rev Neurosci 15:167–191.
Soechting JF, Helms-Tillery SI, Flanders M (1990) Transformation from head- to shoulder-centered representation of target direction in arm movements. J Cogn Neurosci 2:32–43.
van Donkelaar P, Lee RG, Gellman RS (1994) The contribution of retinal and extraretinal signals to manual tracking movements. Exp Brain Res 99:155–163.
Vetter P, Wolpert D (2000) Context estimation for sensorimotor control. J Neurophysiol 84:1026–1034.
Vetter P, Goodbody SJ, Wolpert D (1999) Evidence for an eye-centered spherical representation of the visuomotor map. J Neurophysiol 81:935–939.
Wise SP, Boussaoud D, Johnson PB, Caminiti R (1997) Premotor and parietal cortex: corticocortical connectivity and combinatorial computations. Annu Rev Neurosci 20:25–42.
Wurtz RH, Sommer MA, Paré M, Ferraina S (2001) Signal transformations from cerebral cortex to superior colliculus for the generation of saccades. Vision Res 41:3399–3412.
Xing J, Andersen RA (2000) Models of posterior parietal cortex which perform multimodal integration and represent space in several coordinate frames. J Cogn Neurosci 12:601–614.
Zipser D, Andersen RA (1988) A back propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature 331:679–684.