Abstract

There is strong experimental evidence that guiding the arm toward a visual target involves an initial vectorial transformation from direction in visual space to direction in motor space. Constraints on this transformation are imposed (i) by the neural codes for incoming information: the desired movement direction is thought to be signalled by populations of broadly tuned neurons and arm position by populations of monotonically tuned neurons; and (ii) by the properties of outgoing information: the actual movement direction results from the collective action of broadly tuned neurons whose preferred directions rotate with the position of the arm. A neural network model is presented that computes the visuomotor mapping, given these constraints. Appropriate operations are learned by the network in an unsupervised fashion through repeated action–perception cycles by recoding the arm-related proprioceptive information. The resulting solution has two interesting properties: (i) the required transformation is executed accurately over a large part of the reaching space, although few positions are actually learned; and (ii) properties of single neurons and populations in the network closely resemble those of neurons and populations in parietal and motor cortical regions. This model thus suggests a realistic scenario for the calculation of coordinate transformations and initial motor command for arm reaching movements.

Introduction

The preparation for an arm movement toward a visual target can be described as a series of sensorimotor coordinate transformations between the retinal position of the target and arm muscle activities. It is generally believed that the first stages of this process involve the computation of a vectorial representation of the movement [hand-to-target vector (Georgopoulos, 1995)]. This vector is then used to calculate a set of motor commands which will move the arm along the desired trajectory. In this framework, one task of neuronal populations in the central nervous system (CNS) is to calculate for each arm position a linear transformation from direction in visual space to direction in motor space (Mel, 1991; Burnod et al., 1992, 1999; Bullock et al., 1993).

Burnod et al. and Bullock et al. (Burnod et al., 1992; Bullock et al., 1993) have shown that a linear superposition of independent solutions of the vectorial visuomotor mapping obtained at each arm position by Hebbian synaptic changes leads to the correct transformation. However, these models rely on the construction of a code in which neurons are tuned to specific postures of the arm (a labelled-line code), which is necessary for Hebbian learning to work appropriately. There are two main difficulties with such an approach. First, models using tabular representations suffer in general from exponential complexity, i.e. the number of necessary neurons increases exponentially as the required precision or the number of degrees of freedom increases (Atkeson, 1989; Olson and Hanson, 1990). Secondly, there is no experimental evidence for a labelled-line representation of arm position. In fact, single unit recordings at different levels of the somatosensory pathway reveal that neuronal discharge is modulated by static limb posture in a monotonic fashion, with saturation at extreme joint angles [reviewed by Helms Tillery et al. (Helms Tillery et al., 1996)].

An alternative method is to use a function-approximating network, i.e. a neural network model which uses some optimization-based learning algorithm to approximate any input–output function. In the present case, basis functions could be constructed as the product of (monotonic) arm position detectors and visual direction detectors (Fig. 1A), and their combinations could be made to converge toward a set of command units through weights that can be adapted by error correction (Pouget and Sejnowski, 1994). Learning can become unsupervised and local if training examples from the desired mapping are actually samples of the inverse mapping (Kuperstein, 1988; Burnod et al., 1992; Bullock et al., 1993; Salinas and Abbott, 1995). The main criticism against this model concerns the locus of adaptation: learning occurs through a synaptic reorganization at the level of motor commands. In fact, studies involving the learning of new visuomotor transformations emphasize the existence of a proprioceptive component in the adaptation process (Redding, 1978; Welch, 1986; Inoue et al., 1997). For instance, optical rotation during a visually guided pointing task induces adaptation accompanied by activation of the postcentral gyrus (Inoue et al., 1997).

These observations inspire a different model (Fig. 1B). The basic principle is to learn a reorganization of the proprioceptive information (recoding) through local activity-dependent synaptic adaptations before combining it with visual information and calculating motor commands. This latter network model is the object of this article for which we present theoretical analysis and computer simulations. We show that (i) the network learns the appropriate transformation over the whole reaching work-space after training at only a few positions; and (ii) discharge properties of neurons in the network, which are by-products of the model architecture and the acquisition of the appropriate transformation, closely resemble those of parietal and motor cortical neurons.

Part of this paper appeared as a conference proceeding (Baraduc et al., 1999).

Materials and Methods

The model consists of a neural network that controls the movement of a planar two-link arm toward visual targets. The network combines information on arm position and target direction to produce motor commands. The mathematical formulation of this vectorial visuomotor transformation is a Jacobian matrix. Populations of neurons in the network compute a distributed representation of this Jacobian. This principle is described in detail in Appendix A (it is recommended that this be read before continuing this section). However, several implementation details differ from this principle, without affecting the behavior of the model. In the following sections, we describe the model of the arm, the coding of input and output information, and neural processing and learning in the network.

Model of the Arm

A planar, two-link right arm, with limited (160°) joint excursion at the shoulder and elbow was used (Fig. 2; in the following figures, unless otherwise specified, the workspace of the arm is set so that full extension of the shoulder and elbow corresponds to the horizontal on the page).

Input and Output Coding

Arm Position

The form of proprioceptive inputs is crucial to the functioning of the network (Appendix A). These inputs must have two properties. The first is monotonicity, which is dictated by experimental observations (Helms Tillery et al., 1996) (see also Introduction). The second is nonlinearity, which is required by nonlinear variations of the Jacobian matrix of the coordinate transformation with arm position. Nonlinearity may not be found in static muscle spindles since a fusimotor drive can adjust the range of receptor sensitivity and avoid saturation, but it is present in somatic neurons in the form of variable recruitment thresholds and saturations (Tanji, 1975; Gardner and Costanzo, 1981) or more complex dependences (Helms Tillery et al., 1996). We have chosen variable-threshold linear saturating functions as a model of proprioceptive input (see below). Other nonlinear functions would be applicable [e.g. a single (lower or upper) saturation]. In this representation of proprioception, some neurons signal a highly restricted range of posture at extreme joint angles. Simulations show that removing these neurons leads to degraded performance at the border of the workspace. Thus, afferents from articular receptors which actually discharge at extreme angular positions (Clark and Burgess, 1975) could provide appropriate information for using full ranges of joint angles.

In the model, limb position was represented by the population activity of Np proprioceptive neurons coding for the lengths of agonist or antagonist muscle at the shoulder and elbow (Fig. 2). To avoid unnecessary complexities, the mechanics of the muscle fiber was likened to a rope-and-pulley system.

Here and in the following, the activity of a model neuron is equated with its mean firing rate. Each of the four muscles was represented by the same number of units (Np/4). The firing of a proprioceptive neuron k (noted pk) was defined by a piecewise linear sigmoid of muscle length [for details, see Baraduc et al. (Baraduc et al., 1999)]. Recruitment thresholds were set so that any muscle stretch was signaled by at least one (moderately) strong activity and so that no neuron (except those recruited at extreme muscle lengths) fired over only a narrow band of lengths.
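As an illustration, the following sketch implements one possible version of such a code; the unit counts, thresholds and slopes are illustrative assumptions, not the parameters used in the simulations.

    import numpy as np

    def proprioceptive_code(muscle_lengths, n_per_muscle=25):
        # Each unit is a piecewise linear sigmoid of the length of one muscle:
        # silent below its recruitment threshold, rising linearly, saturating at 1.
        # Thresholds are staggered so that any stretch recruits at least one unit.
        thresholds = np.linspace(0.0, 0.8, n_per_muscle)
        rise = 0.4  # length range over which a unit climbs from 0 to saturation
        rates = [np.clip((L - thresholds) / rise, 0.0, 1.0) for L in muscle_lengths]
        return np.concatenate(rates)  # Np = 4 * n_per_muscle activities

    # Normalized lengths of the four muscles (shoulder and elbow agonist-antagonist)
    p = proprioceptive_code([0.3, 0.7, 0.5, 0.5])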

Desired Direction of Movement

Psychophysical and electrophysiological studies suggest that arm movement trajectories are initially specified by the direction and amplitude of the hand-to-target vector (Gordon et al., 1994; Georgopoulos, 1995; Vindras and Viviani, 1998). The frame of reference and coordinate system in which the vector is represented are still debated. In the model, movement direction was described in Cartesian coordinates by a unit visual vector V parallel to the hand-to-target vector. The term visual employed here is conventional and gives no information on the origin of the directional signal. The vector V entered the network as a distributed neuronal representation over a set of Nv unit vectors, Vj, uniformly distributed in Cartesian space:

\[v_j = (1 + V_j \cdot V)/2\]

The set of firings vj that encode a vector V will subsequently be termed a cosine population code or, when no ambiguity is possible, a population code.
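For concreteness, a minimal sketch of this encoding (Nv and the target direction are arbitrary choices):

    import numpy as np

    Nv = 24
    phi = 2 * np.pi * np.arange(Nv) / Nv
    Vj = np.stack([np.cos(phi), np.sin(phi)], axis=1)  # uniform unit vectors Vj

    def encode_direction(V):
        # Cosine population code vj = (1 + Vj . V) / 2; the offset keeps rates >= 0
        return (1.0 + Vj @ V) / 2.0

    v = encode_direction(np.array([1.0, 0.0]))  # desired direction along +x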

Representation of desired movement direction as the hand–target vector in a body-centered Cartesian reference frame was motivated by its simplicity but is not essential. In fact, the network implements a generic vectorial coordinate transformation scheme, and the coordinate system of the visual input can be changed (e.g. to an oculocentric code) as long as this input remains vectorial.

Motor Command

Reaching movements are produced by complex coordinated patterns of muscular activity. Descending commands that drive the arm toward a target are elaborated in part by motor cortical circuits (Hoffman and Strick, 1995) and, based on anatomical and physiological arguments, their initial effect can be described as a weighted combination of muscle activations (Schwartz et al., 1988). These activations result in angular displacements, which can be taken as the initial contribution of a command. This functional representation of the motor command relies on the hypothesis that the command system is linear, that is, the resulting effects of commands combine vectorially. We make the assumption that sources of nonlinearity that exist between the command level and arm displacement can be suppressed by dedicated mechanisms (Bullock and Grossberg, 1991).

In the model, commands were emitted by a layer containing Nc neurons, which contributed to the initial direction of movement by a displacement along a direction in joint space. The individual influence of a command neuron is proportional to its discharge level. The collective effect of the layer is 

(1)
\[C = \sum_{i=1}^{N_c} c_i C_i\]
where ci is the activity of command neuron i, Ci is its command direction (CD) in joint space and C is the resulting movement direction in joint space. In general, CDs must be considered as a function of arm position [C = C(P)]. How the CDs depend on posture is determined by the mechanical properties of underlying muscles (e.g. moment arms), and the neuronal properties of supraspinal and spinal circuits. Although experimental data indicate that the mechanical actions of shoulder muscles depend almost linearly on arm posture (Buneo et al., 1997), the full influence of mechanical and neural constraints on the CDs cannot be assessed easily.
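A minimal sketch of equation (1); for simplicity the CDs are taken here as uniform unit vectors in joint space, whereas the simulations derive them from the Jacobian at a reference posture (Appendix B):

    import numpy as np

    Nc = 24
    phi = 2 * np.pi * np.arange(Nc) / Nc
    CDs = np.stack([np.cos(phi), np.sin(phi)], axis=1)  # Ci in {shoulder, elbow} space

    def joint_direction(c):
        # Equation (1): C = sum_i ci * Ci, the collective effect of the layer
        return c @ CDs

    C = joint_direction(np.maximum(0.0, np.cos(phi - np.pi / 4)))  # cosine-tuned activity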

The model authorizes arbitrary variations in command directions with posture, as long as the Jacobian matrix of the transformation (see Appendix A) can be correctly approximated from proprioceptive inputs. The choice of the CDs influences the distribution of directional properties of command neurons (preferred directions; PDs) and the way PDs change with arm position (equation A4). It is thus constrained by experimental observations. First, PDs are uniformly distributed in a central part of the workspace (Caminiti et al., 1990, 1991). Secondly, PDs shift in an orderly fashion with the upper arm (Caminiti et al., 1990, 1991; Sergio and Kalaska, 1997). Thirdly, population vectors calculated at a remote posture deviate from movement direction (Scott and Kalaska, 1995). The use of CDs which are invariant in angular space is appropriate to meet these requirements (see Results).

In the model, the CDs were independent of posture and selected such that the distribution of PDs is uniform for a central position of the arm (Pref, Fig. 2). The method is explained in Appendix B.

Functioning of the Network

The theoretical principle of the model was implemented in the neural network shown in Figure 3. The structure and functioning of the network are described and then compared to the theory.

The network proceeded in three steps. First, a layer of somatic neurons (Nc × Nv) formed a distributed representation of the Jacobian of the visuomotor transform from the activities of the Np proprioceptive neurons. Adjustable feedforward weights Wijk were used to learn the dependence of the Jacobian on arm position. Although full feedforward connectivity leads to accurate performance, a different solution was retained. A fraction q of randomly chosen somatic units received full proprioceptive inputs, and lateral interactions between somatic units were used to compensate for this partial connectivity. This solution presents two benefits: it reduces the number of adjustable synapses and provides resistance to noise (Douglas et al., 1995; Salinas and Abbott, 1996). In the simulations, q = 0.15 and horizontal cosine connections led to correct performance. Theoretical justifications of this choice can be found elsewhere (Baraduc and Guigon, 2001). Activity of somatic units was given by

(2)
\[s_{ij} = g\left(\sum_{k=1}^{N_p} W_{ijk} p_k + \sum_{n=1}^{N_v} l_{jn} s_{in}\right)\]
where the lateral connections are defined by
\[l_{jn} = \cos(2\pi(j - n)/N_v)\]
and the function g(u) takes the positive part of u (g(u) = [u]+, i.e. g(u) = u if u > 0, otherwise g(u) = 0). This function is actually a better model of the neuronal current–frequency transfer characteristic than the classic sigmoid (Baranyi et al., 1993; Schwindt et al., 1997). Equation (2) was evaluated iteratively starting with sij = 0. In theory, the dynamics of this equation are stable and lead to a cosine distribution of activity that stabilizes after two iterations if the feedforward input is removed after the first iteration (Baraduc and Guigon, 2001). Computer simulations show that a permanent proprioceptive input does not alter the beneficial effect of lateral connections.
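The iteration can be sketched as follows; full feedforward connectivity is assumed here for simplicity (the partial connectivity of the simulations would zero the feedforward drive of most units), and the iteration count is illustrative:

    import numpy as np

    def somatic_layer(W, p, n_iter=3):
        # W has shape (Nc, Nv, Np); g(u) = [u]+ is the rectification.
        Nc, Nv, Np = W.shape
        j = np.arange(Nv)
        lat = np.cos(2 * np.pi * (j[:, None] - j[None, :]) / Nv)  # lateral weights l_jn
        ff = W @ p                           # feedforward drive, shape (Nc, Nv)
        s = np.zeros((Nc, Nv))
        for _ in range(n_iter):              # iterative evaluation from s = 0
            s = np.maximum(ff + s @ lat.T, 0.0)
        return s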

In a second step, the activity in the somatic layer was combined with the visual directional information in an Nc × Nv multimodal layer (Fig. 3). The multimodal layer realized a recurrent thresholded additive somatovisual combination, which approximates a multiplication (Salinas and Abbott, 1996) 

(3)
\[m_{ij} = g\left(v_j + s_{ij} + \sum_{n=1}^{N_v} l_{jn} m_{in}\right)\]
Equation (3) was evaluated as described for equation (2). Lateral interactions were restricted to the rows of the multimodal layer since the required operation is the row-by-row multiplication of somatic layer activity by visual information (Salinas and Abbott, 1996).

Third, command neurons summed rows of the multimodal layer 

(4)
\[c_i = g\left(\frac{1}{N_c}\sum_{j=1}^{N_v} m_{ij} - \tau\right)\]
where τ is a fixed threshold. These activities were then used to calculate actual movement direction (equation 1).
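The last two steps can be sketched together, under the same assumptions as above (the value of τ is illustrative); the returned activities feed equation (1):

    import numpy as np

    def multimodal_and_command(s, v, tau=0.1, n_iter=3):
        Nc, Nv = s.shape
        j = np.arange(Nv)
        lat = np.cos(2 * np.pi * (j[:, None] - j[None, :]) / Nv)
        m = np.zeros_like(s)
        for _ in range(n_iter):                           # equation (3)
            m = np.maximum(v[None, :] + s + m @ lat.T, 0.0)
        return np.maximum(m.sum(axis=1) / Nc - tau, 0.0)  # equation (4)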

The network implementation differs from the theory in one respect. Exact multiplication between somatic and visual information was replaced by a pseudo-multiplication. There are two reasons for this: (i) there is no need to invoke cellular or subcellular mechanisms for neuronal multiplication; and (ii) the multimodal layer has interesting and realistic physiological properties which would remain hidden at a dendritic level if sigma-pi neurons (neurons that compute the sum of the products sijvj) were assumed to combine visual and proprioceptive inputs.

Training

The network was trained by correlating its motor commands with the visual effect of the movement (‘motor babbling’) (Kuperstein, 1988; Bullock et al., 1993). For a given starting position of the arm, movements were made in random directions. The directions in joint space (as given by an efferent copy of the command) were then associated with directions in visual space (visual feedback) and the current arm position. The requirement of uniformly distributed training examples in visual space (equation A6) is problematic since these examples are not chosen freely but result from random commands in angular space. This constraint is relieved by replacing equation (A10) with a different learning rule (Baraduc and Guigon, 2001). The learning scheme translates into an algorithm that repeats the following cycle:

  1. Random choice of an initial arm position among five (indicated by stars in Fig. 2). This small number of training positions was decided from initial simulations which showed that larger training sets do not lead to better performance.

  2. Random emission of a motor command corresponding to a random direction in external space. The motor commands were Gaussian distributions of activity over the command layer, with random peak position and variance σc2.

  3. Calculation of the efferent copy of the motor command (ci*), the visual feedback (vj) and the somatic activity (sij). The efferent copy was calculated as 

    \[c_i^* = \sum_q \cos(2\pi(i - q)/N_c)\, c_q\]
    where cq is the motor command. This transform changed the Gaussian distribution of activity into a cosine distribution which is appropriate for learning (a Gaussian activity profile can be used with a slightly more complicated learning rule). Note that ci and ci* actually represent the same command, i.e. the same direction of movement.

  4. Weight modification in the somatic layer, according to the rule 

    (5)
    \[\Delta W_{ij'k} = \eta(c_i^* v_{j'} - s_{ij'}) p_k\]
    where η is a learning rate parameter and j′ = arg maxj vj. At each step, learning occurs only for the most active visual input. This rule can be compared to equation (A10). A possible architecture to implement equation (5) is shown in Figure 3 and is addressed in Discussion.

Training was stopped once the mean absolute error measured at the five positions and for 16 uniformly distributed movement directions stabilized. It took ~20,000 iterations.
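The cycle can be summarized in code. The sketch below reuses the helpers of the previous sketches and assumes a hypothetical function hand_direction() standing in for the arm model plus visual feedback, which the text does not specify; η and the Gaussian width are illustrative.

    import numpy as np

    def training_step(W, p, Vj, eta=0.05, width=3.0):
        Nc, Nv, Np = W.shape
        i = np.arange(Nc)
        # 2. Random Gaussian command over the command layer (circular distance)
        peak = np.random.randint(Nc)
        d = np.minimum(np.abs(i - peak), Nc - np.abs(i - peak))
        c = np.exp(-d ** 2 / (2 * width ** 2))
        # Visual feedback: unit vector of the resulting hand displacement
        V = hand_direction(c)                # hypothetical arm model + vision
        v = (1.0 + Vj @ V) / 2.0
        # 3. Efferent copy: cosine transform of the Gaussian command
        c_star = np.cos(2 * np.pi * (i[:, None] - i[None, :]) / Nc) @ c
        # 4. Equation (5), applied only at the most active visual input j'
        jp = np.argmax(v)
        s = somatic_layer(W, p)
        W[:, jp, :] += eta * np.outer(c_star * v[jp] - s[:, jp], p)
        return W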

Results

Performance

We first tested whether the network solves the coordinate transformation with reasonable accuracy. We calculated the direction of the movements produced by the network for each of 16 uniformly distributed directions in Cartesian space. This was done for 21 starting positions of the arm (recall that one position in space corresponds to only one position of the arm). The results are displayed in Figure 4. The bold arrow shows the initial direction of the hand in response to the desired direction 0°. Accurate learning should result in an isotropic distribution of the arrows (as the desired directions in Cartesian space are isotropic); their deviation from the desired direction can be roughly assessed by judging how horizontal the reference (bold) arrow is.

Except in the extreme limb configurations (near-maximum extension backwards or near-maximum flexion of the two joints), the network solved the problem accurately. Mean directional error over the workspace (arm position was sampled every 2.5 cm for both Cartesian axes) was –0.6 ± 16.8° (mean ± SD); mean absolute error was 10.1°. When restricted to a central zone (dashed), the absolute directional errors and the variability dropped (mean directional error: –1.6 ± 5.4°; mean absolute error: 4.2°). One can observe that movements originating from the left of the workspace (lower left region in Fig. 4) show a counterclockwise bias, whereas the contrary is seen in the right part of the workspace. The errors thus reveal a consistent deviation toward the shoulder.

Global performance depends on the size of the learning set. Learning in only one position was clearly not sufficient to obtain a correct behavior on the whole workspace. However, learning in the 21 test positions did not give the best results. Trying to reduce the errors on an extreme position (such as when the arm is nearly fully extended) deteriorated the performance at the center of the workspace. As movements are generally executed in front of the body, the learning positions were placed in the central zone. Using more than five positions in this zone did not lead to a significant improvement, so this number was retained for all simulations.

The distributed coding of the information in the network was expected to provide robustness to lesion and noise. This was confirmed by random suppression of units or addition of noise to the inner layers. This robustness indicates that performance is not due to the selectivities of critical neurons whose death would be fatal to the operation of the network.

Single Neuron Discharge

We now focus on the firing properties of isolated model neurons. Discharge of the somatic layer units over the workspace was analyzed to understand the nature of the somatic recoding. For multimodal layer units, directional tuning held no surprises, as it derived directly from the properties of the visual layer neurons; we thus only delineate their quasi-multiplicative behavior. In the command layer of the network, directional selectivities were studied for each unit. As in numerous experimental studies, they were computed in Cartesian space through a multilinear regression.

Three vectors can be associated with single command neurons. The first is the preferred direction, defined by the direction of the hand for which the neuron fires maximally. The second vector is the command direction (CD, introduced in Materials and Methods): it is defined as the direction in joint space in which the cell drives the arm. For example, a command neuron equally facilitating the shoulder and elbow flexors will have a CD oriented at 45° in {shoulder, elbow} joint space. Lastly, the same command cell has a direction of action (DA). This DA is the direction in extrinsic Cartesian space in which the cell displaces the hand. This direction obviously depends both on the CD and on arm posture: the effect of an elbow flexion changes with the orientation of the upper arm. It is generally believed that the PD of a neuron is the direction in which this cell alone would produce the hand movement (the DA); however, this need not be true.
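These three vectors can be made concrete for the planar two-link arm. A sketch with illustrative link lengths, using equations (A4) and (A5) of Appendix A:

    import numpy as np

    def J_of_P(theta_s, theta_e, l1=0.3, l2=0.3):
        # J(P) of the text maps a Cartesian direction V to a joint direction C;
        # it is the inverse of the direct-kinematics Jacobian of the two-link arm.
        Jdir = np.array([
            [-l1 * np.sin(theta_s) - l2 * np.sin(theta_s + theta_e),
             -l2 * np.sin(theta_s + theta_e)],
            [l1 * np.cos(theta_s) + l2 * np.cos(theta_s + theta_e),
             l2 * np.cos(theta_s + theta_e)]])
        return np.linalg.inv(Jdir)

    J = J_of_P(np.pi / 4, np.pi / 3)
    Ci = np.array([1.0, 1.0]) / np.sqrt(2)  # CD at 45°: equal shoulder/elbow drive
    PD = J.T @ Ci                           # preferred direction (equation A4)
    DA = np.linalg.inv(J) @ Ci              # direction of action (equation A5)
    # PD and DA coincide only where J(P) J(P)^T is proportional to the identity.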

Proprioceptive Neurons

The properties of proprioceptive neurons are close to those of a fraction of somatic neurons (see below). They will not be described further.

Somatic Neurons

The activity of somatic layer neurons changes in a monotonic fashion with hand position over the whole workspace. It should be emphasized that the model equations do not force the somatic unit to fire monotonically. Indeed, the weights Wijk are not constrained to be positive, and a broad selectivity for a given arm position could potentially emerge. This was, however, never the case.

Six broad classes of neurons can be delineated. Determination was done de visu and is only an attempt to structure a continuum of response properties, as in physiological studies. In the ‘Off-center radial’ (Fig. 5A) as in the ‘Radial’ (Fig. 5B) type, the discharge increased or decreased in concentric rings, roughly centered at the shoulder, or at the position of the elbow when in full extension. The activity was roughly a weighted combination of shoulder and elbow angles. Other units behaved the same way, but were silent in the central part of the workspace and fired only for large flexion (‘Left’, Fig. 5C) or extension (‘Right’, Fig. 5D) of the shoulder. These features reflect the properties of proprioceptive inputs (Table 1).

The last two classes of neurons showed a more complex behavior. Some showed a left–right activity gradient (‘Gradient’, Fig. 5E) while the remaining had no particular property (‘Atypical’, Fig. 5F). In these last two classes, the proprioceptive signal is most strongly transformed: the discharge is not a simple combination of muscle lengths and can vary in a nonlinear fashion with arm position. In these cases, the influence of the lateral connections in the layer is maximal. These types were not actually found in the proprioceptive layer (Table 1).

Changes in discharge frequency with arm position (positional selectivity) are generally quantified by a preferred axis, i.e. the direction of displacement that leads to a posture at which the discharge frequency is maximal (Kalaska et al., 1983; Kettner et al., 1988). We calculated distributions of positional selectivities for the somatic units for comparison with observed distributions in sensorimotor regions. Multiple regression analysis was used to calculate best fitting planes over the workspace (Kalaska et al., 1983). Joint angles were restricted to 20–140° to avoid nonlinear effects at extreme positions and allow comparisons with experimental data which generally concern limited parts of the workspace. Mean R2 over the population was 0.49 (n = 2500). The distribution of positional selectivity, calculated over neurons for which R2 > 0.7 (n = 904, 36%), was bimodal, with a preferred axis along 40–220°.

Multimodal Neurons

Multimodal neurons combine in a pseudo-multiplicative way the activities of somatic and visual neurons. As the somatic units exhibit monotonic firing profiles, the global effect on multimodal neurons is roughly a gain field. Figure 6 illustrates more precisely this pseudo-multiplicative property. The multimodal neuron has the same preferred direction as its visual input neuron, and its peak activity scales linearly with the somatic input, as for an exact multiplication. However, this multimodal interaction does not reduce to a gain effect as an increase in the somatic input leads to a decrease in the multimodal neuron discharge for the non-preferred direction. In other words, the tuning width of such a neuron is not fixed, but depends on arm posture. The visual–proprioceptive interaction in such a multimodal neuron can thus be described as an arm position-dependent modulation of visual selectivity.

Command Neurons

As expected from the theory (equation A3), command neurons were broadly tuned to movement direction in Cartesian space. This is shown for one neuron in Figure 7A. At each of the 21 tested positions, 95% of the neurons were directionally tuned (linear regression; mean R2 = 0.93). Preferred directions rotated with the upper arm, as shown for the same neuron in Figure 7. In this case, the PD shifted clockwise as the arm extended.

The theory also predicts (equations A4 and A5) that the PD of a neuron is in general different from the direction (in Cartesian space) in which the neuron drives the arm (direction of action, DA). This is clearly illustrated for the same neuron in Figure 7B.

The same results were true for the population. For a half-extended elbow, the PDs closely followed the rotation of the shoulder (Table 2). We tested the model on the paradigm used by Sergio and Kalaska (Sergio and Kalaska, 1997). Shift of PDs between a central position (Pcen) and eight peripheral hand locations uniformly distributed over a circle of 8 cm radius was calculated. Shifting was clockwise (mean 9°) for rightward targets and counterclockwise (mean 6°) for leftward targets. These results are qualitatively consistent with those of Sergio and Kalaska (Sergio and Kalaska, 1997).

The angular differences between PDs and DAs in the population ranged between 0 and 72° at central positions and 1 and 165° at extreme positions. The mean angular difference was slightly lower in the central zone (dashed box in Fig. 8) than in the whole workspace (central zone: 20.6°; global mean: 28.1°). A comparison of PD and DA distributions is shown in Figure 8. By definition, the distribution of DAs is uniform in a very central part of the workspace (Fig. 8A). Outside this region, the DAs tended to cluster along a specific axis. An analogous, though noisier, pattern was observed for the PDs (Fig. 8B). The best performance obtained in the central zone was not related to an isotropy of the DAs. The distribution of DAs was anisotropic in 56% of the whole workspace (Rayleigh test on orientations, P < 0.01). This was still true in 40% of the central zone. Interestingly, the PD distribution showed the same global anisotropy (P < 0.01 in 48% of the workspace), but was more uniform than the DAs in the central zone (P < 0.01 in 25% of the zone).

Population Activity

Results obtained in the command layer at the single cell level were collected to obtain a view of the population activity.

Neural Population Vector

A population vector, defined as 

\[\sum_{i=1}^{N_c} c_i PD_i\]
was calculated at each of the 21 arm positions and for the 16 desired directions used for the estimation of performance. The direction of the neural population vector (NPV) was compared with the desired and actual directions of movement at a central arm position (Pcen) and at a remote position (Prem) chosen among the tested positions (Fig. 2). At Pcen (Fig. 9A), the NPV was close to both desired (mean error calculated over the 16 directions: 9.0°, range: 0.02–17.8°) and actual direction of movement (mean: 9.9°, range: 0.12–22.9°). Larger errors were found at Prem (error for NPV-desired direction, mean: 25.2°, range: 1.63–53.2; for NPV-actual direction, mean: 28.1°, range: 1.67–61.3°), as can be seen in Figure 9B. These errors can be explained by a clustering of PDs along a 130° axis. Note that there was only a slight decrease in performance between the two positions. Over the 21 positions, mean discrepancy between the NPV and the actual movement ranged between 4.0 and 40.3°.
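A minimal sketch of the NPV computation and of the angular error measure used here:

    import numpy as np

    def population_vector(c, PDs):
        # NPV = sum_i ci * PDi, with PDs an (Nc, 2) array of preferred directions
        return c @ PDs

    def angular_error(u, w):
        cosang = u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))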

We performed a second analysis based on the idea that different types of neurons (visual, multimodal, command) could be intermingled spatially (Crammond and Kalaska, 1996; Johnson et al., 1996). The population vector was calculated with Nc/2 randomly chosen command neurons and Nc/2 randomly chosen visual and multimodal neurons. In this case, the procedure yields mean errors < 9°. This is not unexpected since the NPV calculated on visual units only is exact.

Movement Direction

Actual movement direction is in general close to the desired direction, while the NPV errs substantially (Fig. 9). The behavior of the NPV is dictated by the distribution of PDs. Errors in the NPV are found whenever the distribution is not uniform (Figs 8A and 9A). These errors do not preclude a correct calculation of movement direction. This is explained for a theoretical case in Figure 10. We generated theoretical distributions of PDs (equation A4) and DAs (equation A5) for a 20-neuron output layer (to obtain a more legible plot). Arm position was chosen to obtain a clear-cut effect. The distributions are shown in Figure 10A,B. Reconstruction of the NPV for a given desired direction fails since individual contributions of neurons along their PDs tend to cluster (Fig. 10C). Contributions along the DAs are also not in the desired direction of movement, but they are organized so as to nullify their effect on an axis orthogonal to this direction (Fig. 10D). In fact, command neurons must fire more to produce movement in directions where the DA vectors are sparse and short. This links PDs and DAs in such a way that the preferred directions cluster along an axis for which DAs are sparse, and thus the PD of a cell is nearly orthogonal to its motor effect (equation B1).

Discussion

The present model deals with calculation of coordinate transformations and motor commands that initially drive a non-redundant arm toward a visual target. Appropriate operations are learned in an unsupervised fashion through repeated action–perception cycles by recoding arm-related proprioceptive information. The resulting solution has two interesting properties: (i) the required transformation is executed accurately over a large part of the reaching space, although few positions are actually learned; and (ii) properties of single neurons and populations closely resemble those of neurons and populations in parietal and motor cortical regions. Before discussing these results, we need to define exactly what the scope of the model is. Our model is concerned with neuronal processes involved in preparatory and early phases of arm reaching movements before occurrence of peripheral feedback. As such, relevant comparisons can be made with single unit recordings during reaction time periods of reaching tasks. Correspondence between layers of the network and brain regions can be made tentatively based on anatomical and physiological arguments.

We also need to explain why the observed properties are truly emergent characteristics of the model and not simple consequences of the specific architecture of the network. Discharge properties of somatic units are a by-product of the acquisition of the vectorial visuomotor transformation. Multimodal units are constrained to perform a multiplication, but their modulation by arm position derives from the somatic activities. Last, the discharge behavior of command units is dictated by the non-trivial relationship between the PDs and CDs.

Comparison with Experimental Data

Performance

The network calculates correct commands over most of the workspace. Poorer performances are found in regions where the transformation is strongly position-dependent. Furthermore, there is a trend for clockwise errors in the right part of the workspace and counterclockwise errors in the left part. This pattern of errors is not a characteristic of the model, but only of the simulations (the theory predicts perfect performances). The errors are due to imperfect approximation of nonlinearly varying coefficients of the Jacobian matrix from quasi-linear proprioceptive inputs, and could be reduced using a more complex proprioceptive code, e.g. multiarticular inputs. It is interesting that misrepresentation of arm position has been invoked to explain similar biases during reaching movements (Ghilardi et al., 1995).

A property of the model is its capacity to learn a global transformation from a restricted training set and thus to generalize appropriately to untrained positions. Where does this property come from? To simplify, suppose that the coefficients of the Jacobian matrix are linear functions of muscle lengths, and all response functions in the network are linear so that reconstruction of a coefficient of the Jacobian matrix from positional inputs following training (equation A8) is linear, whatever the training positions. In fact, knowledge of the Jacobian matrix at a restricted number of positions (i.e. enough to identify the coefficients of the linear relationship) is sufficient to determine how the coefficient of the Jacobian matrix varies with the muscle lengths. Thus the model can learn the appropriate matrix with only a few training positions and exhibits strong generalization capacities. A similar phenomenon occurs in the network, limited by the nonlinearities present at various stages. Thus the generalization comes from the structure of the somatic layer. Its lateral connections that contribute to the accuracy of the population computation play only a minor role in the generalization process per se.

Trying to learn the mapping for each arm posture leads to a decrease in the overall performance. This reveals a destructive interference when learning at the extremes of the arm range. Interestingly, the performance of human subjects follows a similar pattern: training on the right side of the workspace to improve locally the visuomotor mapping leads to a mean increase of the angular error over the workspace (Ghilardi et al., 1995).

Somatic Neurons

The somatic layer builds a new representation of arm configuration from proprioceptive inputs. However, the shape of the somatic receptive fields (sRFs) is defined by constraints of the visuomotor transformation, not by the proprioceptive information (see equation A2). Like the coefficients of the Jacobian matrix, the theoretical sRFs vary in a monotonic way over the workspace: indeed, no broad selectivity for a given arm position emerges in the somatic layer.

Possible cortical regions containing neurons similar to somatic units are primary somatosensory cortex (Gardner and Costanzo, 1981; Cohen et al., 1994; Prud'homme and Kalaska, 1994; Helms Tillery et al., 1996), anterior parietal cortex (Kalaska et al., 1983; Lacquaniti et al., 1995), motor cortex (Kettner et al., 1988; Caminiti et al., 1990) and premotor cortex (Caminiti et al., 1991), but also earlier stages in the somatosensory pathway (Bosco et al., 1996). An observed difference between the cortical areas is related to their sensitivity to changes in arm position. Distributions of positional selectivities are uniform anterior to the central sulcus (Kettner et al., 1988; Scott and Kalaska, 1997), but are biased along an anterior–posterior axis in the somatosensory and parietal regions (Kalaska et al., 1983; Cohen et al., 1994; Prud'homme and Kalaska, 1994; Helms Tillery et al., 1996). Our results showing that the distribution of selectivities in the somatic layer is actually biased along this axis suggest that the somatic layer may be located in anterior regions of the parietal cortex.

A critical test of the model would involve showing that, in monkeys, single neurons modulated by static arm position but unmodulated by visual directional information change their discharge behavior following rotation of the optical display (optical tilt). This is a difficult, but feasible experiment (Wise et al., 1998).

Multimodal Neurons

The multimodal layer contains neurons which are broadly tuned to movement direction and modulated by arm position. The modulation was characterized by an absence of shift in preferred direction and a monotonic effect (gain field) on discharge. This fixed PD is an immediate consequence of multiplying a cosine tuning function (visual) with a monotonic one (somatic) and thus is clearly due to both the preset properties of the model and the emergent features of the somatic layer. Similar neurons have been reported recently in a study of motor cortical cells during wrist movements (Kakei et al., 1999). Our model provides insights into the origin of this discharge behavior and its role in visuomotor processing.

Command Neurons

Command neurons are broadly tuned to movement direction and their PD changes in an orderly fashion with shoulder angle (Caminiti et al., 1990, 1991; Sergio and Kalaska, 1997) [for wrist movements see also (Kakei et al., 1999)]. This is an immediate consequence of the model of command neuron activity. The same principle (combination of nonshifting gain fields and shifting receptive fields to compute coordinate transformations) is found in Salinas and Abbott (Salinas and Abbott, 1995). In fact, our theory of coordinate transformations (Baraduc and Guigon, 2001) is an extension to the multidimensional case of the one-dimensional case developed by Salinas and Abbott (Salinas and Abbott, 1995).

Variations in arm posture not only cause rotation of the PDs but more generally modulate their distribution. There is no direct experimental evidence for this. Scott and Kalaska (Scott and Kalaska, 1997) compared distributions of PDs for a ‘natural’ and an ‘abducted’ posture. They found that both distributions were nonuniform, although it is generally reported that the former is uniform (Schwartz et al., 1988; Caminiti et al., 1990a) [but see (Georgopoulos et al., 1982)]. According to the model, these discrepancies could result from the influence of arm position on PDs.

The model predicts a dissociation between the preferred direction of a unit and its direction of action. This is the motor formulation of the distinction between receptive and projective fields in vision (Lehky and Sejnowski, 1988). Neuronal receptive fields are shaped by the spatial distribution of the afferent synaptic weights. In contrast, their projective fields are shaped by the weight distribution of their efferent projections to another layer of retinotopic neurons. Informally, the projective field characterizes the meaning of the cell's discharge for the downstream layers. Here, similarly, the PD is a property of the receptive field of a command neuron, whereas the DA is the main descriptor of its projective field. The relationship between PDs and DAs is dictated by the choice of the CDs (equations A4 and A5). Accordingly, the two directions need not be the same in general, and are not the same in the present case. In fact, CDs could be crafted to obtain the same distribution of PDs and DAs. However, this would lead to inappropriate patterns of shift in PDs with arm position. There can be only indirect experimental support for our result since directions of action are not easy to measure in vivo (Lemon, 1988). One supporting argument is discussed below in relation to the population vector. More generally, the measured PD distribution could give a hint about the distribution of DAs: one could then check whether the latter is compatible with the known anatomical and physiological properties of the muscles.

Population Vector

Reconstruction of movement direction from neuronal activities relies on the assumption that the neurons contribute to movement along their preferred direction (Georgopoulos et al., 1986), that is, the PDs are the DAs. Since PDs and DAs were not the same in our model, deviations of the NPV from the movement or target direction were observed, particularly at extreme arm postures. We noted previously that an adequate choice of the CDs could equate PDs and DAs and thus remove the errors of the NPV. Using equations (A5) and (A4), we see that the CDs should vary with arm position to satisfy 

\[CC^{T} = \mathbf{J}(P)\mathbf{J}(P)^{T}\]
which has infinitely many solutions. However, it is unclear if some solutions would lead to appropriate shift in PDs.

Scott and Kalaska (Scott and Kalaska, 1995) reported that the population vector of motor cortical neurons calculated at an ‘unnatural’ arm posture deviated from the movement direction whereas the same vector calculated at a ‘natural’ posture was correct. This is an immediate consequence of the nonuniform distribution of PDs (Salinas and Abbott, 1994). This result could also be interpreted in terms of the dissociation between PDs and DAs suggested by the model. However, alternative interpretations would need to be considered before drawing firm conclusions.

Redundancy

Our learning scheme is a case of direct inverse modeling (Jordan and Rumelhart, 1992), that is, the transformation is learned from samples of its inverse. This technique may be unable to find an appropriate solution for a nonlinear one-to-many mapping since the mean of correct outputs is not necessarily a correct output itself [convexity problem (Jordan and Rumelhart, 1992)]. There is no such problem here as the mapping is linear. The difficulty of the redundant case is to find a proprioceptive code which efficiently discriminates arm postures. Simulations show that (i) the representation used here is not sufficient; and (ii) non-linear interactions between afferents from different articulations greatly improve the neural coding of posture and allow the visuomotor transformation for a redundant arm to be learned. Details can be found elsewhere (Baraduc, 1999).

Learning and Locus of Adaptation

The learning rule employed here belongs to the family of error-correction rules. However, it is also an unsupervised rule since there is actually a single source of information for learning [random outputs (Hertz et al., 1991)] and the error term can be computed by the network. The target activity of somatic neurons is the product of visual and command signals, which are available during the training period. A possible biological implementation involves a layer of error units (error layer) that calculate the difference between required and actual somatic activity (Fig. 3). These neurons would be strongly active during early phases of training and would dictate the postsynaptic activity of somatic neurons. Other implementations could also be considered as there are no experimental data to assess the existence of error units. Note that the visual signal can be either the actual visual effect of the command or a predicted effect provided by an internal (forward) model of the direct mapping (C → V). This mapping is a position-dependent linear transformation which can be learned in the same way as the inverse mapping.

Functional Representation of Directional Visuomotor Transformations

A neuroimaging study showed that a region of the superior parietal lobule activates during early exposure to optical tilt whereas the postcentral gyrus is active during late exposure (Inoue et al., 1997). Since the acquisition curve of tilt adaptation is a negatively accelerated exponential (Ebenholtz, 1966), a possible interpretation relates the parietal activity to error reduction (in the error layer) in the rapidly varying phase of the curve and postcentral activation to consolidation of learning. The exact significance of the latter activation is unclear. Inoue et al. (Inoue et al., 1997) trained their subjects for 12 min, even though asymptotic performance during adaptation to optical tilt is reached after 1–2 h (Welch, 1986). Thus the cerebral activation might well reflect an ongoing adaptive process instead of a steady state. This interpretation is consistent with a recoding of postural information in a somatosensory region, as used in the model.

Our model can be compared to an approach relying on basis functions (Poggio, 1990; Pouget and Sejnowski, 1994, 1997; Salinas and Abbott, 1995). A common property is the ability to learn directional visuomotor transformations from a restricted training set. In both cases, the network contains three types of broadly tuned neurons: (i) not modulated by arm position; (ii) modulated by arm position with nonshifting PD; and (iii) modulated by arm position with shifting PD. Dissociation of PDs and DAs is a feature of the two types of models. There is, however, a significant difference between the models. Learning is a reconfiguration of inputs in one case and of outputs in the other. This makes little actual difference as long as a single adaptation is used. Whenever several distortions are applied simultaneously (e.g. optical tilt and prismatic deviation), manipulation of the output layer prevents concomitant adaptation to the perturbations. Alternatively, separate recodings of proprioceptive inputs for directional and positional visual information permit simultaneous compensation for tilt and displacement, as expected from psychophysical studies (Redding, 1975).

Conclusion

The present paper describes a realistic model of coordinate transformations and motor command calculation for arm reaching movements. However, a simplification was adopted which will need to be relieved: desired movement direction was represented as the hand–target vector in a body-centered Cartesian reference frame. A more general model should assume that direction of movement is coded in oculocentric coordinates (Henriques et al., 1998; Batista et al., 1999) and should use eye and head position signals to calculate the appropriate transformation (Burnod et al., 1999). Predictions of such an eye-to-hand model would depend on actual implementation of the transformation. There are at least two possibilities. The first is a direct transfer from eye to hand coordinates. In this case, the network should learn the Jacobian matrix of the whole kinematic chain between the eye and the hand. The problem is formally similar to the original one, though it might be more difficult to solve due to redundancy (see above). The corresponding somatic layer would contain neurons tuned to both arm and eye position, and the preferred direction of command neurons would shift with eye position. The second possibility involves two kinematic chains: eye to a body-centered frame and hand to this frame. First, direct kinematics could be used to build a body-centered representation of movement direction when eye and head are immobile. Next, a network learns to reconstruct a body-centered representation of direction for different oculocentric vectors, eye and head positions. In this case, eye and head signals would not directly influence the command neurons. These models will need to be confronted with experimental results.

Notes

We thank Marc Maier and Pierre Fortier for fruitful discussions, and Pierre Fortier for revising our English.

Appendix A: Mathematical Bases of the Model

Consider a nonredundant multijoint arm (D degrees of freedom). Its inverse kinematic transformation can be written ψ = ϕ(χ), where χ contains the Cartesian coordinates of the arm endpoint and ψ the joint angles. Changes in joint angles following changes in endpoint position are related through the Jacobian of this transformation, dψ = J(ψ)dχ, which can be rewritten

(A1)
\[C = \mathbf{J}(P)V\]
where P, V and C are D-dimensional vectors corresponding to notations of Figure 1B, and J is a D × D matrix. The following derivations explain how equation (A1) can be represented in a neural network. Assuming that vectors C and V are represented by the activity of populations of cosine-tuned neurons, we will show that it is possible to learn to compute the matrix product J(P)V in a population of neurons receiving the input P. This is a simplified account of a general theory which can be found elsewhere (Baraduc and Guigon, 2001).

Consider distributed neuronal representations v (Nv-dimensional vector) and c (Nc-dimensional vector) of V and C defined by their coordinates

\[v_j = V_j \cdot V\]

and

\[c_i = C_i \cdot C\]

where Vj is a set of Nv uniformly distributed vectors in Cartesian space and Ci is a set of Nc uniformly distributed vectors in angular space termed command directions (CDs). Neuronal representations v and c are respectively the firing rates of the visual and command neurons of the network. We note V (resp. C) the matrix of column vectors Vj (resp. Ci). Uniform distribution translates into VVT ∝ I and CCT ∝ I, where I is the D × D identity matrix (Sanger, 1994; Baraduc and Guigon, 2001). For the sake of simplicity, we assume that VVT = I and CCT = I. The vectors v and c are called a cosine population code since their associated vectors V and C can be uniquely recovered as Vv and Cc.
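This recovery can be checked in one line under the simplifying assumptions above. Since v = VTV (the matrix VT applied to the vector V),

\[Vv = VV^{T}V = \mathbf{I}V = V\]

and the same argument gives Cc = C.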

For a given arm configuration P, equation (A1) defines a linear mapping which can be represented in a distributed manner by the Nc × Nv matrix

(A2)
\[\Im(P) = C^{T}\mathbf{J}(P)V\]

i.e. C = J(P)V if and only if c = ℑ(P)v. This is easily verified using VVT = I and CCT = I. In case the CDs are nonuniformly distributed, we define C′ = (CCT)–1C, and C′ is used instead of C in equations (A2), (A3) and (A4).
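One direction of the equivalence, written out (using Vv = V and c = CTC):

\[\Im(P)v = C^{T}\mathbf{J}(P)(Vv) = C^{T}\mathbf{J}(P)V = C^{T}C = c\]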

From the point of view of neural networks, the matrix ℑ(P) can be considered as weights between an input layer (Nv neurons) and an output layer (Nc neurons). From equation (A2), the activity of an output neuron i is

(A3)
\[c_i = C_i^{T}\mathbf{J}(P)V\]
Three attributes are attached to a neuron i of the output layer (Baraduc and Guigon, 2001):

  • the command direction Ci defined above;

  • a preferred direction PDi, defined as the vector of the visual space that maximizes ci, i.e.

    \[PD_i = \mathbf{J}(P)^{T}C_i\]

    In matrix notation, the PDs are the column vectors of

    (A4)
    \[PD = \mathbf{J}(P)^{T}C\]

  • a direction of action DAi, defined as the direction in which the arm moves when command Ci is applied, i.e.

    \[DA_i = \mathbf{J}(P)^{-1}C_i\]

    The DAs are the column vectors of

    (A5)
    \[DA = \mathbf{J}(P)^{-1}C\]

The distributed representation ℑ(P) can be constructed by Hebbian learning from a set of Nex training pairs of unit vectors {Vν, Cν = J(P)Vν} provided that the input examples are uniformly distributed (Baraduc and Guigon, 2001), i.e.

(A6)
\[\sum_{\nu=1}^{N_{ex}} V^{\nu}(V^{\nu})^{T} = N_{ex}\mathbf{I}\]

In this case, the entries of ℑ(P) are

(A7)
\[\Im_{ij}(P) = \sum_{\nu=1}^{N_{ex}} c_i^{\nu} v_j^{\nu}\]

where vjν and ciν are the population codes associated with the training examples.

Since c = ℑ(P)v holds at each arm position P, a general solution at all positions is obtained by considering the matrix ℑ(P) as neuronal activities

(A8)
\[\Im(P) = Wp\]

where p is a neuronal representation of P. The weights W are then adapted so that equation (A8) is true at all arm positions. A learning rule is

(A9)
\[\Delta W_{ijk} \propto \left[\sum_{\nu=1}^{N_{ex}} c_i^{\nu} v_j^{\nu} - \Im_{ij}(P)\right] p_k^{\nu}\]

where pν are the neuronal representations of the arm positions used during training. Equation (A9) is a Widrow–Hoff rule which states that ℑij(P) is made to converge toward its desired value defined by equation (A7). Online learning is possible using a stochastic version of equation (A9):

(A10)
\[\Delta W_{ijk} \propto \left[c_i^{\nu} v_j^{\nu} - \Im_{ij}(P)\right] p_k^{\nu}\]
Note that there is no a priori guarantee that equation (A8) can be made exact for all arm positions. It depends on how complex the changes of the Jacobian matrix with arm position are, and how precisely the arm positions can be discriminated based on their neuronal representations.

In the model presented in this paper, the somatic layer computes the matrix ℑ through the weights W: after learning, we have sij = ℑij. The role of the multimodal layer is to compute an approximation mij of the products ℑijvj. A few details have been modified from the theory presented here. Lateral connections have been added to provide resistance to noise and permit a large saving in terms of adjustable weights, and transfer functions g constrain the firing rates to be positive. These changes from the present mathematical framework do not affect the operation of the network noticeably.

These theoretical derivations are limited here to the case of cosine tuning curves and uniform distributions. The case where the CDs are not uniformly distributed is treated in this appendix; other cases are discussed elsewhere (Baraduc and Guigon, 2001). This theory extends to the case of a redundant arm using the Moore–Penrose inverse of the Jacobian of the direct kinematics in equation (A1) (Baraduc and Guigon, 2001). Again the difficulty is in the mapping defined by equation (A8).

Appendix B: Invariant Command Directions in Angular Space

The CDs of the command layer were chosen to obtain a uniform distribution of PDs for a central position of the arm (Pref, Fig. 2). If U is a 2 × Nc matrix of uniformly distributed unit vectors Ui, then C = JrU, where Jr = J(Pref), guarantees the required property. From C′ = (CCT)–1C, and since UUT = (Nc/2)I, we obtain C′ = (2/Nc)(Jr–1)TU; the scale factor is omitted below, since only directions matter. Thus, at any position P, PD = JTC′ = (Jr–1J)TU, where J = J(P). In particular, the PD distribution is uniform at Pref, where J = Jr and PD = U. Using equation (A5), we obtain the relationship between PDs and DAs

(B1)
\[\mathbf{PD} = (\mathbf{J}_{r}^{-1}\mathbf{J})^{T}(\mathbf{J}_{r}^{-1}\mathbf{J})\,\mathbf{DA}\]
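This derivation can be checked numerically. The sketch below (again assuming the two-link Jacobian used in Appendix A, with arbitrary illustrative reference and test postures) constructs C = JrU, forms C′, and verifies equation (B1) up to the scale factor 2/Nc introduced by C′ = (CCT)–1C.

    import numpy as np

    l1, l2 = 0.3, 0.4
    def J(t1, t2):                          # inverse kinematic Jacobian, as above
        Jf = np.array([[-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
                       [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)]])
        return np.linalg.inv(Jf)

    Nc = 50
    ang = 2*np.pi*np.arange(Nc)/Nc
    U = np.vstack([np.cos(ang), np.sin(ang)])   # uniform unit vectors, U U^T = (Nc/2) I

    Jr = J(1.4, 1.4)                        # assumed reference posture P_ref
    C = Jr @ U                              # CDs chosen as C = J_r U
    Cp = np.linalg.inv(C @ C.T) @ C         # C' = (C C^T)^{-1} C

    Jp = J(0.9, 2.0)                        # arbitrary test posture P
    PD = Jp.T @ Cp                          # PD = J^T C'
    DA = np.linalg.inv(Jp) @ C              # DA = J^{-1} C, eq. (A5)

    A = np.linalg.inv(Jr) @ Jp              # J_r^{-1} J
    print(np.allclose(PD, 2/Nc*(A.T @ A @ DA)))  # eq. (B1), up to the 2/Nc scale: True
    print(np.allclose(Jr.T @ Cp, 2/Nc*U))   # at P_ref the PDs are the uniform set U: True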

Table 1

Classification of proprioceptive and somatic units (values given are %)

Categories: Off-center radial, Radial, Left activated, Right activated, Left–right gradient, Atypical
Proprioceptive units: 50 50
Somatic units: 36 26 14
Table 2

Mean rotation (SD) of command unit PDs with shoulder angle (15–145°) at different elbow positions (in degrees)

Elbow angle (°)   Rotation PD/rotation shoulder   Regression coefficient   Population (% of tuned neurons)
                  0.51 (0.70)                     0.92 (0.12)              88
45                0.75 (0.59)                     0.93 (0.15)              90
75                0.89 (0.44)                     0.98 (0.03)              90
100               0.97 (0.12)                     0.98 (0.03)              86
145               0.67 (0.66)                     0.91 (0.15)              92
Figure 1.

Two network architectures for visuomotor transformation. Arm position information (P) and desired direction of movement (V) are combined to produce a motor command (C). Double lines indicate adaptive pathways. Dashed lines indicate feedback pathways. Thick lines indicate error-correction pathways. (A) Systematic products between arm and direction detectors are calculated (Q = PV). Weighted sums of the products lead to a command (C = WQ). Appropriate weights are obtained by correction of the error between a random command C*, which elicits a directional visual feedback, and the command C calculated by the network from the visual feedback and the current arm position to obtain the same effect as C*. (B) Weighted combinations of arm position detectors (S = WP) and direction detectors are combined (Q = SV) to calculate the command. Weights are calculated by correction of the error between C*V and S. The purpose of this rule is explained in Principle of the Model.

Figure 2.

Geometry of the two-link planar arm. One of the distances between the center of rotation of a joint and a muscle insertion (here, in the elbow flexor) is labeled (dELf). Learning (*) and test (○) positions are marked. A reference position Pref (M) was chosen in front of the subject. A central position Pcen (⋄) and a remote position Prem (△) are shown (see text for explanation).

Figure 3.

Network architecture. Inputs are the arm position P and the desired direction of movement derived from vision V. A somatic layer (S) produces a novel arm representation adapted to the mapping from P through weights Wijk. Then the multimodal layer M combines it with the directional information V. The results of this combination are collected by the command layer (C). In the S and M layers, lateral connections within rows help maintain a consistent population activity. Lines indicate connections between layers. Each unit of P projects to S with a diverging pattern (equation 2; see text). Each unit of V projects to a full column of M with unit weights (equation 3). Each unit of S projects to the corresponding unit of M with a unit weight (equation 3). Each unit of C receives projections from a full row of M with unit weights (equation 4). An extra layer combining efference copy of commands (C*), reafferent visual information (V*) and somatic information (dashed lines) projects to the somatic layer to convey learning-related signals. Parameters were l1 = 0.3, l2 = 0.4, r = 0.03, θmax = 2.8, dSHf = 0.22, dSHe = 0.26, dELf = 0.29, dELe = 0.26, Lmin = 0.25, Lmax = 0.35, Np = 40, Nv = 50, Nc = 50, Nconnex = 380, τ = 0.16, η = 0.001, σc2 = 10, q = 0.15.

Figure 4.

Performance of the network illustrated for 21 starting positions of the arm. Arrows represent the initial direction actually taken by the arm when pointing in 16 equally distributed directions in Cartesian space. Thick arrows correspond to the desired 0° direction. A zone frontal to the subject has been outlined in a dashed rectangle (see text).

Figure 5.

Activity of eight neurons of the somatic layer for hand positions over a part of the workspace depicted at the bottom: Off-center radial (A), Radial (B), Left (C), Right (D), Gradient (E), Atypical (F). The discharge level is represented in shades of gray (arbitrary units; black = maximum discharge).

Figure 6.

Activity of a multimodal neuron as a function of its somatic input and the desired direction of movement. The neuron's PD is 180°.

Figure 7.

(A) Activity of an output unit at three different positions (a, b, c in B). (B) Difference between preferred direction for the same unit (PD, solid arrows) and direction in which it drives the arm (DA, dashed arrows) for the 21 test positions.

Figure 8.

(A) DA distribution in the workspace. (B) PD distribution in the workspace. Bars are graduated according to the number of neurons per 20° sector. The dashed box outlines the central region (see text).

Figure 9.

Comparison between desired direction of movement, actual movement and population vector for two arm configurations Pcen (A) and Prem (B). Results for directions at 90° are enlarged.

Figure 10.

Theoretical reconstruction of movement direction based on 20 neurons at a non-central arm position (145°, 30°). Movement direction is 30°. (A) Distribution of PDs. (B) Distribution of DAs. (C) Population vector (thick line), desired movement direction (dashed line) and individual contributions of the neurons to the NPV (arrows). (D) Actual movement direction (thick line). Arrows: individual contributions of the neurons to the movement.

References

Atkeson C (1989) Learning arm kinematics and dynamics. Annu Rev Neurosci 12:157–183.
Baraduc P (1999) Modèle neuronal des transformations de coordonnées. Contrôle visiomoteur par recodage de la proprioception. PhD thesis, EHESS-Université Paris VI, Paris.
Baraduc P, Guigon E (2001) Population computation of vectorial transformations. Preprint (http://www.snv.jussieu.fr/guigon/cosine.pdf).
Baraduc P, Guigon E, Burnod Y (1999) Where does the population vector of motor cortical cells point during arm reaching movements? In: Advances in neural information processing systems, Vol. 11 (Kearns M, Solla S, Cohn D, eds), pp. 83–89. Cambridge, MA: MIT Press (http://www.snv.jussieu.fr/guigon/nips99.pdf).
Baranyi A, Szente M, Woody C (1993) Electrophysiological characterization of different types of neurons recorded in vivo in the motor cortex of the cat. II. Membrane parameters, action potentials, current-induced voltage responses and electrotonic structures. J Neurophysiol 69:1865–1879.
Batista A, Buneo C, Snyder L, Andersen R (1999) Reach plans in eye-centered coordinates. Science 285:257–260.
Bosco G, Rankin A, Poppele R (1996) Representation of passive hindlimb postures in cat spinocerebellar activity. J Neurophysiol 76:715–726.
Bullock D, Grossberg S (1991) Adaptive neural networks for control of movement trajectories invariant under speed and force rescaling. Hum Mov Sci 10:3–53.
Bullock D, Grossberg S, Guenther F (1993) A self-organizing neural model of motor equivalent reaching and tool use by a multijoint arm. J Cogn Neurosci 5:408–435.
Buneo C, Soechting J, Flanders M (1997) Postural dependence of muscle actions: implications for neural control. J Neurosci 17:2128–2142.
Burnod Y, Baraduc P, Battaglia-Mayer A, Guigon E, Koechlin E, Ferraina S, Lacquaniti F, Caminiti R (1999) Parieto-frontal coding of reaching: an integrated framework. Exp Brain Res 129:325–346.
Burnod Y, Grandguillaume P, Otto I, Ferraina S, Johnson P, Caminiti R (1992) Visuo-motor transformations underlying arm movements toward visual targets: a neural network model of cerebral cortical operations. J Neurosci 12:1435–1453.
Caminiti R, Johnson P, Galli C, Ferraina S, Burnod Y (1991) Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci 11:1182–1197.
Caminiti R, Johnson P, Urbano A (1990) Making arm movements within different parts of space: dynamic aspects in the primate motor cortex. J Neurosci 10:2039–2058.
Clark F, Burgess P (1975) Slowly adapting receptors in cat knee joint: can they signal joint angle? J Neurophysiol 38:1448–1463.
Cohen D, Prud'homme M, Kalaska J (1994) Tactile activity in primate primary somatosensory cortex during active arm movements: correlation with receptive field properties. J Neurophysiol 71:161–172.
Crammond D, Kalaska J (1996) Differential relation of discharge in primary motor cortex and premotor cortex to movements versus actively maintained postures during a reaching task. Exp Brain Res 108:45–61.
Douglas R, Koch C, Mahowald M, Martin K, Suarez H (1995) Recurrent excitation in neocortical circuits. Science 269:981–985.
Ebenholtz S (1966) Adaptation to a rotated visual field as a function of degree of optical tilt and exposure time. J Exp Psychol 72:629–634.
Gardner E, Costanzo R (1981) Properties of kinesthetic neurons in somatosensory cortex of awake monkeys. Brain Res 214:301–319.
Georgopoulos A (1995) Current issues in directional motor control. Trends Neurosci 18:506–510.
Georgopoulos A, Kalaska J, Caminiti R, Massey J (1982) On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2:1527–1537.
Georgopoulos A, Schwartz A, Kettner R (1986) Neuronal population coding of movement direction. Science 233:1416–1419.
Ghilardi M, Gordon J, Ghez C (1995) Learning a visuomotor transformation in a local area of work space produces directional biases in other areas. J Neurophysiol 73:2535–2539.
Gordon J, Ghilardi M, Ghez C (1994) Accuracy of planar reaching movements. I. Independence of direction and extent variability. Exp Brain Res 99:97–111.
Helms Tillery S, Soechting J, Ebner T (1996) Somatosensory cortical activity in relation to arm posture: nonuniform spatial tuning. J Neurophysiol 76:2423–2438.
Henriques D, Klier E, Smith M, Lowy D, Crawford J (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594.
Hertz J, Krogh A, Palmer R (1991) Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.
Hoffman D, Strick P (1995) Effects of a primary motor cortex lesion on step-tracking movements of the wrist. J Neurophysiol 73:891–895.
Inoue K, Kawashima R, Satoh K, Kinomura S, Goto R, Sugiura M, Ito M, Fukuda H (1997) Activity in the parietal area during visuomotor learning with optical rotation. NeuroReport 8:3979–3983.
Johnson P, Ferraina S, Bianchi L, Caminiti R (1996) Cortical networks for visual reaching: physiological and anatomical organization of frontal and parietal lobe arm regions. Cereb Cortex 6:102–119.
Jordan M, Rumelhart D (1992) Forward models: supervised learning with a distal teacher. Cogn Sci 16:307–354.
Kakei S, Hoffman D, Strick P (1999) Muscle and movement representations in the primary motor cortex. Science 285:2136–2139.
Kalaska J, Caminiti R, Georgopoulos A (1983) Cortical mechanisms related to the direction of two-dimensional arm movements: relations in parietal area 5 and comparison with motor cortex. Exp Brain Res 51:247–260.
Kettner R, Schwartz A, Georgopoulos A (1988) Primate motor cortex and free-arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins. J Neurosci 8:2938–2947.
Kuperstein M (1988) Neural model of adaptive hand-eye coordination for single postures. Science 239:1308–1311.
Lacquaniti F, Guigon E, Bianchi L, Ferraina S, Caminiti R (1995) Representing spatial information for limb movement: role of area 5 in the monkey. Cereb Cortex 5:391–409.
Lehky S, Sejnowski T (1988) Network model of shape-from-shading: neural function arises from both receptive and projective fields. Nature 333:452–454.
Lemon R (1988) The output map of the primate motor cortex. Trends Neurosci 11:501–506.
Mel B (1991) A connectionist model may shed light on neural mechanisms for visually guided reaching. J Cogn Neurosci 3:273–292.
Olson C, Hanson S (1990) Spatial representation of the body. In: Connectionist modeling and brain function (Hanson S, Olson C, eds), pp. 193–254. Cambridge, MA: MIT Press.
Poggio T (1990) A theory of how the brain might work. Cold Spring Harbor Symp Quant Biol 55:899–910.
Pouget A, Sejnowski T (1994) A neural model of the cortical representation of egocentric distance. Cereb Cortex 4:314–329.
Pouget A, Sejnowski T (1997) Spatial transformations in the parietal cortex using basis functions. J Cogn Neurosci 9:222–237.
Prud'homme M, Kalaska J (1994) Proprioceptive activity in primate primary somatosensory cortex during active arm reaching movements. J Neurophysiol 72:2280–2301.
Redding G (1975) Simultaneous visuo-motor adaptation to optical tilt and displacement. Percept Psychophys 17:97–100.
Redding G (1978) Additivity in adaptation to optical tilt. J Exp Psychol: Hum Percept Perform 4:178–190.
Salinas E, Abbott L (1994) Vector reconstruction from firing rates. J Comput Neurosci 1:89–107.
Salinas E, Abbott L (1995) Transfer of coded information from sensory to motor networks. J Neurosci 15:6461–6474.
Salinas E, Abbott L (1996) A model of multiplicative neural responses in parietal cortex. Proc Natl Acad Sci USA 93:11956–11961.
Sanger T (1994) Theoretical considerations for the analysis of population coding in motor cortex. Neural Comput 6:29–37.
Schwartz A, Kettner R, Georgopoulos A (1988) Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. J Neurosci 8:2913–2927.
Schwindt P, O'Brien J, Crill W (1997) Quantitative analysis of firing properties of pyramidal neurons from layer 5 of rat sensorimotor cortex. J Neurophysiol 77:2484–2498.
Scott S, Kalaska J (1995) Changes in motor cortex activity during reaching movements with similar hand paths but different arm postures. J Neurophysiol 73:2563–2567.
Scott S, Kalaska J (1997) Reaching movements with similar hand paths but different arm orientations. I. Activity of individual cells in motor cortex. J Neurophysiol 77:826–852.
Sergio L, Kalaska J (1997) Systematic changes in directional tuning of motor cortex cell activity with hand location in the workspace during generation of static isometric forces in constant spatial directions. J Neurophysiol 78:1170–1174.
Tanji J (1975) Activity of neurons in cortical area 3a during maintenance of steady postures by the monkey. Brain Res 88:549–553.
Vindras P, Viviani P (1998) Frames of reference and control parameters in visuomanual pointing. J Exp Psychol: Hum Percept Perform 24:569–591.
Welch R (1986) Adaptation of space perception. In: Handbook of perception and human performance, Vol. 1 (Boff K, Kaufman L, Thomas J, eds), ch. 24, pp. 1–45. New York: John Wiley.
Wise S, Moody S, Blomstrom K, Mitz A (1998) Changes in motor cortical activity during visuomotor adaptation. Exp Brain Res 121:285–299.