Abstract

Persistent neuronal activity is widespread in many areas of the cerebral cortex of monkeys performing cognitive tasks with a working memory component. Modeling studies have helped clarify the conditions under which persistent activity can be sustained in cortical circuits. Here, we first review several basic models of persistent activity, including bistable models with excitation only and multistable models for working memory of a discrete set of pictures or objects with structured excitation and global inhibition. In many experiments, persistent activity has been shown to be subject to changes due to associative learning. In cortical network models, Hebbian learning shapes the synaptic structure and, in turn, the properties of persistent activity when pictures are associated together in the course of a task. It is shown how the theoretical models can reproduce basic experimental findings of neurophysiological recordings from inferior temporal and perirhinal cortices obtained using the following experimental protocols: (i) the pair-associate task; (ii) the pair-associate task with color switch; and (iii) the delay match to sample task with a fixed sequence of samples.

Introduction

Electrophysiological recordings in behaving monkeys have provided a wealth of information about neuronal correlates of memory, from working memory to long-term memory (e.g. Fuster, 1995; Goldman-Rakic, 1995). One of the major findings of these experiments is the phenomenon of selective persistent activity of some recorded cells in the delay period of working memory tasks. It is observed in many areas of the temporal lobe (Fuster et al., 1982; Miyashita, 1988; Miyashita and Chang, 1988; Sakai and Miyashita, 1991; Nakamura and Kubota, 1995; Naya et al., 1996; Miller et al., 1996; Erickson and Desimone, 1999), parietal cortex (Chafee and Goldman-Rakic, 1998) and prefrontal cortex (Fuster and Alexander, 1971; Funahashi et al., 1989, 1990, 1991; Miller et al., 1996). An important difference between persistent activity in inferotemporal (IT) cortex and prefrontal cortex is that the latter is resistant to distractors, while the former is not (Miller et al., 1996). Persistent activity could represent a substrate at the cellular or network level of the ability to hold an item in ‘active’ memory for several seconds for behavioral demands, i.e. working memory.

Evidence for long-term association learning effects on persistent activity has come from several studies in the temporal lobe. Miyashita (1988) reported that when visual patterns are repeatedly presented during training in a fixed sequence, neurons tend to exhibit enhanced persistent activity to visual patterns that are neighbors in the sequence, rather than to random patterns. Sakai and Miyashita (1991), using a pair-association task, found that some neurons tended to respond visually to both pictures associated during training, while others that had a strong response to one picture during cue presentation exhibited increasing activity during the delay period of trials in which the pair associate was used as the cue. Naya et al. (1996) tested the possibility that the delay activity of IT neurons is related to a particular picture as a sought target by using a pair-association with color switch task, in which the monkey knows that the pair associate will be shown when the color of the screen changes in the middle of the delay period. They found that some neurons that were visually responsive to a picture would start to respond in the delay period just after the color switch, when that picture appeared as the pair associate of the cue. Erickson and Desimone (1999) used a task that allowed them to probe association learning in a single training day. They found in perirhinal cortex that early in training, delay period activity tends to be correlated with the visual response of the previously shown stimulus (‘retrospective’ activity), while after two training days, delay period activity would become correlated also with the visual response of the stimulus shown following the delay period (‘prospective’ activity). This study is a direct confirmation of the effect of training on neuronal representations in the temporal lobe. Effects of association learning have also been demonstrated in the primate prefrontal cortex. Asaad et al. (1998) showed that the dynamics of neuronal activity in the delay period evolves during training, with neurons activated progressively earlier in the delay period.

Modeling studies have provided firm grounds for the hypothesis that persistent activity is caused by excitatory feedback loops in networks of heavily interconnected neurons (for reviews, see Amit, 1995; Durstewitz et al., 2000b; Wang, 2001). Amit and collaborators explored the hypothesis that such powerful excitatory feedback loops are formed during learning. This strategy allows, in principle, a systematic study of the network dynamics induced by any type of protocol and, in particular, of the association learning protocols used in electrophysiological recording experiments on behaving monkeys. The first success of this approach was to show that the results of Miyashita (1988) could be reproduced using a suitable fixed synaptic structure (Griniasty et al., 1993; Amit et al., 1994). Brunel (1996) explored the learning dynamics induced in such networks by the Miyashita protocol and showed that it could indeed lead to the hypothesized synaptic structure. Other protocols, such as the pair-associate protocol, have been explored only at a preliminary level (Brunel, 1996; Mongillo et al., 2003).

The first part of this paper reviews basic results on persistent activity sustained by recurrent synaptic connectivity, using a simplified neuronal model. Then, the focus switches to how the ‘attractor landscape’ of a recurrent network is affected by learning and, consequently, how the dynamics of persistent activity changes, using as examples the protocols that have been used in monkey experiments: (i) the pair-associate task of Sakai and Miyashita (1991) and Erickson and Desimone (1999); (ii) the pair-associate task with color switch of Naya et al. (1996); and (iii) the delay match to sample task with a fixed sequence of sample stimuli of Miyashita (1988).

The Neuronal Model

Here, a ‘mean-field’ approach is used: neurons are described purely by an f–I curve, describing how the mean firing rate of the neuron depends on its average synaptic input (the ‘field’). Basic physiological and mathematical considerations (see Appendix 1) lead to the following f–I curve:

ν = ϕ(I), with
ϕ(I) = 0 for I ≤ 0,
ϕ(I) = νc (I/Ic)² for 0 < I ≤ Ic,
ϕ(I) = νc √(4I/Ic – 3) for I > Ic,    (1)

shown in Figure 1, where ν is the neuronal firing rate, ϕ defines the current–frequency relationship, I is the synaptic current (the sum of all synaptic inputs, both excitatory and inhibitory, received by the neuron), Ic defines the scale of the currents and νc defines the scale of the firing rates.

The region at low input currents with supralinear f–I relationship represents the noise-dominated region of firing in spiking neurons, where average inputs are subthreshold and firing is due to fluctuations in the inputs, resulting in Poisson-like firing. The region at higher input currents, with sublinear f–I relationship, represents the ‘drift’ dominated region in which firing is due primarily to the mean synaptic drive and is only weakly influenced by fluctuations. Thus, Ic should be taken as a current close to the current needed to reach threshold, while νc would be the firing rate of a cell with realistic noise levels and an input current close to threshold. For purely illustrative purposes, we use in all figures as plausible values for cortical pyramidal cells Ic = 0.2 nA, νc = 10 Hz (Rauch et al., 2003).
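
As a concrete reference for the figures that follow, here is a minimal Python sketch of this transfer function, assuming the piecewise quadratic/square-root form written in equation (1) and the illustrative values Ic = 0.2 nA, νc = 10 Hz; the function name and the sample points are arbitrary.

```python
def phi(I, Ic=0.2, nu_c=10.0):
    """f-I curve of equation (1): I in nA, rate in Hz."""
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2          # noise-dominated, supralinear branch
    return nu_c * (4 * I / Ic - 3) ** 0.5    # drift-dominated, sublinear branch

# threshold current gives nu_c; below it the curve is supralinear, above it sublinear
for I in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(f"I = {I:.2f} nA  ->  {phi(I):5.1f} Hz")
```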

Persistent Activity in Cortical Network Models

Non-selective Persistent Activity in Excitatory Networks with Uniform Connectivity

The simplest architecture for working memory is a single population, with excitatory recurrent coupling J, shown in Figure 2A. J represents the average strength of the total recurrent input coming to a neuron in the network. The scale of the strength of recurrent inputs is Jc = Ic/νc. It is expressed in pA.s, as the time integral of a post-synaptic current. For a cortical pyramidal cell, Jc = 20 pA.s. A single synapse with peak amplitude of 50 pA and 2 ms decay time would contribute ∼0.1 pA.s. Thus, ∼200 such synapses are needed to reach a recurrent strength of Jc. When the firing rate of the population is ν, the average synaptic current is Jν and the firing rate (in absence of external inputs) is the solution of the equation ν = ϕ(Jν). The solutions to this equation are shown graphically in Figure 2B. One solution to this equation is ν = 0 (silent network). It is the only solution for J < √3Jc/2. At J = √3Jc/2, two additional solutions appear, in which I > Ic. This qualitative change in network behavior represents a ‘bifurcation’. The solution with the highest rate corresponds to ‘persistent activity’, while the solution with intermediate rate corresponds to the frontier of the basins of attraction of silent and persistent states. The meaning of this intermediate firing rate solution is that if the firing rate of the network is initially below it, the network will eventually be attracted towards the silent state. If it is initially above it, it will be attracted towards the persistent state. Detailed expressions for these solutions can be found in Appendix 2.
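
This fixed-point structure is easy to check numerically. The sketch below evaluates the closed-form branches given in Appendix 2 (persistent state and basin boundary) for a few values of J and verifies them against the self-consistency condition ν = ϕ(Jν), with Jc = 20 pA.s and νc = 10 Hz as above; the specific J values are arbitrary.

```python
import math

def phi(I, Ic=0.2, nu_c=10.0):                    # f-I curve of eq. (1); I in nA, rate in Hz
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2
    return nu_c * math.sqrt(4 * I / Ic - 3)

Jc, nu_c = 20.0, 10.0                             # pA.s and Hz
for J in (15.0, 18.0, 25.0):                      # recurrent coupling, pA.s
    j = J / Jc                                    # dimensionless coupling J/Jc
    if j < math.sqrt(3) / 2:                      # below the saddle-node at ~17.3 pA.s
        print(f"J = {J} pA.s: only the silent state nu = 0")
        continue
    persistent = nu_c * (2 * j + math.sqrt(4 * j * j - 3))
    # basin boundary: suprathreshold branch for J < Jc, subthreshold branch for J >= Jc (Appendix 2)
    boundary = nu_c / j ** 2 if j >= 1 else nu_c * (2 * j - math.sqrt(4 * j * j - 3))
    check = phi(J * persistent * 1e-3)            # J*nu is in pA, converted to nA
    print(f"J = {J} pA.s: persistent {persistent:.1f} Hz "
          f"(phi(J*nu) = {check:.1f} Hz), basin boundary {boundary:.1f} Hz")
```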

The firing rates of these three solutions (silent, persistent and boundary of basin of attraction) are drawn as the recurrent coupling strength J is varied in Figure 2C. At J ≈ 17 pA.s, the network becomes bistable. The firing rate of the persistent state at the bifurcation is ν ≈ 17 Hz. When J increases: (i) the persistent rate increases very steeply, approaching 4νcJ/Jc when J becomes large (so that each additional Jc = 20 pA.s of coupling adds roughly 40 Hz to the persistent rate); and (ii) the basin of attraction of the persistent state becomes very large, since the intermediate solution tends to zero as J becomes large. In this simple model, the quiescent state is always stable.

When an external stimulation Iext is applied to the network, the synaptic input becomes I = Jν + Iext. Firing rates are affected by the external stimulation, as shown in Figure 2D for several values of J. Detailed expressions can be found in Appendix 2. A bistable region appears when the recurrent coupling is J = 0.5Jc, at an external stimulation level Iext = 0.5Ic. It grows and crosses the Iext = 0 axis when J = √3Jc/2. For J = Jc, bistability is present for –0.25Ic < Iext < 0.25Ic. Note that at that value of J, the persistent activity state can be reached from the silent state with a transient stimulation of Iext > 0.25Ic, while the persistent state can be switched off using a hyperpolarization Iext < –0.25Ic. The low activity part of the green curve for positive Iext can be considered as ‘spontaneous activity’.
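
The boundaries of this bistable region follow directly from the two existence conditions given in Appendix 2: the persistent branch exists for Iext > (3/4 – (J/Jc)²)Ic and the low-activity branch for Iext < Ic/(4J/Jc). A short sketch, in the same illustrative units as above:

```python
Ic, Jc = 0.2, 20.0                                 # nA, pA.s
for J in (10.0, 17.3, 20.0):                       # recurrent coupling, pA.s
    j = J / Jc
    lo = (0.75 - j * j) * Ic                       # external input at which the persistent branch appears
    hi = Ic / (4 * j)                              # external input at which the low-activity branch disappears
    if lo < hi:
        print(f"J = {J:4.1f} pA.s: bistable for {lo * 1e3:6.1f} pA < Iext < {hi * 1e3:5.1f} pA")
    else:
        print(f"J = {J:4.1f} pA.s: no bistable range (the two bounds meet at {lo * 1e3:.0f} pA)")
```

For J = 0.5Jc the two bounds coincide at 0.1 nA (the bistable region is just about to appear), and for J = Jc the range is –50 pA < Iext < 50 pA, as stated above.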

Inhibitory neurons with uniform couplings to and from excitatory cells and between themselves can be added to such a network without changing this picture qualitatively, provided excitation sufficiently dominates the feedback. Recent studies have considered the stability of the persistent state with respect to synchronized oscillations in such bistable networks composed of spiking neurons. Wang (1999) emphasized the role of recurrent excitation mediated by NMDA channels in stabilizing persistent activity in the face of synchronized oscillations mediated by the interplay between fast AMPA recurrence and slower GABA recurrent feedback. Hansel and Mato (2001) showed that interneuron-to-interneuron couplings also help in some conditions to stabilize persistent activity.

More Realistic Networks: Strong Coupling, Sparse Coding

The previous model is useful because it represents the simplest model of bistability due to network interactions. However, it lacks several important features of cortical networks in which persistent activity is observed, as follows. (i) In the persistent state, all neurons of the network are active. In cortex, only a small minority of neurons in any recorded area exhibit persistent activity. (ii) The network is weakly coupled. Jc represents the strength of several hundred typical cortical excitatory synapses. Local modules in cortex seem much more strongly coupled, since several thousand such recurrent synapses typically exist. A network for object working memory with excitatory and inhibitory neurons and sparse coding (small fraction of neurons in any persistent activity state) was introduced by Amit and Brunel (1997) as a biophysical implementation of earlier abstract models of associative memory (Hopfield, 1982; Amit, 1989) with integrate-and-fire neurons. Thus, the single neuron transfer function used was the noisy f–I curve of integrate-and-fire neurons (Ricciardi, 1977; Amit and Tsodyks, 1991). Numerical resolution of coupled mean-field equations was performed to find the solutions of stationary network activity. A simplified analysis, using the sparse coding limit, was performed by Brunel (2000). Here, we present the scenario in an even simpler setting, using the f–I curve of equation (1). Qualitatively, the picture is similar in all cases. We consider two populations, excitatory and inhibitory. The excitatory couplings are proportional to JE. The inhibitory currents on excitatory neurons are taken to be proportional to the average activity of the excitatory population, with coefficient JI.

Spontaneous Activity in Inhibition-dominated Networks

Theoretical studies have shown that in the presence of realistic numbers of recurrent excitatory synapses of physiological efficacy, cortical networks must be dominated by inhibition to stabilize realistic levels of spontaneous activity (van Vreeswijk and Sompolinsky, 1996, 1998; Amit and Brunel, 1997). Such networks receive strong background external inputs, Iext, due to spontaneous activity in the rest of the brain that tend to drive them above firing threshold, but the strong inhibitory feedback maintains the average synaptic currents below threshold. Firing at spontaneous levels is then due to fluctuations due to both the noise in external inputs and an effective noise due to the randomness of network connectivity.

The magnitude of spontaneous activity can be calculated in our simplified setting (see Appendix 3). It is shown as a function of the external inputs Iext in Figure 3, when the inhibitory feedback strength JI exceeds the excitatory feedback strength by Jc = 20 pA.s (obtained with, for example, 4000 excitatory synapses with a peak amplitude of 50 pA and decay time constant of 2 ms and 1000 inhibitory synapses with a peak amplitude of 84 pA and decay time constant of 5 ms, assuming interneurons fire at the same rate as pyramidal cells).
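
A minimal numerical illustration of this self-consistent spontaneous state is sketched below, assuming the f–I curve of equation (1) and a net inhibitory feedback JI – JE = 20 pA.s as in Figure 3; the damped fixed-point iteration is just an implementation convenience, not part of the original analysis.

```python
def phi(I, Ic=0.2, nu_c=10.0):                     # f-I curve of eq. (1); I in nA, rate in Hz
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2
    return nu_c * (4 * I / Ic - 3) ** 0.5

J_net = 20.0                                       # net inhibitory feedback JI - JE, pA.s
for I_ext in (0.1, 0.2, 0.3, 0.4):                 # background external input, nA
    nu = 0.0
    for _ in range(2000):                          # damped iteration of nu = phi(I_ext - J_net * nu)
        nu = 0.9 * nu + 0.1 * phi(I_ext - J_net * nu * 1e-3)
    print(f"Iext = {I_ext:.1f} nA -> network rate {nu:4.1f} Hz   (isolated cell: {phi(I_ext):4.1f} Hz)")
```

As in Figure 3B, the recurrent inhibition pulls the rates well below the values of an isolated cell receiving the same external input.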

Here, in contrast to the previous situation where recurrence was only excitatory, recurrence acts to decrease firing rates of the network, since inhibition is stronger than excitation. Thus, the bistable behavior shown in the purely excitatory network cannot be present in such a network.

Structure in Excitatory Recurrence Induced by Learning and Sparse Persistent Activity

A simple way to obtain persistent activity in an excitatory–inhibitory network in which feedback is dominated by inhibition is to introduce structure (selectivity) in the excitatory recurrence. This can be done in various ways. One is Hebbian learning in excitatory subpopulations (Hebb, 1949; Amit and Fusi, 1994; Amit and Brunel, 1995, 1997; Brunel et al., 1998). Another way is to assume microcolumnar structure (Goldman-Rakic, 1995). In the first scenario, subpopulations are identified only functionally: they group neurons which are visually responsive to the same object. There is no spatial segregation of neurons having the same selectivity properties. In the second, subpopulations have a spatial correlate, the microcolumn. Intermediate scenarios are of course possible. Several recent reviews have addressed the issue of persistent activity in the ‘microcolumnar’ scenario (Durstewitz et al., 2000b; Wang, 2001), focusing on spatial working memory. Here, we concentrate on the learning scenario, focusing on object working memory.

Theoretical studies have shown that a simple Hebbian learning process allows for the generation of a synaptic structure that sustains selective persistent activity. The simplest scenario is the following:

  • Excitatory synapses onto pyramidal cells have at least two stable states, a high conductance and a low conductance state, that could correspond to high and low numbers of AMPA receptors on the postsynaptic site (Malinow et al., 2000).

  • Transitions between the two states can be induced in a stochastic way during visual presentations, in a Hebbian way, as follows:

    If two neurons have a strong visual response to the same stimulus and an existing synapse between the two is in its low conductance state, then there is a probability that it will make a transition up to the high state. Long-term potentiation (LTP) of the synapse has occurred.

    If one of the neurons has a strong visual response to the stimulus, while the other has a weak response (close to spontaneous activity or below), and an existing synapse between the two is in its high conductance state, then there is a probability that it will make a transition down to the low state. Long-term depression (LTD) of the synapse has occurred.

    In all other cases, no transitions occur.

The resulting learning process has been extensively studied (Amit and Fusi, 1994; Amit and Brunel, 1995; Brunel, 1996; Brunel et al., 1998). In a network in which p distinct subpopulations of excitatory cells are selective for p objects, the synaptic strength from population i to population j becomes gradually fJE + JS(δij – f)/(1 – f), where δij is equal to 1 when i = j and 0 otherwise, if the objects are shown randomly and repeatedly to the network. The synaptic matrix is therefore defined by two parameters: the ‘coding level’ f, representing the fraction of cells activated by a single stimulus; and the increase in synaptic feedback JS due to LTP in synapses connecting neurons which are selective for the same stimulus. The negative term proportional to –fJS/(1 – f), due to LTD in other synapses, ensures that spontaneous activity is unaffected by the structuring.

Taking again our example of 4000 AMPA synapses with peak amplitude of 50 pA and 2 ms decay time, the physiological interpretation of the structure is the following. Suppose that the coding level f is equal to 0.05 (each stimulus activates 5% of the excitatory neurons of the network). Synapses have two possible states, a high state of 100 pA and a low state of 48 pA. The balance between these two states at the network level is such that the average synaptic strength stays at 50 pA. Such a balance might be implemented in the long term by homeostatic mechanisms (Turrigiano and Nelson, 2000). All synapses which connect two neurons which are selective to the same stimulus become potentiated to the high state, following repeated presentation of that stimulus. Thus, the strength of the feedback from that population to itself is composed of 200 (4000 synapses × 0.05) synapses in the high state, i.e. a total strength of 200 × 100 pA (peak amplitude) × 0.002 s (decay time constant) = 40 pA.s, which is JS = 20 pA.s more than before learning. Synapses connecting two neurons selective for two different stimuli, on the other hand, become depressed to the low state. The strength of the feedback from one population to another becomes 19.5 pA.s, slightly below the pre-learning value of 20 pA.s. The synaptic structure is illustrated in Figure 4A.
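
The corresponding synaptic matrix is easy to write down explicitly. The sketch below builds it for the illustrative values used above, assuming for simplicity that the p selective populations tile the whole excitatory network (pf = 1), which is one way to make the LTD balance exact; the resulting per-population strengths come out close to the rounded values quoted in the text.

```python
import numpy as np

f, JE, JS = 0.05, 400.0, 20.0                # coding level, total excitatory strength (pA.s), LTP increment
p = int(round(1 / f))                        # number of selective populations, assuming p * f = 1
delta = np.eye(p)
W = f * JE + JS * (delta - f) / (1 - f)      # strength from population j onto a neuron of population i, pA.s

print("within-population feedback :", W[0, 0], "pA.s")                 # f*JE + JS = 40
print("across-population feedback :", round(W[0, 1], 2), "pA.s")       # slightly below f*JE = 20
print("total input per neuron     :", round(W[0].sum(), 2), "pA.s (= JE, unchanged by learning)")
```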

We consider now a network state in which one population has activity ν+, while all others have activity ν−. When the coding level f is small, solutions for network activity can again be computed (see Appendix 4). These solutions are shown in Figure 4B,C, as the selective feedback strength is varied. The first solution is ν+ = ν− = νsp: all populations are at spontaneous activity levels. A new solution corresponding to persistent activity appears when the selective feedback strength is higher than some critical value (between 10 and 20 pA.s). This network is multistable: many different persistent activity states exist, corresponding to different subpopulations at elevated rates. In these persistent activity states, only 5% of the neurons emit at elevated rates (from 10 to 35 Hz for a selective feedback strength between 10 and 20 pA.s), while the remaining 95% fire at spontaneous activity levels (between 0 and 10 Hz, depending on external input). Note that the magnitude of persistent activity depends only on the synaptic feedback strength JS and the magnitude of spontaneous activity νsp (controlled in turn by the interplay between external inputs and the strength of recurrent inhibition).

In the excitatory–inhibitory network, the stabilization of spontaneous and persistent activity is achieved by two independent mechanisms. Stability of spontaneous activity is achieved by a strong global inhibition, while stability of persistent activity is achieved by strong selective excitation inside sub-populations of functionally similar excitatory cells.

Note that the range of multistability depends strongly on the level of spontaneous activity νsp. For example, the multistable range is (in pA.s): 13.3 < JS < 20 for νsp = 2.5 Hz; 11.4 < JS < 14.2 for νsp = 5 Hz; and 10.6 < JS < 11.6 for νsp = 7.5 Hz. Finally, the multistable range vanishes for νsp = 10 Hz (the inflexion point of the single neuron f–I curve of Fig. 1), where the persistent activity state appears through a continuous transition. The width of the multistable range as a function of spontaneous activity is shown in Figure 4D.
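
These numbers can be reproduced with a few lines of code. The sketch below uses the reduction described for Figure 4B (in the f → 0 limit, the elevated population sees the spontaneous operating point plus JS times its own rate deviation); the helper persistent_rate and the damped iteration are implementation details, and the resulting rates are close to, though not identical to, the values quoted above.

```python
def phi(I, Ic=0.2, nu_c=10.0):                     # f-I curve of eq. (1); I in nA, rate in Hz
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2
    return nu_c * (4 * I / Ic - 3) ** 0.5

def persistent_rate(JS, nu_sp, Ic=0.2, nu_c=10.0):
    """Rate of the elevated population from nu+ = phi(Isp + JS*(nu+ - nu_sp)), cf. Figure 4B."""
    I_sp = Ic * (nu_sp / nu_c) ** 0.5              # subthreshold current producing the spontaneous rate
    nu = 50.0                                      # start high so the iteration lands on the upper branch
    for _ in range(5000):                          # damped fixed-point iteration
        nu = 0.95 * nu + 0.05 * phi(I_sp + JS * (nu - nu_sp) * 1e-3)
    return nu

for JS in (12.0, 14.0, 16.0, 18.0):                # selective feedback, pA.s
    nu = persistent_rate(JS, nu_sp=2.5)
    state = "persistent state" if nu > 10.0 else "falls back to spontaneous activity"
    print(f"JS = {JS} pA.s -> {nu:4.1f} Hz ({state})")
```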

Effects of External Inputs

We now turn to the effect of changes in the external inputs on spontaneous and persistent states. External inputs can be either non-selective (all neurons receive the same current) or selective (mimicking a visual presentation of the stimulus for which a particular population of neurons is selective). The effect of both types of inputs is shown in Figure 5.

Changes in Non-selective Inputs or Single Cell Excitability

These are shown in Figure 5, where the green circles show the spontaneous activity (2.5 Hz) and persistent activity (20 Hz) in the ‘control condition’ for JS = 15 pA.s. If the network is in the spontaneous activity state, increasing the external inputs from ‘control’ first increases spontaneous activity. After a critical increase (here 0.08 nA), the spontaneous state becomes unstable (after the black line crosses the dotted green line). The network jumps to one of the persistent states (vertical arrow at Iext = 0.23 nA). This corresponds to a spontaneous activation of one of the memories. The effects of non-selective inputs on the persistent activity (green curve) are paradoxical. Persistent activity shows an inverted-U relationship with non-selective input. Thus, either a decrease or an increase of external inputs (or single cell excitability) can provoke a decrease of persistent activity (Brunel and Wang, 2001).

The reason for this paradoxical behavior stems from the fact that persistent activity, at fixed JS, depends only on the background rate νsp. Thus, the effect of external inputs on network states can be seen as moving the lower intersection point in Figure 4B along the f–I curve, leaving the slope of the straight line fixed. Initially, as spontaneous activity increases, the highest intersection (representing persistent activity) also increases. However, after some point on the f–I curve, the trend reverses and persistent activity decreases as spontaneous activity increases.
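
The inverted-U can be made explicit by sweeping the background rate in the same f → 0 reduction (a sketch; the non-selective input is assumed to enter only through its effect on νsp, as argued above, and the helper functions repeat those of the previous sketch):

```python
def phi(I, Ic=0.2, nu_c=10.0):                     # f-I curve of eq. (1); I in nA, rate in Hz
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2
    return nu_c * (4 * I / Ic - 3) ** 0.5

def persistent_rate(JS, nu_sp, Ic=0.2, nu_c=10.0):
    I_sp = Ic * (nu_sp / nu_c) ** 0.5              # operating point set by the non-selective input
    nu = 50.0
    for _ in range(5000):
        nu = 0.95 * nu + 0.05 * phi(I_sp + JS * (nu - nu_sp) * 1e-3)
    return nu

# the persistent rate first rises, then falls, as the background rate is pushed up (JS = 15 pA.s)
for nu_sp in (1.0, 2.5, 5.0, 7.5, 9.0):
    print(f"nu_sp = {nu_sp:3.1f} Hz -> persistent rate {persistent_rate(15.0, nu_sp):4.1f} Hz")
```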

This is a major difference from the simplest bistable network, where the relationship between persistent activity and external stimulation is monotonic. This also represents an interesting prediction as to the behavior of a working memory network in response to non-selective changes of all cells in the network. One example of a possible non-selective modulation in prefrontal cortex is through neuromodulators. Prefrontal cortex receives dopaminergic projections from the ventral tegmental area (Sesack et al., 1995; Krimer et al., 1997) and dopamine is known to affect cellular excitability through its effects on intrinsic currents (e.g. Yang et al., 1996; Maurice et al., 2001). Interestingly, dopamine D1 activation during working memory tasks has been shown to give rise to an inverted-U shape response, both at the behavioral (Zahrt et al., 1997; Arnsten, 1998) and neurophysiological levels (Williams and Goldman-Rakic, 1995). Apart from neuromodulators, non-selective changes in all cells could also arise from changes in spontaneous activity of areas providing inputs to prefrontal cortex.

Changes in Selective Inputs

These are shown in Figure 5B. The picture is now similar to the simplest bistable network, with a bistable range and a monotonic increase of both persistent and spontaneous activity. Note that the presence of spontaneous activity allows the network to jump to the persistent state with much smaller selective inputs: for the parameters of Figure 5B, the network will go to the persistent state for an input as small as 6.4 pA, a 10-fold reduction from the bistable network with no spontaneous activity. Thus, very low contrast stimuli will induce a persistent response, while much higher contrasts are needed in absence of spontaneous activity.
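
The same reduction gives a rough estimate of that threshold. The sketch below applies a steady selective input to a population initially at spontaneous activity (JS = 0.7Jc and νsp = 2.5 Hz, as in Figure 5B) and checks whether it ends up in the persistent state; the helper settles_high and the damping are implementation choices. The transition occurs between 6 and 7 pA, consistent with the 6.4 pA quoted above.

```python
def phi(I, Ic=0.2, nu_c=10.0):                      # f-I curve of eq. (1); I in nA, rate in Hz
    if I <= 0:
        return 0.0
    if I <= Ic:
        return nu_c * (I / Ic) ** 2
    return nu_c * (4 * I / Ic - 3) ** 0.5

JS, nu_sp, Ic, nu_c = 14.0, 2.5, 0.2, 10.0          # JS = 0.7 * Jc, spontaneous rate as in Fig. 5B
I_sp = Ic * (nu_sp / nu_c) ** 0.5                   # background operating point

def settles_high(dI, steps=20000):
    """Follow the stimulated population from rest while a steady extra input dI (in nA) is applied."""
    nu = nu_sp
    for _ in range(steps):
        nu = 0.95 * nu + 0.05 * phi(I_sp + JS * (nu - nu_sp) * 1e-3 + dI)
    return nu > 15.0                                # did it reach the persistent branch?

for dI_pA in (4, 6, 7, 10):
    print(f"selective input {dI_pA:3d} pA -> persistent state reached: {settles_high(dI_pA * 1e-3)}")
```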

Switching the Network Off

The above results suggest several ways in which a network might switch persistent activity off. A non-selective decrease as well as increase in external inputs will destabilize persistent activity, but these changes must be relatively large (of the order of 0.1 nA or more). An obvious, though unrealistic, alternative is to hyperpolarize selectively (by currents of the order of 0.01 nA) the neurons that have persistent activity. Presentation of another stimulus in some conditions switches off the neurons that were previously active (Brunel and Wang, 2001). Finally, persistent activity in networks of spiking neurons can be switched off by synchronizing their firing (Gutkin et al., 2001).

Dynamics of Persistent Activity Induced by Association Learning

The Pair-association Learning Protocol

Next, we consider the situation in which objects are grouped in pairs by the learning protocol (Sakai and Miyashita, 1991; Erickson and Desimone, 1999). The protocol in the pair-associate task is: first, one picture is shown as a cue; this is followed by a delay period; lastly, the pair associate of the cue is shown together with a different picture and the monkey has to touch the pair associate in order to obtain a reward. Learning in such a situation leads first to the synaptic matrix of Figure 4A. Then, after persistent activity for the individual objects becomes stable, LTP becomes possible between synapses that connect a neuron responsive to a picture and a neuron responsive to the pair associate, because of the temporal overlap of high firing rates in these two neurons at the beginning of the second presentation of the trial. As a result, a fraction a of the synapses connecting pair-associate neurons become potentiated. Note that only a small fraction of these synapses are potentiated because LTD potentially also occurs during the first presentation of the trial and during the end of the second presentation. Thus, a represents a balance between LTP and LTD in those synapses. Eventually, the synaptic structure is as shown in Figure 6A (Brunel, 1996; Mongillo et al., 2003).
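
In matrix form, pair learning simply adds a weak coupling between the two populations of each pair. The sketch below is purely illustrative: it assumes that potentiating a fraction a of the synapses linking pair associates adds a·JS to the corresponding entries of the matrix of Figure 4A, which is one simple way to realize the structure of Figure 6A.

```python
import numpy as np

f, JE, JS, a = 0.05, 400.0, 20.0, 0.05         # coding level, baseline, LTP increment, pair-learning index
p = int(round(1 / f))                          # populations, grouped into pairs (0,1), (2,3), ...
delta = np.eye(p)
pair = np.zeros((p, p))
for k in range(0, p, 2):                       # mark pair associates
    pair[k, k + 1] = pair[k + 1, k] = 1.0

# learned structure of Fig. 4A plus an extra a*JS on synapses linking pair associates
W = f * JE + JS * (delta - f) / (1 - f) + a * JS * pair

print("self-coupling            :", W[0, 0], "pA.s")
print("to the pair associate    :", round(W[0, 1], 2), "pA.s")
print("to an unrelated object   :", round(W[0, 2], 2), "pA.s")
```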

How are persistent activity patterns modified by pair learning? Solutions of the network activity show that two kinds of states are present (shown in Fig. 6). The first type of persistent activity is an ‘individual’ attractor state: it evolves continuously from the a = 0 memory state. One of the objects of the pair has strong activity (>10 Hz), while the other has intermediate activity (growing from 2 Hz for a = 0 to ∼5 Hz for a ≈ 0.06, when it disappears). The second type of persistent activity pattern is a network state in which both pair associates have equal elevated persistent activity. Typically, in a paired-associate task, the following course of events will occur.

  • For a < 0.06: after presentation of the cue, the network goes to the memory state corresponding to the cue. The neurons corresponding to the pair associate are active at levels slightly above spontaneous activity. This slight rise in rates corresponds to ‘prospective’ activity, since it reflects a stimulus that will be presented after the delay, while ‘retrospective’ activity reflects a stimulus that was presented before. Firing rates in ‘prospective’ activity are much lower than in ‘retrospective’ activity and would be constant during the delay, since the attractor is reached very rapidly after the end of the visual presentation. Upon presentation of the pair associate, the network can either switch to the memory state corresponding to the pair associate, or to the pair of stimuli, since both types of states exist and are stable for this level of pair learning.

  • For a > 0.06: after presentation of the cue, the network goes to the pair memory state and stays there after presentation of the pair associate. Individual attractors (memory states) disappear.

Note that a very small number of potentiated synapses actually suffices to induce these effects: if a neuron receives on average 200 synapses from its pair-associate population, a = 0.05 corresponds to only 10 potentiated synapses out of the 200.

Transitions between Pair-associate States

Naya et al. (1996) found that transitions between pair-associate persistent states can be provoked by biased inputs. The experiment of Naya et al. (1996) is a pair associate with color switch (PACS) task. In the experiment, 24 pictures are grouped in 12 pairs. Each pair is composed of a green picture and a cyan picture. During a trial of the task, a picture is shown as a cue. In the initial phase of the delay, the picture has disappeared but the screen retains the color of the cue picture. In 50% of the trials, nothing occurs in the 6 s delay period until presentation of two images, signaling to the monkey that he has to perform a delay-match-to-sample task. In the remaining 50% of the trials, the color of the screen changes abruptly in the middle of the delay period to the color of the pair associate of the cue. This signals to the monkey that he has to perform the PA task and that the pair associate of the cue will be shown together with another picture. The main result of the experiment is that neurons selective for the pair associate of the cue picture become activated in the delay period immediately after the color switch, but not when there is no color switch.

Effectively, the color switch represents an input which is biased to neurons selective for a picture with that color, i.e. half of the pictures. Thus, one can model the color input by a weak input current to all neurons selective for the corresponding pictures. The effect of the color switch can then be investigated as a function of the pair learning index a and of the magnitude of the biased input current.

Figure 7 summarizes the possible behaviors. For weak pair learning and very weak biased input (<12 pA), the memory state corresponding to the cue picture (A) remains stable. For higher values of the biased input, the network can either go back to the spontaneous state (if pair learning is too weak) or switch to the pair-associate state (A → A′) if the pair learning index a is strong enough. Finally, if the pair learning index is too large, the network goes to the pair state in which both populations are active at high rates. In the experiment of Naya et al. (1996), neurons selective for the cue picture (A) return to baseline right after the color switch, while neurons selective for the pair associate (A′) increase their rates at that time (see Fig. 2 of Naya et al., 1996). Thus, the phenomenology of the experiment of Naya et al. (1996) corresponds to the region in which the transition from A to A′ occurs.

Fixed Temporal Sequence of Cues

Next, we consider the situation in which cue pictures in a DMS protocol are shown in a fixed order (Miyashita, 1988). Learning with such a protocol leads to reinforced synapses between neurons which are selective for nearest neighbors in the sequence of cue pictures (Griniasty et al., 1993; Amit et al., 1994; Brunel, 1996; Yakovlev et al., 1998). Potentiation for synapses connecting two populations selective for nearest neighbors occurs because: (i) neurons selective for a cue stimulus maintain their activity throughout the delay period; (ii) in 50% of the trials, the match is identical to the sample and, hence, the same neurons keep on firing during match presentation; (iii) the same neurons keep sustained activity after the behavioral response and until the next trial begins, as shown by Yakovlev et al. (1998); (iv) hence, at the beginning of the next trial, these neurons are still in a persistent state, leading to possible reinforcement in connections between these neurons and the neurons selective for the next stimulus. The fraction of such synapses which are potentiated, a, represents the strength of sequence learning. The structure of the network after learning is shown in Figure 8A.
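
The corresponding synaptic matrix differs from the pair-associate one only in which off-diagonal entries are strengthened: here it is the populations coding for neighbors in the training sequence. As before, the sketch assumes (purely for illustration) that the extra potentiation contributes a·JS to those entries.

```python
import numpy as np

f, JE, JS, a = 0.05, 400.0, 20.0, 0.05         # coding level, baseline, LTP increment, sequence-learning index
p = int(round(1 / f))                          # populations ordered as in the fixed training sequence
delta = np.eye(p)
neighbor = np.zeros((p, p))
for k in range(p - 1):                         # stimuli k and k+1 are neighbors in the sequence
    neighbor[k, k + 1] = neighbor[k + 1, k] = 1.0

# learned structure of Fig. 4A plus an extra a*JS on synapses linking neighboring populations
W = f * JE + JS * (delta - f) / (1 - f) + a * JS * neighbor
print(np.round(W[:4, :4], 1))                  # self-coupling 40, sequence neighbors ~20, others ~19 pA.s
```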

This structure gives rise to a very rich repertoire of persistent activity states, as shown in Figure 8B. As with pair learning, different types of persistent activity states exist, in which different numbers of neighboring populations (in the sense of selectivity to stimuli which are neighbors in the sequence) are active at elevated rates (shown schematically in the panels marked 1, 2, 3, 4). Different types of states coexist for any given value of the parameter a, meaning that different attractor states can be reached depending on the initial network state; for a < 0.03, states with one or two active populations coexist; for 0.03 < a < 0.06, states with one, two or three active populations coexist; etc.

The insets in the figure show a population histogram of rates in the delay period, or alternatively the histogram of the activity of a single neuron in the delay period as a function of the position of the cue that elicited that activity in the sequence (Amit et al., 1994). Note the ‘towers’ of activity which become broader as the sequence learning index becomes larger. Note also the similarity with the patterns reported by Miyashita (1988).

Networks of Spiking Neurons versus Mean-field Analysis: Transitions between States

Simulations of networks of spiking neurons have been shown to agree qualitatively with the mean-field analysis shown in the present paper. The agreement becomes quantitative if the mean-field analysis includes the f–I curve of the simulated integrate-and-fire model neuron, where noise is determined in a self-consistent way (Amit and Brunel, 1997; Brunel, 2000).

A major difference between a system of a finite number of spiking neurons and mean-field analysis, however, is the presence of ‘random’ fluctuations of the global activity of each population due to the finite size of the system. Hence, each ‘stable’ state of the system is stable only in the limit of an infinite system. What can occur in practice in a finite system is transitions between states, which can be of several types, as follows:

  • The network can jump from the background ‘spontaneous’ state to a memory state, especially if the basin of attraction of the spontaneous state is small. This is equivalent to a ‘spontaneous activation’ of a memory in the absence of external cue.

  • A memory state can decay back to the background state, indicating a loss of short-term memory (Koulakov, 2001).

  • Lastly, transitions can occur between selective memory states. In particular, the probability of such transitions can be greatly enhanced by learning of associations between stimuli, that leads to strengthened synapses between the corresponding selective populations. For example, in the pair-associate task, transitions are likely to occur between a state corresponding to an individual memory and a state corresponding to the pair to which that individual memory belongs. In practice, neurons selective for the pair associate will see their activity increase during the delay period, giving rise to pronounced ‘prospective’ activity (Mongillo et al., 2003), as observed in neurophysiological recordings (Sakai and Miyashita, 1991; Asaad et al., 1998; Erickson and Desimone, 1999; Rainer et al., 1999).

Discussion

In the present paper, some of the results obtained in recent years on models of object working memory have been illustrated using a simplified mean-field model. The model neuron is extremely simplified, but still captures some of the essential features of cortical neurons in the presence of noise. Even at this very simplified level, many features observed in neurophysiological recordings on awake monkeys can be reproduced and understood in simple terms.

Neurons in the network show spontaneous and persistent firing rates at physiological levels. An important issue is that of robustness of the coexistence interval between the spontaneous activity state and ‘memory states’. Most of the data seem to indicate a ratio of persistent/spontaneous activity of about three — see, for example, Nakamura and Kubota (1995) for data on several areas of the temporal lobe. Figure 4C shows that for spontaneous rates of 5 Hz, multistability exists when 11.4 < JS < 14.2, i.e. a robustness of ∼30% with respect to parameter variation. In this whole range, persistent activity varies between 12 and 20 Hz, which are rather realistic values, both in terms of absolute values and of ratios of persistent to spontaneous activity. In a particular spiking neuron model, the integrate-and-fire neuron, the intervals are usually smaller. However, this is a quantitative issue that is likely to be affected by many details of the single neuron and synaptic models, such as short-term depression, adaptation mechanisms, etc. Thus, there is, in principle, no problem in obtaining realistic firing rates.

In a network of spiking neurons connected by synapses with realistic kinetics, persistent activity states must also be shown to be stable with respect to synchronized oscillations that can sometimes destabilize such states. For example, a combination of fast excitatory and slower inhibitory transmission promotes the generation of oscillations that tend to disrupt persistent activity, though in some cases oscillatory persistent activity can be stable (Wang, 1999; Compte et al., 2000). Slow excitatory synaptic transmission, such as that mediated by NMDA receptors, helps to stabilize asynchronous patterns of persistent activity (Wang, 1999, 2001).

What about other statistical properties such as the coefficient of variation (CV)? Visual inspection of rasters of cells during persistent activity indicates that their CV is large, though a systematic analysis is currently lacking. A large CV in persistent activity would be hard to account for by the class of models described here, because the persistent activity occurs only when the input currents are suprathreshold, where firing is typically rather regular.

The effect of different learning protocols on patterns of persistent activity reproduces many features seen in experiments, in particular recordings in the temporal lobe. The pair-associate protocol leads to ‘prospective’ activity during the delay period preceding the presentation of the pair associate of the cue picture (Sakai and Miyashita, 1991; Erickson and Desimone, 1999); weak biased inputs can provoke transitions between memory states associated during learning (Naya et al., 1996); and presentations of cue pictures in a fixed sequence correlate the patterns of persistent activity elicited by cues which are neighbors in the sequence (Miyashita, 1988).

Non-selective changes in the single cell excitability lead to an inverted-U shaped curve of persistent activity. Such a curve has been reported in an experiment where iontophoresis of a dopamine D1 antagonist was performed (Williams and Goldman-Rakic, 1995). Dopamine could act to reduce excitability by a reduction in high voltage activated calcium currents (Yang et al., 1996), or a reduction of sodium currents (Maurice et al., 2001). A major prediction from network studies is that such an effect should occur when any agent that sufficiently changes single cell excitability is injected into the system. For the particular case of modulation by dopamine, Durstewitz et al. (1999, 2000a) have studied in detail various scenarios based on combinations of known cellular and synaptic effects of dopamine in networks of various complexity levels. A detailed discussion of these results is beyond the scope of this paper (for a review, see Durstewitz et al., 2000b).

In the models considered here, persistent activity represents the correlate of working memory of a learned object. However, non-familiar items can also be held in working memory. This would imply either very fast, one-shot learning of the corresponding synaptic structure, or persistence induced by other mechanisms (for a review, see Wang, 2001). Learning induced modifications in the synaptic structure could still shape patterns of persistent activity induced by these other mechanisms.

Finally, the models that have been considered in this paper have discrete attractors, each attractor corresponding to a memorized object. In principle, spatial or parametric working memory requires models with continuous attractors. Several such models have been proposed recently (Seung, 1996; Camperi and Wang, 1998; Compte et al., 2000; other papers in this issue). It is important to note, however, that models with continuous attractors are intrinsically non-robust to heterogeneities in single neuron or synaptic properties, as noted in the above-mentioned studies. In the presence of such heterogeneities, continuous attractors usually break down to a set of discrete attractors which can be thought of as a discrete coarse-grained representation of the underlying continuous space of parameters. In the one-dimensional case, the attractor landscape would turn out to be rather similar to the attractor landscape obtained from learning sequences that was considered here.

Appendix

1. Single Neuron f–I Curve

For physiological reasons, the f–I curve should have the following properties. It should go to zero for sufficiently negative (hyperpolarizing) currents. It should have a convex region for intermediate (sub-threshold) currents, representing the noise-dominated firing region. It should have a concave region for sufficiently high (super-threshold) currents, representing the region where noise has little effect on the firing.

For mathematical reasons, we further require that it be continuous, that its derivative with respect to I be continuous, and that the functional dependence on I be as simple as possible, e.g. either quadratic or square root. The last property is motivated by our goal of finding explicit expressions for the firing frequencies of the neurons in the network, as far as is possible. Such a functional dependence ensures that the firing frequencies are always solutions of quadratic equations and, thus, explicit solutions are easily found.

The only transfer function satisfying these properties (up to an arbitrary translation of the input currents) is the following:

ϕ(I) = 0 for I ≤ 0,
ϕ(I) = νc (I/Ic)² for 0 < I ≤ Ic,
ϕ(I) = νc √(4I/Ic – 3) for I > Ic.    (1)

Besides its simplicity, the f–I curve has several appealing features. Its ‘noiseless’ counterpart,

ϕ0(I) = 0 for I ≤ 3Ic/4,
ϕ0(I) = νc √(4I/Ic – 3) for I > 3Ic/4,    (2)

has a square root behavior near the transition to firing, which is exactly what is expected for neuronal models with saddle-node bifurcation leading to firing, such as many single cell models (Ermentrout and Kopell, 1986; Ermentrout, 1996). In this sense, the transfer function is more realistic than the classical integrate-and-fire model, whose f–I curve in the absence of noise has a logarithmic behavior close to threshold. The quadratic behavior in the noise-dominated region in an intermediate regime has been shown to be a good approximation of the behavior of many model neurons (Hansel and van Vreeswijk, 2002). For simplicity, we set in the following Ic = 1 and νc = 1. To recover ‘physiological’ values, all currents can be multiplied by 0.2 nA and all rates by 10 Hz.

2. Non-selective Persistent Activity in Excitatory Networks

When the firing rate of the population is ν, the synaptic current is Jν and the firing rate (in absence of external inputs) is the solution of the equation ν = ϕ (Jν). One obvious solution to this equation is ν = 0 (silent network). It is the only solution for J < √3/2. At J = √3/2, there is a saddle-node bifurcation and two additional solutions appear

ν = 2J ± √(4J² – 3)    (3)

for √3/2 < J < 1. These two solutions appear in the suprathreshold region. The upper solution corresponds to ‘persistent activity’, while the lower solution corresponds to the frontier of the basins of attraction of silent and persistent states.

For J ≥ 1, the lower solution goes to the subthreshold region and becomes

ν = 1/J²    (4)

With an external stimulation, the suprathreshold solutions become

ν = 2J ± √(4J² + 4Iext – 3)    (5)

for Iext > 3/4 – J² and the subthreshold solutions are

ν = [1 – 2JIext ± √(1 – 4JIext)] / (2J²)    (6)

for Iext < 1/(4J).
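
A small numerical cross-check of these closed-form branches, in the dimensionless units Ic = νc = 1 used here (a sketch: each candidate root is kept only if its total current falls on the branch it was derived from, and is then verified against ν = ϕ(Jν + Iext); the parameter values are arbitrary):

```python
import math

def phi(I):                                          # dimensionless f-I curve (Ic = nu_c = 1)
    if I <= 0:
        return 0.0
    if I <= 1:
        return I ** 2
    return math.sqrt(4 * I - 3)

def solutions(J, Iext):
    """Fixed points of nu = phi(J * nu + Iext), assembled from the closed-form branches above."""
    sols = [0.0] if Iext < 0 else []                 # silent state survives for hyperpolarizing input
    d = 4 * J * J + 4 * Iext - 3                     # suprathreshold branch
    if d >= 0:
        for s in (+1, -1):
            nu = 2 * J + s * math.sqrt(d)
            if J * nu + Iext >= 1:
                sols.append(nu)
    d = 1 - 4 * J * Iext                             # subthreshold branch
    if d >= 0:
        for s in (+1, -1):
            nu = (1 - 2 * J * Iext + s * math.sqrt(d)) / (2 * J * J)
            if 0 <= J * nu + Iext <= 1:
                sols.append(nu)
    return sorted(sols)

for J, Iext in ((0.8, 0.0), (1.0, 0.1), (1.2, -0.1)):
    pairs = [(round(nu, 3), round(phi(J * nu + Iext), 3)) for nu in solutions(J, Iext)]
    print(f"J = {J}, Iext = {Iext}: (nu, phi) pairs {pairs}")
```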

3. Spontaneous Activity in Inhibition-dominated Networks

Subthreshold spontaneous activity is given by

νsp = [(√(1 + 4(JI – JE)Iext) – 1) / (2(JI – JE))]²    (7)

Suprathreshold spontaneous activity is

νsp = –2(JI – JE) + √(4(JI – JE)² + 4Iext – 3)    (8)

4. Structure in Excitatory Recurrence Induced by Learning and Sparse Persistent Activity

We consider a network state in which one population has activity ν+, while all others have activity ν−. When f is small, we find the following suprathreshold solutions for network activity:

ν+ = 2JS ± √(4JS² – 3 + 4√νsp – 4JSνsp)    (9)

ν− = νsp    (10)

when

4JS² – 3 + 4√νsp – 4JSνsp > 0    (11)

The subthreshold solutions are

ν+ = (1/JS – √νsp)²    (12)

ν− = νsp    (13)

when

JS > 1/(1 + √νsp)    (14)

When there is no spontaneous activity, νsp = 0, equations (9) and (12) reduce to equations (3) and (4).

I thank Daniel Amit for discussions and comments on a previous version of the manuscript.

Figure 1. The single neuron f–I curve ϕ(I). For input currents below the threshold current Ic (noise-induced firing), the f–I curve is supralinear. Above the threshold, the f–I curve is sub-linear.

Figure 2. The simplest bistable network is a purely excitatory network with uniform coupling. (A) Network architecture. (B) The fixed points of network activity can be found plotting a straight line whose slope is inversely proportional to the synaptic excitatory feedback against the f–I curve. (C) Solutions of network activity versus strength of excitatory coupling. Full line: persistent activity (stable fixed point). Dotted line: basin of attraction of persistent activity (unstable fixed point). Note the high slope of the persistent activity versus coupling strength curve beyond the point where it appears at J ∼ 17 pA.s. (D) Solutions of network activity versus strength of external input. This shows the basic mechanism by which the network can switch between the up and the down states. Thin curve: J = 0 (single cell f–I curve). Medium curve: J = 0.5Jc = 10 pA.s (bistable region is about to appear at Iext = 0.1 nA). Thick curve: J = Jc = 20 pA.s (bistable region between Iext = –0.05 and Iext = 0.05 nA). For J = Jc, an external input current above Iext = 0.05 nA switches the network to the up state. From the up state, a hyperpolarizing input Iext = –0.05 nA switches the network to the down state. For intermediate values of the input current, the network is bistable.

Figure 3. Spontaneous activity in an excitatory–inhibitory network. (A) Architecture of the excitatory–inhibitory network. (B) Spontaneous activity as a function of external input, for recurrent inhibition exceeding excitation by 20 pA.s. The dotted curve represents the f–I curve of an isolated cell. The strong inhibitory feedback acts to reduce the firing rates of the cells in the network. Compare with Figure 2D.

Figure 4. Cortical network with selective sub-populations of excitatory cells. (A) Functional architecture of a network with four subpopulations coding for objects a, b, c, d. The relative strengths of the recurrent connections between excitatory populations are indicated by the thickness of the corresponding arrows. (B) Stationary states of network activity are again obtained by the intersection of the f–I curve and a straight line (cf. Fig. 2B). The straight line is constrained by inhibition to pass through the point of the f–I curve where the firing rate is equal to spontaneous activity and has a slope inversely proportional to the selective synaptic strength JS. (C) Rates in stationary states versus synaptic strength. Spontaneous activity (thin lines), persistent activity (thick lines) and unstable fixed point (dotted line) as a function of synaptic structure parameter JS. There are five sets of curves, corresponding to spontaneous activity levels of 0, 2.5, 5, 7.5 and 10 Hz. (D) ‘Phase diagram’ in the (νsp, JS) plane. The size of the multistable region decreases with νsp and vanishes when νsp = 10 Hz.

Figure 5. Spontaneous and persistent activity as a function of external input. (A) Non-selective external input: input to the whole network. Thin black line: spontaneous activity. Medium line: persistent activity, JS = 0.75Jc. Thick line: persistent activity, JS = Jc. The arrows show how persistent and spontaneous activity are affected by a non-selective external input, starting from a ‘control’ parameter set indicated by the full circles. A decrease in external input decreases both spontaneous and persistent activity, as expected. An increase in external input decreases persistent activity. Above some level of stimulation, spontaneous activity becomes unstable and the network jumps to one of the persistent activity states (vertical arrow). (B) Selective external input: input to one of the subpopulations. Activity of subpopulation under selective stimulation. JS = 0.7Jc, νsp = 2.5 Hz (indicated as dashed line).

Figure 6. The pair-associate protocol. (A) Architecture of a network after learning of pair associates. (B) Firing rates in memory states, as a function of pair learning index. All the states indicated in this graph are stable states. Black curve: spontaneous activity (activity in subpopulations A, A′, B, B′ shown schematically in inset SAS). Individual attractor state (IAS, see inset) between a = 0 and a ≈ 0.06: red line (neurons selective for cue A), red dashed line (neurons selective for pair associate A′) and red dotted line (other neurons). Pair attractor state (PAS, see inset): green line (neurons selective for both cue and pair associate of cue), green dotted line (other neurons). (C) Diagram showing the regions where the different attractors live, in the space of synaptic variables (JS, a). The thick black curve shows the boundary of the region of existence of the pair states. Thin lines show the boundary of the region of existence of the individual states.

Figure 7. Pair associate task with color switch. Effect of a weak biased input to the class of the paired associate in the middle of the delay period, as a function of a and the strength of the biased input (given to half of the network). After presentation of the cue, the network goes to the asymmetric A attractor in which cue neurons are highly active and pair-associate neurons are weakly active. After a weak biased stimulation (similar to color in the experiment of Naya et al., 1996), the network switches to S (non-selective spontaneous activity), A (i.e. stays in same attractor), A′ (goes to the asymmetric attractor in which pair associate has high activity, cue has low activity), or P (symmetrical pair attractor, both types of neurons have high activity).

Figure 8. Fixed sequence of cue stimuli. (A) Architecture of the network after learning. (B) Persistent activity patterns, labeled by the number of populations strongly active in each pattern (1, red; 2, blue; 3, green; 4, brown; see insets), as a function of the sequence learning index. The transitions at which states appear or disappear are marked with dotted vertical lines. Similar transitions, in which more populations become activated, occur at higher values of a.
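
For the fixed-sequence protocol, a similar caricature can be built by arranging several populations along the training sequence and potentiating nearest-neighbor couplings by the sequence learning index a. In the sketch below (Python/NumPy), the chain of 9 populations, the coupling scheme and all parameter values are assumptions, and the counts and transition values of a differ from those of panel (B); still, cueing the middle population and counting how many populations remain strongly active in the delay shows the number growing in discrete steps as a increases, qualitatively like the transitions marked in the figure.

    import numpy as np

    def phi(x):
        return 50.0 / (1.0 + np.exp(-(x - 10.0)))

    def active_populations(a, n=9, J_s=0.35, J_inh=0.05, nu_ext=6.0,
                           tau=0.010, dt=0.001, T=3.0):
        # nearest neighbors in the training sequence get coupling a * J_s
        W = J_s * (np.eye(n) + a * (np.eye(n, k=1) + np.eye(n, k=-1)))
        nu = np.full(n, 1.0)
        cue = n // 2                          # cue the middle population
        for step in range(int(T / dt)):
            t = step * dt
            ext = np.full(n, nu_ext)
            if t < 0.5:
                ext[cue] += 8.0               # transient cue stimulation
            nu = nu + (dt / tau) * (-nu + phi(W @ nu - J_inh * nu.sum() + ext))
        return int(np.sum(nu > 10.0))

    for a in [0.1, 0.3, 0.6]:
        count = active_populations(a)
        print(f"a = {a:.1f}: {count} population(s) strongly active in the delay")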

References

Amit D (1989) Modeling brain function. Cambridge: Cambridge University Press.
Amit DJ (1995) The Hebbian paradigm reintegrated: local reverberations as internal representations. Behav Brain Sci 18:617.
Amit DJ, Brunel N (1995) Learning internal representations in an attractor neural network with analogue neurons. Network 6:359–388.
Amit DJ, Brunel N (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex 7:237–252.
Amit DJ, Fusi S (1994) Dynamic learning in neural networks with material synapses. Neural Comput 6:957–982.
Amit DJ, Tsodyks MV (1991) Quantitative study of attractor neural network retrieving at low spike rates I: substrate — spikes, rates and neuronal gain. Network 2:259–274.
Amit DJ, Brunel N, Tsodyks MV (1994) Correlations of cortical Hebbian reverberations: experiment vs theory. J Neurosci 14:6435–6445.
Arnsten AFT (1998) Catecholamine modulation of prefrontal cortical cognitive function. Trends Cogn Sci 2:436–447.
Asaad WF, Rainer G, Miller EK (1998) Neural activity in the primate prefrontal cortex during associative learning. Neuron 21:1399–1407.
Brunel N (1996) Hebbian learning of context in recurrent neural networks. Neural Comput 8:1677–1710.
Brunel N (2000) Persistent activity and the single cell f–I curve in a cortical network model. Network 11:261–280.
Brunel N, Wang X-J (2001) Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci 11:63–85.
Brunel N, Carusi F, Fusi S (1998) Slow stochastic Hebbian learning of classes in recurrent neural networks. Network 9:123–152.
Camperi M, Wang X-J (1998) A model of visuospatial short-term memory in prefrontal cortex: recurrent network and cellular bistability. J Comput Neurosci 5:383–405.
Chafee MV, Goldman-Rakic PS (1998) Matching patterns of activity in primate prefrontal area 8a and parietal area 7ip neurons during a spatial working memory task. J Neurophysiol 79:2919–2940.
Compte A, Brunel N, Goldman-Rakic PS, Wang X-J (2000) Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 10:910–923.
Durstewitz D, Kelc M, Güntürkun O (1999) A neurocomputational theory of the dopaminergic modulation of working memory functions. J Neurosci 19:2807–2822.
Durstewitz D, Seamans JK, Sejnowski TJ (2000) Dopamine-mediated stabilization of delay-period activity in a network model of prefrontal cortex. J Neurophysiol 83:1733–1750.
Durstewitz D, Seamans JK, Sejnowski TJ (2000) Neurocomputational models of working memory. Nat Neurosci Suppl:1184–1191.
Erickson CA, Desimone R (1999) Responses of macaque perirhinal neurons during and after visual stimulus association learning. J Neurosci 19:10404–10416.
Ermentrout GB (1996) Type I membranes, phase resetting curves, and synchrony. Neural Comput 8:979–1001.
Ermentrout GB, Kopell N (1986) Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math 46:233–253.
Funahashi S, Bruce CJ, Goldman-Rakic PS (1989) Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol 61:331–349.
Funahashi S, Bruce CJ, Goldman-Rakic PS (1990) Visuospatial coding in primate prefrontal neurons revealed by oculomotor paradigms. J Neurophysiol 63:814–831.
Funahashi S, Bruce CJ, Goldman-Rakic PS (1991) Neuronal activity related to saccadic eye movements in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol 65:1464–1483.
Fuster JM (1995) Memory in the cerebral cortex. Cambridge, MA: MIT Press.
Fuster JM, Alexander G (1971) Neuron activity related to short-term memory. Science 173:652–654.
Fuster JM, Bauer RH, Jervey JP (1982) Cellular discharge in the dorsolateral prefrontal cortex of the monkey in cognitive tasks. Exp Neurol 77:679–694.
Goldman-Rakic PS (1995) Cellular basis of working memory. Neuron 14:477–485.
Griniasty M, Tsodyks MV, Amit DJ (1993) Conversion of temporal correlations between stimuli to spatial correlations between attractors. Neural Comput 5:1–17.
Gutkin BS, Laing CR, Colby CL, Chow CC, Ermentrout GB (2001) Turning on and off with excitation: the role of spike time asynchrony and synchrony in sustained neural activity. J Comput Neurosci 11:121–134.
Hansel D, Mato G (2001) Existence and stability of persistent states in large neuronal networks. Phys Rev Lett 86:4175–4178.
Hansel D, van Vreeswijk C (2002) How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J Neurosci 22:5118–5128.
Hebb DO (1949) Organization of behavior. New York: Wiley.
Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79:2554–2558.
Koulakov A (2001) Properties of synaptic transmission and the global stability of delayed activity states. Network 12:47–74.
Krimer LS, Jacob RL, Goldman-Rakic PS (1997) Quantitative three-dimensional analysis of the catecholaminergic innervation of identified neurons in the macaque prefrontal cortex. J Neurosci 17:7450–7461.
Malinow R, Mainen ZF, Hayashi Y (2000) LTP mechanisms: from silence to four-lane traffic. Curr Opin Neurobiol 10:352–357.
Maurice N, Tkatch T, Meisler M, Sprunger LK, Surmeier DJ (2001) D1/D5 dopamine receptor activation differentially modulates rapidly inactivating and persistent sodium currents in prefrontal cortex pyramidal neurons. J Neurosci 21:2268–2277.
Miller EK, Erickson CA, Desimone R (1996) Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J Neurosci 16:5154–5167.
Miyashita Y (1988) Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature 335:817–820.
Miyashita Y, Chang HS (1988) Neuronal correlate of pictorial short-term memory in the primate temporal cortex. Nature 331:68–70.
Mongillo G, Amit DJ, Brunel N (2003) Retrospective and prospective persistent activity induced by Hebbian learning in a recurrent cortical network. Eur J Neurosci (in press).
Nakamura K, Kubota K (1995) Mnemonic firing of neurons in the monkey temporal pole during a visual recognition memory task. J Neurophysiol 74:162–178.
Naya Y, Sakai K, Miyashita Y (1996) Activity of primate inferotemporal neurons related to a sought target in pair-association task. Proc Natl Acad Sci USA 93:2664–2669.
Rainer G, Rao SC, Miller EK (1999) Prospective coding for objects in primate prefrontal cortex. J Neurosci 19:5493–5505.
Rauch A, La Camera G, Luscher H-R, Senn W, Fusi S (2003) Neocortical pyramidal cells respond as integrate-and-fire neurons to in-vivo-like input currents. J Neurophysiol 90:1598–1612.
Ricciardi LM (1977) Diffusion processes and related topics in biology. Berlin: Springer.
Sakai K, Miyashita Y (1991) Neural organization for the long-term memory of paired associates. Nature 354:152–155.
Sesack SR, Snyder CL, Lewis DA (1995) Axon terminals immunolabeled for dopamine or tyrosine hydroxylase synapse on GABA-immunoreactive dendrites in rat and monkey cortex. J Comp Neurol 363:264–280.
Seung HS (1996) How the brain keeps the eyes still. Proc Natl Acad Sci USA 93:13339–13344.
Turrigiano GG, Nelson SB (2000) Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol 10:358–364.
van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274:1724–1726.
van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10:1321–1371.
Wang X-J (1999) Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 19:9587–9603.
Wang X-J (2001) Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci 24:455–463.
Williams GV, Goldman-Rakic PS (1995) Modulation of memory fields by dopamine D1 receptors in prefrontal cortex. Nature 376:572–575.
Yakovlev V, Fusi S, Berman E, Zohary E (1998) Inter-trial neuronal activity in inferior temporal cortex: a putative vehicle to generate long-term visual associations. Nat Neurosci 1:310–317.
Yang CR, Seamans JK, Gorelova N (1996) Electrophysiological and morphological properties of layer V–VI principal pyramidal cells in rat prefrontal cortex in vitro. J Neurosci 16:1904–1921.
Zahrt J, Taylor JR, Mathew RG, Arnsten AFT (1997) Supranormal stimulation of D1 dopamine receptors in the rodent prefrontal cortex impairs spatial working memory performance. J Neurosci 17:8528–8535.