Abstract

Recent studies have shown that the reverberation underlying mnemonic persistent activity must be slow, to ensure the stability of a working memory system and to give rise to long neural transients capable of accumulating information over time. Is the slower the underlying process, the better? To address this question, we investigated 3 slow biophysical mechanisms that are activity-dependent and prominently present in the prefrontal cortex: depolarization-induced suppression of inhibition (DSI), calcium-dependent nonspecific cationic current (ICAN), and short-term facilitation (STF). Using a spiking network model for spatial working memory, we found that these processes enhance memory accuracy by counteracting noise-induced drifts, heterogeneity-induced biases, and distractors. Furthermore, the incorporation of DSI and ICAN enlarges the range of the network's parameter values compatible with working memory function. However, when a progressively slower process dominates the network, it becomes increasingly difficult to erase a memory trace. We demonstrate this accuracy–flexibility tradeoff quantitatively and interpret it using a state-space analysis. Our results support the scenario in which N-methyl-d-aspartate receptor-dependent recurrent excitation is the workhorse for the maintenance of persistent activity, whereas slow synaptic or cellular processes contribute to the robustness of mnemonic function, in a tradeoff that can potentially be adjusted according to behavioral demands.

Introduction

Working memory is thought to be represented by persistent activity (Fuster and Alexander 1971; Gnadt and Andersen 1988; Funahashi et al. 1989; Amit 1995; Goldman-Rakic 1995; Miller et al. 1996; Romo et al. 1999; Wang 2001; Major and Tank 2004). Such activity patterns are likely sustained by positive feedback processes in a neural circuit, but the precise mechanisms remain unresolved. Computational models stressed the role of recurrent synaptic excitation (Amit 1995; Amit and Brunel 1997; Camperi and Wang 1998; Durstewitz et al. 2000; Brunel and Wang 2001) that depends on the N-methyl-d-aspartate (NMDA) receptors (Wang 1999; Compte et al. 2000; Lim and Goldman 2013), a prediction supported by findings from a recent experiment (Wang et al. 2013).

Other synaptic and cellular processes present in the prefrontal cortex (PFC) are likely involved in mnemonic persistent activity, including short-term facilitation (STF; Hempel et al. 2000; Wang et al. 2006; Mongillo et al. 2008; Szatmary and Izhikevich 2010; Hansel and Mato 2013), depolarization-induced suppression of inhibition (DSI; Carter and Wang 2007), and calcium-dependent nonspecific cationic current (ICAN; Egorov et al. 2002; Tegnér et al. 2002; Fransén et al. 2006; Yoshida and Hasselmo 2009; Kulkarni et al. 2011; Kalmbach et al. 2013). STF and ICAN provide feedback excitation, whereas DSI is a disinhibition process. All are activity-dependent and thus become selective for neurons that show elevated persistent activity. Furthermore, these mechanisms operate with biophysical time constants much slower than that of the NMDA receptor-mediated synaptic excitation. Therefore, a long-standing question (Major and Tank 2004) has gained urgency: what are the relative contributions to working memory function of these slow synaptic and cellular processes versus the recurrent network mechanism?

We analyzed the role of slow biophysical processes in mnemonic persistent activity, using a biologically based continuous spiking circuit model for spatial working memory. This model system is endowed with a resting state and a continuum of spatially tuned persistent activity patterns (“bump attractors”) for memory storage of an analog quantity such as spatial location (Camperi and Wang 1998; Compte et al. 2000; Gutkin et al. 2001; Laing and Chow 2001; Renart et al. 2003; Carter and Wang 2007; Wei et al. 2012; Murray et al. 2014). During a mnemonic delay period, a bump attractor drifts over time (Compte et al. 2000; Carter and Wang 2007; Murray et al. 2014), resulting in random deviations of the memory away from the to-be-remembered sensory cue. Additionally, heterogeneity in single neurons disrupts the continuous family of attractors (Ben-Yishai et al. 1995; Tsodyks and Sejnowski 1995; Zhang 1996), leading to systematic drifts of memory trace (Renart et al. 2003; Itskov et al. 2011). Furthermore, the system may be perturbed by external distractor stimuli. Interestingly, we found that while STF, DSI, and ICAN enhance the accuracy of a memory trace, they hinder rapid memory erasure and network reset. The latter is functionally desirable, since behavior demands that brief transient inputs should be sufficient to switch a working memory system from its resting state to a memory state or vice versa (Compte et al. 2000; Gutkin et al. 2001). Therefore, our study reveals a fundamental tradeoff between robustness and flexibility of working memory function instantiated by slow neurobiological mechanisms in a recurrent network.

Materials and Methods

In an oculomotor delayed response (ODR) task, the sensory stimulus is a visual cue and the motor response is a saccade to the cued location. A subject is briefly shown a visual cue that must be remembered during a delay period of a few seconds. This memory is subsequently used to perform a memory-guided behavioral response (the saccade). During the delay period, many neurons in the dorsolateral PFC show high persistent activity that is spatially selective (Funahashi et al. 1989). The present work uses a spiking network model for the ODR task that has been tested thoroughly (Compte et al. 2000; Carter and Wang 2007; Wei et al. 2012; Murray et al. 2014). The parameters were modified starting with the original “control parameter set” in Compte et al. (2000). The model consists of a population of excitatory pyramidal cells and a population of inhibitory interneurons. Pyramidal cells are arranged in a ring-like fashion and labeled by their preferred cue direction, from 0 to 360°. A schematic of the network structure is shown in Figure 1A.

Figure 1.

Persistent activity and random drifts of a memory trace in a spiking network model for spatial working memory. (A) Schematic of the network connectivity (all-to-all) between the excitatory (blue circles) and inhibitory (yellow circle) neurons. Light gray and black connectors indicate excitatory and inhibitory synapses, respectively. Each excitatory cell is selective for a direction (black arrows), and the strength of connection between 2 excitatory cells is a decreasing function of the difference in their preferred directions. (B) Lower panel: applied current to excitatory cells. The first, positive step current corresponds to cue presentation; the second, negative current represents a shutdown signal. Upper panel: average firing rate of a group of 200 neurons (with preferred directions around the cue location) during a trial. The activity ramps up during cue presentation, persists during the delay, and is reset to the spontaneous baseline by the shutdown pulse. (C) Left panel: spatiotemporal activity pattern of excitatory cells from the same simulation as in (B) (cue presented at 180°). Each dot represents a spike. The yellow line is the population vector, which traces the peak of the bell-shaped persistent activity pattern (bump attractor) as the internal representation of the cue location. Right panel: population firing profile, averaged over the delay period. (D) Remembered cue as measured by the population vector in 20 sample trials with the same cue location. The memory traces drift away from the initial cue during the delay; the variance of the population vector (VPV) across trials quantifies this deviation: the smaller the VPV, the more accurate the memory readout. (E) Drift magnitude at 5–6 s of the delay period, as measured by the VPV (N = 500 trials), plotted as a function of the time constant of the NMDAR-mediated synaptic excitation, τS. The VPV decreases steeply with increasing τS; the fitted line is an exponential function to guide the eye.

Single Neuron Model

Both pyramidal cells and interneurons are modeled as leaky integrate-and-fire units (Tuckwell 1988). Each type of cell is characterized by a total capacitance Cm, total leak conductance gL, leak reversal potential VL, threshold potential Vth, reset potential Vres, and refractory time τref. The values used in the simulations are Cm = 0.5 nF, gL = 25 nS, VL = −70 mV, Vth = −50 mV, Vres = −60 mV, and τref = 2 ms for pyramidal cells; and Cm = 0.2 nF, gL = 20 nS, VL = −70 mV, Vth = −50 mV, Vres = −60 mV, and τref = 1 ms for interneurons. The subthreshold membrane potential, V(t), follows:

$$C_m \frac{dV(t)}{dt} = -g_L\big(V(t) - V_L\big) - I_{\mathrm{syn}}(t)$$
where Isyn(t) is the total synaptic current to the cell.
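
For concreteness, a minimal sketch of this update (not the authors' code) is shown below, assuming forward-Euler integration with the pyramidal-cell parameters listed above; the function name and the I_syn placeholder are illustrative.

```python
# Minimal sketch of the leaky integrate-and-fire update, using forward Euler;
# parameter values are those quoted for pyramidal cells (SI units).
Cm, gL, VL = 0.5e-9, 25e-9, -70e-3         # F, S, V
Vth, Vres, tau_ref = -50e-3, -60e-3, 2e-3  # V, V, s
dt = 0.02e-3                               # 0.02-ms time step

def lif_step(V, I_syn, refractory_left):
    """Advance V by one time step; return (V, spiked, refractory_left)."""
    if refractory_left > 0.0:
        return Vres, False, max(0.0, refractory_left - dt)
    V = V + dt * (-gL * (V - VL) - I_syn) / Cm
    if V >= Vth:
        return Vres, True, tau_ref         # spike: reset and start refractory period
    return V, False, 0.0
```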

Synaptic Interactions

The network consists of NE = 2048 pyramidal cells and NI = 512 inhibitory interneurons. Neurons receive recurrent, background, and external inputs. Excitatory synaptic currents are mediated by 2-amino-3-(3-hydroxy-5-methyl-isoxazol-4-yl) propanoic acid receptors (AMPARs) and NMDARs, and inhibitory synaptic currents are mediated by γ-aminobutyric acid type A receptors (GABAARs). The total synaptic current to each neuron is 

$$I_{\mathrm{syn}} = I_{\mathrm{NMDA}} + I_{\mathrm{AMPA}} + I_{\mathrm{GABA}} + I_{\mathrm{ext}}$$
where Iext delivers stimulus input to pyramidal cells. The dynamics of synaptic currents for neuron i follow: 
$$I_{i,\mathrm{AMPA}} = \big(V_i - V_E\big) \sum_j g_{ji,\mathrm{AMPA}}\, s_{j,\mathrm{AMPA}}$$

$$I_{i,\mathrm{NMDA}} = \frac{\big(V_i - V_E\big) \sum_j g_{ji,\mathrm{NMDA}}\, s_{j,\mathrm{NMDA}}}{1 + [\mathrm{Mg}^{2+}]\exp\!\big(-0.062\, V_i/\mathrm{mV}\big)/3.57}$$

$$I_{i,\mathrm{GABA}} = \big(V_i - V_I\big) \sum_j g_{ji,\mathrm{GABA}}\, s_{j,\mathrm{GABA}}$$
where VE = 0 mV and VI = −70 mV and gji,syn denotes the synaptic conductance strength on neuron i from neuron j. NMDAR-mediated currents exhibit voltage dependence controlled by the extracellular magnesium concentration [Mg2+] = 1 mM (Jahr and Stevens 1990).

Given a spike train {tk} in the presynaptic neuron j, the gating variables sj,AMPA and sj,GABA for AMPAR- and GABAR-mediated currents, respectively, are modeled as: 

$$\frac{ds}{dt} = \sum_k \delta(t - t_k) - \frac{s}{\tau_s}$$

The gating variable sj,NMDA for NMDAR-mediated current is modeled as: 

$$\frac{dx}{dt} = \alpha_x \sum_k \delta(t - t_k) - \frac{x}{\tau_x}$$

$$\frac{ds}{dt} = \alpha_s\, x\,(1 - s) - \frac{s}{\tau_s}$$
with αx = 1 (dimensionless), τx = 2 ms, and αs = 0.5 kHz. The decay time constant τs is 2 ms for AMPA, 10 ms for GABA, and 100 ms for NMDA. For simplicity, background inputs are mediated entirely by AMPARs, and recurrent excitatory inputs are mediated entirely by NMDARs, as they are critical for the stability of persistent activity (Wang 1999; Compte et al. 2000; Wang et al. 2013). All cells receive background excitatory inputs from other cortical areas. This overall external input is modeled as uncorrelated Poisson spike trains to each neuron at a rate of νext = 1800 Hz per cell, with AMPAR maximal conductances of 3.1 nS on pyramidal cells and 2.38 nS on interneurons.
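
A minimal sketch of these gating dynamics, assuming forward-Euler integration, is given below; the function names are ours, and `spiked` indicates whether the presynaptic neuron fired during the current time step.

```python
# Sketch (assumed, not the authors' code) of the synaptic gating dynamics.
dt = 0.02                        # ms
tau_ampa, tau_gaba = 2.0, 10.0   # ms
alpha_x, tau_x = 1.0, 2.0        # NMDA rise variable
alpha_s, tau_nmda = 0.5, 100.0   # alpha_s in kHz (1/ms), decay in ms

def update_gate(s, spiked, tau):
    """AMPA/GABA gate: jumps by 1 at each presynaptic spike, decays with tau."""
    s -= dt * s / tau
    return s + 1.0 if spiked else s

def update_nmda(x, s, spiked):
    """Two-variable NMDA gate; the (1 - s) factor produces saturation."""
    x -= dt * x / tau_x
    if spiked:
        x += alpha_x
    s += dt * (alpha_s * x * (1.0 - s) - s / tau_nmda)
    return x, s
```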

Network Connectivity

As stated above, pyramidal cells are organized in a ring architecture and are tuned to the angular location on a circle (0–360°, Fig. 1A), with a uniform distribution of their preferred angles. The network structure follows a columnar architecture, such that pyramidal cells with similar stimulus selectivity are preferentially connected to each other. The synaptic conductance on neuron i from neuron j is gji,syn = W(θj − θi)Gsyn, where θi is the preferred angle of neuron i, and W(θj − θi) is the connectivity profile, normalized such that:

$$\frac{1}{360^\circ}\int_0^{360^\circ} W(\theta)\, d\theta = 1$$

For pyramidal-to-pyramidal connections, W(θj − θi) = J− + (J+ − J−) exp[−(θj − θi)2/2σ2]. We use J+ = 1.62 and σ = 14.4°; J− is determined from the normalization condition on W. All other synaptic connection profiles are unstructured. Synaptic conductance strengths are GEE = 0.381 nS, GEI = 0.292 nS, GIE = 1.336 nS, and GII = 1.024 nS.
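
The following sketch illustrates one way to build this connectivity footprint and solve the normalization condition for J− numerically; the variable names are ours, not from the original implementation.

```python
import numpy as np

# Sketch of the pyramid-to-pyramid connectivity profile; J_minus is obtained
# from the normalization condition (mean of W over the ring equals 1).
N_E = 2048
theta = np.arange(N_E) * 360.0 / N_E            # preferred directions (deg)
J_plus, sigma = 1.62, 14.4

d = (theta - theta[0] + 180.0) % 360.0 - 180.0  # angular distance to one cell
gauss = np.exp(-d**2 / (2.0 * sigma**2))

# Require mean[ J_minus + (J_plus - J_minus) * gauss ] = 1
J_minus = (1.0 - J_plus * gauss.mean()) / (1.0 - gauss.mean())
W = J_minus + (J_plus - J_minus) * gauss        # footprint of one presynaptic cell
```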

Stimulus

Inputs are modeled as an injected current with a Gaussian profile, I(θ) = I0 exp[−(θ − θc)2/2σI2], where the maximum current I0 = 200 pA unless otherwise noted, θc is the stimulus location, and the width parameter σI = 18°.

Slow Calcium-Dependent Nonspecific Cationic Current

ICAN can trigger a sustained depolarization outlasting the stimulus for several seconds (Haj-Dahmane and Andrade 1998; Strübing et al. 2001; Egorov et al. 2002; Tegnér et al. 2002). The activation of this current requires a rise in intracellular calcium. In some simulations (results in Figs 2, 4, and 5), ICAN was added to the network model (described above) according to the following equation (Tegnér et al. 2002): 

$$I_{\mathrm{CAN}} = g_{\mathrm{CAN}}\, m_{\mathrm{CAN}}^2\, (V - E_{\mathrm{CAN}})$$

$$\frac{dm_{\mathrm{CAN}}}{dt} = \phi_{\mathrm{CAN}} \times \frac{m_\infty([\mathrm{Ca}^{2+}]) - m_{\mathrm{CAN}}}{\tau_{\mathrm{CAN}}([\mathrm{Ca}^{2+}])}$$

$$m_\infty([\mathrm{Ca}^{2+}]) = \frac{\alpha[\mathrm{Ca}^{2+}]^2}{\alpha[\mathrm{Ca}^{2+}]^2 + \beta}$$

$$\tau_{\mathrm{CAN}}([\mathrm{Ca}^{2+}]) = \frac{1}{\alpha[\mathrm{Ca}^{2+}]^2 + \beta}$$
with gCAN = 1.5 nS, ECAN = -20 mV, β = 0.002 ms−1, α = 0.0056 ms−1 μM−2. ϕCAN is used to adjust the effective time constant of ICAN, without changing the steady-state levels of activity.

Figure 2.

Tradeoff between memory accuracy and flexibility with ICAN. (A) An integrate-and-fire neuron model endowed with ICAN. A step current (bottom panel) induces initial firing activity (upper panel). Each spike triggers a small calcium influx (middle upper panel), which leads to a slow activation of ICAN (middle lower panel). When the applied current stops, the high level of ICAN activation is sufficient to induce an afterdischarge of spikes. (B) Variance of the remembered cue location (VPV) during the delay period with max τCAN of 1 s (black trace) and 3 s (red trace) (N = 500 trials). A longer time constant leads to smaller random drifts after an initial time needed for the mechanism to take effect. (C) With max τCAN = 500 ms, a negative pulse of 200 ms to excitatory cells is required to shut down the bump state at the end of the delay. Lower panel: applied current with 2 negative pulses lasting 100 ms (red) and 200 ms (blue). Middle and upper panels: the average population firing rates and ICAN activation, respectively, of 200 cells in the bump state around the initial cue location, under the 2 conditions (same color scheme, N = 10 trials). With a 100-ms pulse, ICAN activation decays by a small amount but increases again immediately after the shutdown input is over, providing the positive feedback necessary for the return of the high-firing memory state. After a longer shutdown pulse (200 ms), the activation decays to such an extent that the network ultimately settles into the resting state. (D) State-space analysis, with the population rate and the ICAN activation shown in (C) plotted against each other in phase space. Each trajectory corresponds to a trial and starts immediately at the shutdown pulse offset. Red trajectories evolve to the bump attractor; blue ones proceed to shutdown (resting state). A clear diagonal boundary separates the 2 attractors (dashed black curve), suggesting the presence of an unstable manifold. (E) Tradeoff between the decrease in the variance of the remembered cue location (VPV) and the minimum time to shutdown (tSHUT,MIN) with increasing max τCAN. Open circles were determined as in Figure 1E, with max τCAN between 50 ms and 4 s. Filled circles show tSHUT,MIN (see Materials and Methods) (N = 500 trials). The 2 data sets are fitted with a sum of 2 exponentials (VPV) or a single exponential (tSHUT,MIN). A compromise corresponds to an optimal value of max τCAN ≈ 1.5 s.

Calcium influx to pyramidal cells is triggered by spikes and obeys first-order kinetics as follows (Liu and Wang 2001): 

$$\frac{d[\mathrm{Ca}^{2+}]}{dt} = \alpha_{\mathrm{Ca}} \sum_i \delta(t - t_i) - \frac{[\mathrm{Ca}^{2+}]}{\tau_{\mathrm{Ca}}}$$

When an action potential fires (at time ti), [Ca2+] is incremented by αCa (0.2 μM). The calcium concentration decays back to zero exponentially, with a time constant τCa (240 ms).
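
Putting the ICAN and calcium equations together, a minimal per-cell update could look as follows; this is a sketch assuming forward-Euler integration, not the authors' Brian code.

```python
# Minimal sketch (assumed) of the ICAN and calcium dynamics for one cell.
dt = 0.02                        # ms
g_can, E_can = 1.5, -20.0        # nS, mV
alpha, beta = 0.0056, 0.002      # 1/(ms uM^2), 1/ms
phi_can = 1.0
alpha_ca, tau_ca = 0.2, 240.0    # uM per spike, ms

def update_ican(ca, m, V, spiked):
    """Return updated calcium (uM), ICAN gate, and the current (nS * mV = pA)."""
    if spiked:
        ca += alpha_ca                          # calcium influx at each spike
    ca -= dt * ca / tau_ca                      # exponential decay
    m_inf = alpha * ca**2 / (alpha * ca**2 + beta)
    tau_can = 1.0 / (alpha * ca**2 + beta)
    m += dt * phi_can * (m_inf - m) / tau_can   # slow, calcium-gated activation
    i_can = g_can * m**2 * (V - E_can)          # depolarizing for V < E_CAN
    return ca, m, i_can
```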

Depolarization-Induced Suppression of Inhibition

DSI is detected in various regions of the brain (Llano et al. 1991; Pitler and Alger 1992; Trettel and Levine 2003). DSI is dependent on endocannabinoids that are released by active pyramidal cells, triggered by calcium influx (Ohno-Shosaku et al. 2001; Wilson and Nicoll 2001; Wilson et al. 2001). These endogenous cannabinoids retrogradely activate type 1 cannabinoid receptors (CB1R) located on the axon terminals of interneurons that coexpress GABA and cholecystokinin (Katona et al. 1999; Marsicano and Lutz 1999). The activation of CB1R results in the suppression of transmitter release to postsynaptic pyramidal cells.

DSI was added to the network model (Figs 3–5) as previously described in Carter and Wang (2007) and the same parameters were used, unless noted otherwise. Briefly, the inhibitory synaptic conductance gGABA to a pyramidal cell is multiplied by a factor D, which is proportional to the fraction of inhibitory synapses that are sensitive to cannabinoid and their presynaptic release probability. D varies between 0 and 1. There is no DSI effect if D is set to 1. DSI is the fractional reduction in inhibitory event size or frequency. The dynamics of D are described by the following equation (Carter and Wang 2007): 

$$\frac{dD}{dt} = \phi_D \times \frac{1 - D}{\tau_D} - \beta_D \times [\mathrm{Ca}^{2+}] \times (D - D_{\min})$$
where [Ca2+] represents the intracellular calcium concentration in the pyramidal cell and has the same kinetics as in the ICAN model. When [Ca2+] accumulates, D decreases at a rate controlled by βD (1.66 × 10−5 μM−1 ms−1), leading to disinhibition. D is bounded below by Dmin, which determines the maximum disinhibition and biophysically corresponds to the maximum fraction of synapses that are cannabinoid-sensitive multiplied by the maximal reduction in release probability at each synapse due to DSI. Unless stated otherwise, Dmin was set to 0.96, corresponding to a maximum DSI of 4%. When the pyramidal cell ceases to be active, D recovers back to a maximal value of 1 with a time constant τD (16.7 s). The factor ϕD accounts for temperature sensitivity and was used to adjust the effective time constant of DSI without changing the steady-state levels of activity.
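
A corresponding sketch of the D update for a single pyramidal cell, again assuming forward-Euler integration, is shown below; the clipping to [Dmin, 1] is our illustrative numerical safeguard.

```python
# Minimal sketch (assumed) of the DSI variable D for one pyramidal cell;
# the GABA conductance onto that cell is multiplied by D.
dt = 0.02                     # ms
tau_D = 16.7e3                # ms
beta_D = 1.66e-5              # 1/(uM ms)
D_min, phi_D = 0.96, 1.0

def update_dsi(D, ca):
    """ca is the cell's [Ca2+] in uM, with the same kinetics as for ICAN."""
    dD = phi_D * (1.0 - D) / tau_D - beta_D * ca * (D - D_min)
    return min(1.0, max(D_min, D + dt * dD))
```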

Figure 3.

Tradeoff between memory accuracy and flexibility with DSI. (A) Schematic of the network model of spatial working memory endowed with DSI. This mechanism is implemented as a cell-specific reduction in inhibitory input conductance. Adapted from Carter and Wang (2007). (B) Left panel: spatiotemporal activity pattern of excitatory cells endowed with DSI (τD = 5 s). The cue was presented at 180° during the 0.75–1 s interval. A shutdown pulse of 500 ms was applied at 8 s. The yellow lines represent the remembered cue location during the delay and after the shutdown pulse. Right panel: population firing profiles, averaged over the delay period (blue) or over the last second of the simulation (red), showing that the bump state survives the shutdown input and the memory trace is not erased. (C) Spatiotemporal representation of the activation variable D of DSI (inverted scale, 1 means no DSI) for the same trial. Only the D values of 41 cells (sampled equidistantly across the network) are plotted. The lingering DSI trace, visible after the shutdown pulse, is sufficient to induce the re-emergence of the bump state (in B). (D) The accuracy–flexibility tradeoff with DSI. The variance of the remembered cue location (VPV) during the delay period with effective τD of 1 s (black trace) and 5 s (red trace) (N = 500 trials). In the former scenario, the VPV keeps increasing almost linearly; in the latter, it stabilizes after an initial period of 2 s. (E) Tradeoff between the decrease in the VPV (open symbols) and tSHUT,MIN (closed circles), as τD is increased from 50 ms to 5 s (N = 500 trials). The VPV was determined during 2 intervals of the delay period: 5–6 s (open circles, same as Figs 1E and 2E) or 12–13 s (open squares). The data sets are fitted with solid curves to guide the eye.

Figure 4.

Multistability analysis of the working memory model as a dynamical system reveals that ICAN and DSI increase the robustness of memory function. Simulations were run with (black dots) or without (red dots) cue presentation, for a range of recurrent excitatory conductance (GEE) values. The maximum firing rate among all excitatory cells, at the end of the delay period, is either low (2–6 Hz), corresponding to the resting state, or higher than 20 Hz, corresponding to a memory state. The resulting state diagram is shown for the control network without slow mechanisms (A), with only DSI (B) or ICAN (C), or with both (D). The range of GEE values for multistability is delimited by 2 vertical dashed lines. The presence of DSI (B) or ICAN (C) alone increased the multistability range as well as the firing rate separation between memory and resting states. These effects are larger when both mechanisms are combined (D).

Figure 5.

DSI and ICAN stabilize the memory trace in the presence of heterogeneity across neurons in the network. Simulations were carried out in which the cue was applied at 20 evenly spaced locations along the 360° space. The maintenance and retrieval of memory require that the remembered location at any given point in time closely match that of the to-be-remembered cue. (A) The remembered cue locations of the simulations with the control parameter set systematically drift to a few privileged locations. (B) When DSI (4% maximum effect) and ICAN (gCAN = 1.5 nS) are incorporated in the network, the internal representation of the cue location becomes much more accurate (the population vector is nearly stable across time). (C) The mean drift from the original cue location (at the end of a 9-s delay) is greatly reduced with DSI and ICAN compared with the control (N = 500 trials). The time constants for DSI and ICAN were 2 and 0.5 s, respectively.

Short-Term Facilitation

In simulations where we incorporated STF (results in Figs 6–8), only the recurrent excitatory synapses are facilitatory. To implement STF, the parameter αx is multiplied by F, which is the facilitation factor and obeys the following dynamical equation (Matveev and Wang 2000): 

$$\frac{dF}{dt} = \alpha_F \sum_i \delta(t - t_i)\,(1 - F) - \frac{F}{\tau_F}$$

Figure 6.

STF of recurrent excitatory synapses reduces random drifts. (A) tSHUT,MIN (filled circles) increases with τF (fitted with an exponential function). Likewise, the variance of the remembered cue location (VPV) also increases with slower STF (exponential fit), but remains much smaller than in the absence of STF (VPV = 206 deg2 in Fig. 1E, τS = 100 ms; N = 500). (B) Steady-state profiles of F+ (the facilitation variable F just after a spike) for 5 different τF (7 s after delay start, N = 400). For longer time constants, the peak of the profile broadens (dashed gray double arrow), resulting in a region effectively without facilitation. This explains the increased drifts with longer τF. (C) Phase-space plot of F and the population firing rate. Each trajectory corresponds to a trial and starts immediately at the shutdown pulse offset. The network either reverts back to the mnemonic bump state (trials in red) or relaxes to the resting state (trials in blue), depending on the stochastic network dynamics. The F variable fluctuates from trial to trial and is significantly larger in red trajectories than in blue ones (see Results). Note that, at the pulse offset, the population of excitatory cells was silent; however, due to the temporal sliding window (50 ms) used to calculate firing rates, the depicted trajectories start at >0 Hz.

Figure 7.

A simplified model with fixed F profile shows that the network is multistable within a range of STF values. (A) The black curve corresponds to the orange profile (τF = 1 s) in Figure 6B, and the other curves were obtained by assuming an exponential decay in time of the black profile, during different temporal intervals (see Results). (B) Bifurcation diagram for τF = 1 and 2 s (upper and lower panels, respectively). Simulations were run with (black dots) or without (red dots) cue presentation, and plotted is the maximum firing rate among all excitatory cells, at the end of the delay period. In these simulations, F did not change dynamically but was set as a parameter and given spatial profiles as those shown in (A). The peaks of the corresponding F profiles are shown in the abscissa. Below F1, the network was always in the resting state. Above F2, no cue was necessary to initiate a bump. (C) F1 and F2 as a function of τF = 0.5, 1, 2, 3, 4 s (fit with single exponentials). The shaded area represents the presence of multistability.

Figure 8.

STF stabilizes the remembered cue locations in the presence of heterogeneity across neurons in the network. In simulations, the cue was presented at 20 evenly spaced locations along the 360° space. (A) The remembered cue locations with STF (τF = 1 s) show visibly smaller drifts than the control (Fig. 5A). (B) The mean heterogeneity-induced systematic drifts (at the end of a 9-s delay) for the network model without STF (control) or with STF operating at 3 different time constants (N = 400).

The parameter αF controls the facilitation potency and was set at 0.6. The facilitation factor F changes smoothly during a spike, but undergoes a discrete jump in the limit of approximating the spike by a delta function. In the numerical simulation, F is updated at each spike time as $F^+ = 1 - (1 - F^-)\, e^{-\alpha_F}$, where F− and F+ are the values just before and after the spike, respectively.
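
As a sketch (our naming, not the authors' code), the event-driven update of F can be written as:

```python
import math

# Minimal sketch of the facilitation variable F: exponential decay between
# presynaptic spikes and a discrete jump at each spike, matching the update
# rule above. F multiplies alpha_x of the facilitated recurrent synapses.
alpha_F, tau_F = 0.6, 1000.0          # jump parameter, decay time constant (ms)

def evolve_F(F, dt_since_last_update_ms, spike_now):
    F *= math.exp(-dt_since_last_update_ms / tau_F)  # decay between spikes
    if spike_now:
        F = 1.0 - (1.0 - F) * math.exp(-alpha_F)     # F+ = 1 - (1 - F-) e^{-alpha_F}
    return F
```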

Parameter Change

A key manipulation in our study is to gradually change the timescale of a biophysical process. For ICAN, we varied the parameter ϕCAN, which scales the speed of the channel kinetics without affecting the averaged steady-state level of the activity variable mCAN. Similarly, we varied the parameter ϕD to systematically change the time constant of DSI while preserving the average level of the activity variable D. Unlike ICAN or DSI, for STF the activity variable F undergoes discrete jumps in time and what matters is its value immediately after each jump due to a presynaptic spike, rather than the temporal average. For this reason, we varied τF directly (see Results for more details).

When a slow mechanism is added to the network model, the overall level of activity of the excitatory population changes significantly, to a degree that depends on the nature and strength of the mechanism. This changes the shape of a population activity pattern and may even disrupt its stability. For this reason, when ICAN, DSI, or STF were present in the model, GEE was adjusted from 0.381 to 0.378, 0.379, or 0.383 nS, respectively. In this way, the network consistently maintained a fixed steady-state activity across all simulations, allowing a fair comparison between scenarios.

Analysis of Simulation Data

To determine the remembered cue location at any given time, we used the population vector, which is a simple readout of the peak location of a spatially tuned persistent activity pattern (Georgopoulos et al. 1982).
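
A minimal sketch of such a readout, assuming firing rates estimated in a short sliding window and using our own function name, is:

```python
import numpy as np

# Sketch of a population-vector readout: the circular mean of the preferred
# directions, weighted by each excitatory cell's firing rate in a time window.
def population_vector(theta_deg, rates):
    z = np.sum(np.asarray(rates) * np.exp(1j * np.deg2rad(theta_deg)))
    return np.rad2deg(np.angle(z)) % 360.0      # remembered cue location (deg)
```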

The minimum time to shutdown (tSHUT,MIN), in Figures 2E, 3E, and 6A, was determined as follows. For each time constant (τ), a range of shutdown pulse durations (tSHUT) was considered. For each τ and tSHUT, a set of model simulations was run, where an inhibitory input current lasting for tSHUT was applied when the network was in a bump attractor state. At the end of each simulated trial (seconds after pulse offset), whether the bump state was still present or not was judged through the maximum of the firing rate profile. If >95% of simulations of a set yielded successful shutdowns, the corresponding pulse duration was accepted. Finally, for each τ, tSHUT,MIN was chosen as the minimum of those accepted pulse durations.
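
In pseudocode form, this search can be sketched as follows; `run_trial` is a hypothetical wrapper around one network simulation and is not part of the described model code.

```python
# Sketch (assumption) of the search for t_SHUT,MIN. run_trial(tau, t_shut) is a
# hypothetical function that simulates one delay trial with a shutdown pulse of
# duration t_shut (ms) and returns True if the bump state is erased.
def minimum_shutdown_time(tau, candidate_durations_ms, n_trials=500, criterion=0.95):
    for t_shut in sorted(candidate_durations_ms):
        n_success = sum(run_trial(tau, t_shut) for _ in range(n_trials))
        if n_success / n_trials > criterion:
            return t_shut                 # shortest accepted pulse duration
    return None                           # no tested duration was sufficient
```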

Bistability Analysis and Bifurcation Diagrams

To plot the bifurcation diagrams in Figures 4 and 7B, we ran simulations across a range of values for the varied parameter (GEE and F profile, respectively) with and without cue input and measured the firing rate during the delay. The maximum firing rate across the network indicated whether the system had evolved to the memory state (typically >20 Hz) or remained at the baseline state (<5 Hz).

The simulations presented in Figure 7 were obtained with a modified model where the facilitation factor F is not a variable but is treated as a parameter with a particular spatial profile, obtained as follows. For τF = 1 s, we determined the F profile at the onset of a shutdown input, averaged over a number of trials from previous simulations where F was a variable (τF = 1 s, N = 400 trials, Fig. 7A, black curve). During the shutdown period, there is no spiking activity and F simply decays exponentially with τF. To reproduce this process, new F profiles were mathematically determined by decay of the original profile for periods of 4–328 ms, in 4 ms steps (gray profiles in Fig. 7A, only 16 examples are shown). Similar procedures were applied to τF = 0.5, 2, 3, and 4 s. The profiles for longer τF are broader, as seen in Figure 6B. These facilitation profiles were supplied to the population of excitatory cells. Note that in both versions of the model relevant for these simulations (F as a variable or as a parameter), the stimulus was presented at the same location (180°).
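
The decayed profiles can be generated as in the following sketch (our construction, using the fact that F relaxes toward zero with time constant τF in the absence of spikes):

```python
import numpy as np

# Sketch of how the fixed F profiles in Figure 7A can be generated: during the
# silent shutdown period, the trial-averaged profile at shutdown onset is
# scaled down by exp(-t / tau_F) for each decay interval t.
def decayed_profiles(F0_profile, tau_F_ms, decay_times_ms=np.arange(4, 332, 4)):
    F0 = np.asarray(F0_profile, dtype=float)
    return [F0 * np.exp(-t / tau_F_ms) for t in decay_times_ms]
```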

Simulation Method

The model was implemented in Python using the Brian simulator (Goodman and Brette 2009). The equations were integrated using a second-order Runge-Kutta algorithm (time step = 0.02 ms). The simulations were carried out in the cluster facilities of the Yale University Biomedical High Performance Computing Center.

Results

Our working memory model was designed for an ODR task, which proceeds from cue (angle) presentation through a delay period to a memory-guided behavioral response. The cue stimulus activates a group of pyramidal neurons with preferred directions around the sensory cue (first step current, lower panel of Fig. 1B). If the firing rate of this subpopulation of neurons is sufficiently elevated and mutual excitation among them is strong enough, reverberation can give rise to self-sustained persistent activity after the stimulus offset (plateau in Fig. 1B, upper panel; Wang 2001). At the end of the delay, a negative input is applied to all excitatory neurons in the network (Fig. 1B, lower panel, second step current). This shutdown pulse should be sufficiently long to switch the network back to the baseline resting state.

The spatiotemporal activity pattern of the network model is shown in Figure 1C (left panel). The memory trace is encoded as a population activity pattern that persists during the delay period. The spatial profile of the bump state, corresponding to the activity during the delay period, has a typical Gaussian shape (Fig. 1C, right panel). The population vector (shown in yellow) quantifies the peak location of the bump attractor as the internal representation of the sensory cue at any instant. In this example, the remembered cue location fluctuates slightly around the initial cue (180°) and remains reasonably close to it at the end of the delay period. Consequently, in this trial, the PFC circuit model successfully encodes and maintains a spatial memory trace, leading to an accurate readout.

Dominant Time Constant Determines Memory Accuracy

The analysis of simulations across trials reveals that the remembered cue (the population vector) as encoded by the network activity pattern displays random drifts over time (Fig. 1D). This is because the system is endowed with a continuous family of bump attractors, each one corresponding to a directional angle as an analog quantity. During a delay period, irregular neural activity leads to random shifting of the network state among those bump states. At the end of a trial, if the drifts have accumulated substantially over time, the remembered cue location can lie far from the sensory cue angle; this is the case in some trials of Figure 1D, with deviations of >20°. These simulations therefore show a relatively low accuracy of memory representation, which implies poor performance. Note that, across trials, the average of the random drifts is zero (i.e., there is no systematic drift), whereas the variance increases roughly linearly over time (Camperi and Wang 1998; Compte et al. 2000; Renart et al. 2003; Carter and Wang 2007). This variance of the population vector (VPV) quantifies the magnitude of random drifts, and we used it as a measure of the network's function: the smaller the VPV, the more accurate the representation of a memory trace and the better the behavioral performance.
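
As a concrete illustration (our notation, not the authors' analysis code), the VPV for one cue location can be computed as:

```python
import numpy as np

# Sketch of the VPV measure: the variance, across trials, of the
# population-vector readout around the cue location, in deg^2.
def vpv(readouts_deg, cue_deg):
    dev = (np.asarray(readouts_deg) - cue_deg + 180.0) % 360.0 - 180.0
    return float(np.var(dev))     # smaller VPV = more accurate memory readout
```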

A key ingredient in our working memory model is that persistent activity is stabilized by slow reverberation mediated by NMDARs at the recurrent excitatory synapses (Wang 1999). The NMDAR-dependent synaptic current has a time constant τS on the order of 50–100 ms. We hypothesized that the longer τS is, the more robust the memory trace will be. To test this possibility, we gradually varied the value of the NMDAR decay time constant and measured the variance of the remembered cue location during a delay interval across hundreds of trials. The VPV decreases with increasing τS (Fig. 1E). The VPV is 206.2 deg2 with τS equal to 100 ms, and a substantial reduction is observed when τS is increased 3-fold (300 ms, VPV = 61.5 deg2). This result serves as a proof of principle of the idea that extending the dominant time constant decreases random drifts of persistent activity and improves the accuracy of memory representation. In the following, we consider 3 slow, biophysically plausible mechanisms that are present in the PFC and may improve working memory function.

ICAN Increases Memory Stability but Decreases System Flexibility

Figure 2A shows the spiking activity of an integrate-and-fire single neuron model endowed with the slow inward current ICAN. An external current results in action potentials that induce calcium influx, which in turn activates ICAN. After the stimulus offset, the activation of ICAN decays slowly, which allows it to provide positive feedback that is enough to trigger a few additional spikes (afterdischarges). It is worth noting that we assumed that ICAN is not sufficiently strong to produce stable persistent activity in an isolated neuron (Fig. 2A), and we were interested in examining the contribution of the activity-dependent ICAN in single neurons to the maintenance of a persistent firing pattern in a recurrent working memory circuit.

We ran simulations with ICAN present in excitatory cells and measured the VPV of the delay period memory trace across trials. We tested 2 different values of max τCAN that lie within the experimentally measured range (Partridge and Valenzuela 1999; Faber et al. 2006; Gross et al. 2009; Sidiropoulou et al. 2009). With a shorter max τCAN (1 s), the VPV increases quasi-linearly with time (Fig. 2B, black curve). In contrast, with max τCAN = 3 s, the VPV shows a pronounced increase during the first second of the delay period and then plateaus in the range 10–15 deg2 (Fig. 2B, red curve). A possible explanation for the initial rise in drifts (which is not visible for max τCAN = 1 s) is that, with a slower time constant, the ICAN takes longer to be activated and does not provide robustness against drifts as promptly. The crossover between the 2 time courses shows that shorter τCAN is more advantageous for shorter delay periods, whereas slower τCAN increases memory accuracy in longer delays.

The increase in memory robustness provided by ICAN, however, is just one of the effects this current has in the working memory model; the incorporation of a slow mechanism also makes it harder to erase memory. At the end of a delay period, memory erasure was simulated using a negative current input to all excitatory cells, which completely silences the network. If this pulse is not sufficiently long, the network returns to the memory state, with high ICAN activation and elevated neural firing (Fig. 2C, red traces, 100 ms pulse). With a longer shutdown pulse, in contrast, ICAN deactivates to a sufficiently low level that does not allow the return of the high spiking activity and the network is switched off from a bump attractor state (Fig. 2C, blue traces, 200 ms pulse).

To further demonstrate the role of ICAN in the memory erasure process, we studied the dependence between this current's activation and the activity level of the network. We recorded simultaneously the activation variable of ICAN (mCAN) and the firing rate of the network and plotted them in a state space, for several trials (Fig. 2D). We only recorded neurons around the cue location and in simulations that successfully maintained a memory during the delay. All trajectories start immediately after the shutdown pulse offset. There is a clear divergence between 2 kinds of traces: in a given trial, the system's trajectory either reverts back to the memory state (red traces, "bump") or decays to the resting state (blue traces, "shutdown"). A boundary (dashed line) separates the basins of attraction of the 2 states. This result shows that even though a relatively weak ICAN [which by itself does not yield persistent activity in a single neuron (Fig. 2A)] does not determine whether a network generates persistent activity per se, it can have a significant impact on the network's behavior.

Therefore, ICAN stabilizes the memory trace by reducing memory drifts over time; at the same time, it renders the network less flexible, that is, it may be harder to load new inputs and discard old memories. This accuracy–flexibility tradeoff was demonstrated more explicitly when we varied max τCAN parametrically (Fig. 2E). Increasing max τCAN decreases the variance of the remembered cue location (the VPV, open circles), but increases the minimum time required to shut down the network (filled circles). A "sweet spot" corresponds to the crossover point of the 2 curves (max τCAN = 1–2 s), where the VPV is close to its minimum while tSHUT,MIN is reasonably short (a few hundred milliseconds). However, the optimal compromise for a working memory circuit could differ depending on the functional demand, which may emphasize either accuracy or flexibility.

DSI Also Shows Tradeoff Between Accuracy and Flexibility

DSI is a cannabinoid-dependent process through which synaptic inhibition onto excitatory neurons is reduced by an amount set by the magnitude of DSI, which in turn is controlled by the activity of the same E cells (Fig. 3A). Thus, for each neuron, a higher level of excitation leads to weaker inhibition, resulting in an effective positive feedback. In the control network employed in this study, the recurrent excitation, mostly mediated by NMDARs, is balanced by lateral inhibition, which prevents runaway activity. The incorporation of DSI reduces inhibition only by a modest degree (controlled by the parameter Dmin) and does not significantly alter the E/I balance. As a result, the firing activity remains at reasonable levels without diverging.

The cells that are most active during an ODR task are those around the peak of the bump activity pattern (Fig. 3B, cue location at 180°). Therefore, due to its activity-dependence, DSI is the strongest in this group as well. This is depicted in the blue region of the spatiotemporal activity pattern in Figure 3C (note the inverted scale, with hotter colors representing less DSI activation). This creates a favorable bias for the network at the location of the sensory cue, thereby reducing spontaneous drift and stabilizing the neuronal representation of the remembered cue (Carter and Wang 2007).

To quantify this DSI-induced effect, we determined the variance of the remembered cue location (the VPV). We proceeded as described above for ICAN, and the results are remarkably similar. When DSI is controlled by a long time constant (5 s), there is an initial period of rising drifts (Fig. 3D, red trace, first 2 s of delay), similar to a network without DSI. However, once the mechanism is fully activated (with a longer delay), the VPV stops growing and reaches a plateau instead. For the shorter time constant (1 s), the variance increases almost monotonically (Fig. 3D, black trace).

Another notable feature in the particular sample trial of Figure 3B,C is the persistence of the suppression of inhibition. Given the slow nature of its decay (τD = 5 s), DSI does not have sufficient time to fade away during a negative pulse lasting 0.5 s (compare with Fig. 1C). The remaining trace of disinhibition is strong enough to restart the memory bump at approximately the same angle, without a new cue presentation (Fig. 3B, right panel, red profile).

As shown in Figure 3E (filled circles), the duration of the step current required to reset the network increases dramatically with τD (0.5 s: tSHUT,MIN = 130 ms; 5 s: tSHUT,MIN = 3.75 s). On the other hand, the variance of the remembered cue location, the VPV, is larger in simulations with short τD and decreases for progressively longer τD, reaching a low plateau for τD larger than 1 s. Compared with the control (Fig. 1E with τS = 100 ms, VPV = 206 deg2) for the same delay interval of 5–6 s, a circuit endowed with DSI displays a smaller variance of drifts overall (VPV = 70.5 deg2 with τD = 50 ms and 42.1 deg2 with τD = 5 s; Fig. 3E, open circles). With longer delays (12–13 s), the simulations show a higher variance due to the accumulation of drifts over a longer time (Fig. 3E, open squares). However, in agreement with the traces in Figure 3D, this relative increase of the VPV with longer delays is mostly observed for shorter τD and is minimal for longer ones. Therefore, our analysis shows the same tradeoff between ease of shutdown and memory accuracy with DSI as that observed with ICAN.

ICAN and DSI Enhance the Robustness of Working Memory

We next examined the network behavior when the model system is endowed with a combination of both DSI and ICAN. To quantify the robustness of the working memory system, we determined the range of the parameter space where there is coexistence of a resting state (low-firing rates) with memory states (high rates). This multistability range corresponds to the regime where the system remains in the resting state in the absence of stimulation, but encodes a memory after a cue presentation, which is the desirable behavior of a working memory system. We determined the regime boundaries for a range of the recurrent connectivity strength between excitatory neurons (GEE) in the form of bifurcation diagrams (Fig. 4). When DSI and ICAN are not present, the multistability range (bounded by 2 dashed lines) is restricted to a narrow range around GEE = 0.38 nS (Fig. 4A). If either DSI or ICAN is incorporated (same as in previous simulations: 4% DSI or gCAN = 1.5 nS), the lower boundary of the range is extended to smaller GEE values (Fig. 4B,C). The maximum broadening effect occurs when both slow mechanisms are present (Fig. 4D). This is readily understood: with the help of DSI and ICAN, less recurrent excitation is required to generate persistent activity.

A second noteworthy feature of Figure 4 is that the slow biophysical mechanisms increase the firing rate of memory states while that of the resting state remains roughly the same. This is because DSI and ICAN are activity-dependent, therefore minimal in the low-firing spontaneous activity, but significant in the high-rate memory states. This leads to a larger separation between the resting and memory states. Consequently, a random fluctuation in spontaneous spiking activity will be less prone to give rise to a “false” memory, and the network function is more reliable.

To conclude, ICAN and DSI are beneficial to the system by making it less sensitive to variations of network properties (such as GEE) and less prone to noise-induced spontaneous transitions between the resting state and memory states. Both effects enhance the robustness of working memory behavior.

ICAN and DSI Counteract Heterogeneity

A continuum of attractor states requires that the network and its neurons be homogeneous, so that the system is translationally invariant (Ben-Yishai et al. 1995). Under this condition, if a localized pattern of activity is spatially displaced, it settles into another, identical pattern centered at the new location. However, any neural network shows a certain degree of variability across cells (Marder and Goaillard 2006). A homeostatic mechanism that equalizes the long-term firing rates of all cells to a predetermined level was shown to recover the accuracy of the memory trace (Renart et al. 2003). Alternatively, can DSI and ICAN remedy the system's vulnerability to heterogeneity, by virtue of reinforcing a privileged location in the network in an activity-dependent manner? To investigate this question, we implemented a modest amount of heterogeneity by assuming that the leak potential VL varies from cell to cell according to a Gaussian distribution [mean VL = −70 mV and standard deviation SD(VL) = 1 mV].
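
A minimal sketch of this heterogeneity manipulation (the seed and array size are illustrative choices) is:

```python
import numpy as np

# Sketch: each excitatory cell's leak reversal potential is drawn from a
# Gaussian distribution with mean -70 mV and standard deviation 1 mV.
rng = np.random.default_rng(0)
V_L_heterogeneous = -70.0 + 1.0 * rng.standard_normal(2048)   # mV, one value per cell
```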

Across a large number of trials, the input cues are presented at 20 angle locations equally distributed along the 360° of a circle. When both mechanisms are absent, the remembered cue locations display systematic drifts and, as previously reported (Renart et al. 2003), tend to converge to a few privileged locations (Fig. 5A, θ = 180 and 320°). These locations are determined by the heterogeneous distribution of the cellular excitability across the network, which disrupts the continuous family of bump attractors. The mean drift from the cue location is minimal in networks with DSI and ICAN (8.9 ± 6.9°) and significantly different (2-sample t-test, P = 5 × 10−110) from that of the control network (46.7 ± 32.5°; Fig. 5C).

Intuitively, when DSI and ICAN are included, the remembered cue locations show much smaller drifts (Fig. 5B). Both mechanisms add a second layer of activity dependence, beyond the NMDAR-mediated recurrent excitation, and operate on a slow timescale. Therefore, their presence "anchors" the original location of the bump attractor that encodes the sensory cue. These slow mechanisms are powerful enough to overcome the disrupting effect of heterogeneity.

Short-Term Facilitation Increases Memory Accuracy

Finally, we considered the effect of STF in our working memory model. STF shares similar features with ICAN and DSI, namely activity-dependence, positive feedback, and slow time course of activation (Zucker 1989; Fisher et al. 1997; Tsodyks and Markram 1997; Abbott and Regehr 2004). It is especially prevalent in excitatory synapses between pyramidal cells in the frontal cortex (Hempel et al. 2000; Wang et al. 2006).

The implementation of STF in the model reduced random drifts of the memory trace during the delay. Compared with the control network (Fig. 1E, τS = 100 ms, VPV = 206 deg2), the variance of the remembered cue location was lower for every τF tested (Fig. 6A, open circles, VPV ranging 71–126 deg2). However, contrary to DSI and ICAN, the VPV increased rather than decreased with longer τF. This unexpected result is elucidated by analyzing the profile of the peak value of the facilitation variable F for a bump attractor. For each cell in the network, every time there is a spike, F is increased by a discrete jump and the resulting value F+ is used to update the synaptic conductance; between spikes, F decays until the next spike takes place. Thus, neurons in the bump that have elevated firing rates also show higher F+ (profile in Fig. 6B). For longer τF, the decay is very slow, resulting in more temporal summation and, eventually, in saturation of F+ (Fig. 6B, gray dashed double arrow). A broad, saturated steady-state F+ profile effectively removes the facilitation in that spatial region and, with it, the selective enhancement created by the activity-dependent positive feedback. For this reason, lengthening the STF time constant increased memory drifts and, consequently, the variance of the remembered cue location (Fig. 6A, open circles).
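
The saturation of F+ can be illustrated with a standard Tsodyks–Markram-style facilitation update (one plausible form of the jump rule; the model's exact equations are given in Materials and Methods):

    import numpy as np

    def facilitation_at_spikes(spike_times, tau_F=1.0, U=0.2, F0=0.2):
        """Return the post-spike values F+ for a given spike train.
        Between spikes, F relaxes toward its baseline F0 with time constant tau_F;
        at each spike it jumps as F -> F + U * (1 - F), so sustained high-rate
        firing with a long tau_F drives F+ toward saturation near 1."""
        F = F0
        F_plus = []
        last_t = spike_times[0]
        for t in spike_times:
            F = F0 + (F - F0) * np.exp(-(t - last_t) / tau_F)  # decay since last spike
            F = F + U * (1.0 - F)                               # facilitation jump
            F_plus.append(F)
            last_t = t
        return np.array(F_plus)

    # At 40 Hz, a long tau_F lets successive jumps summate and F+ saturates:
    spikes = np.arange(0.0, 5.0, 1.0 / 40.0)
    print(facilitation_at_spikes(spikes, tau_F=4.0)[-1])   # close to 1
    print(facilitation_at_spikes(spikes, tau_F=0.5)[-1])   # lower steady state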

This saturating feature was not observed with the other 2 slow mechanisms because of the following differences between the biological processes. The magnitudes of ICAN and DSI vary quasi-continuously over time through their dependence on intracellular calcium, which accumulates and declines slowly, and they influence the excitability of the cell at essentially every point in time. Therefore, the spatial profile of the activation variable (mCAN or D, respectively) can remain fixed and non-saturating when the time constant is varied through a scaling factor (ϕCAN or ϕD). STF, by contrast, is not a continuous process but acts only at synaptic events: the value of F is used only at spike times (F+) and is irrelevant while it decays between spikes. For this reason, the same scaling method is not appropriate, because it would preserve only the time-averaged steady state of F and not the steady state of F+.
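
Schematically, the time-constant scaling used for ICAN and DSI amounts to multiplying the right-hand side of a first-order gating equation by a factor ϕ, which changes the speed of the process but not its steady state (a generic sketch, not the model's exact kinetics):

    def dXdt(X, X_inf, tau, phi):
        """Generic first-order kinetics of a slow gating variable X.
        The factor phi rescales the speed of both activation and decay,
        while the steady state X_inf, and hence the spatial profile of X
        across the network, is unchanged."""
        return phi * (X_inf - X) / tau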

Similar to what is observed for ICAN and DSI, a prolonged STF time constant makes it more difficult to reset the network (Fig. 6A, filled circles). When τF is 0.5 s, the required shutdown time is just 50 ms. At the other end of the tested range, a τF of 4 s requires a negative input pulse lasting at least 1.4 s to erase a memory trace.

The minimum shutdown time is determined by the decay of F during the inhibitory input. At the end of the shutdown phase, the magnitude of F for neurons in the bump attractor reaches a level that depends on the pulse duration and on τF. This level fluctuates from trial to trial and largely determines whether the bump reappears in a given trial. Figure 6C shows the system trajectories in the state space of F versus firing rate, for 40 trials with τF = 1 s. This facilitation time constant corresponds to a minimum shutdown time of 90 ms (Fig. 6A), which means that shorter pulses should not be able to shut down the network. The red traces (tSHUT,MIN = 50 ms, leading to return of the bump state) start at an average of F = 0.89 ± 0.02, whereas the blue ones (tSHUT,MIN = 100 ms, leading to shutdown) begin at F = 0.84 ± 0.04, a significant difference (2-sample t-test, P = 8.58 × 10−6). Longer τF requires longer shutdown pulses in order for F to decay to a level low enough that the recurrent excitation is too weak for the bump to reemerge.
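
This dependence can be made explicit in a back-of-the-envelope calculation: with firing silenced, F relaxes exponentially, so the minimum pulse duration grows roughly in proportion to τF for a fixed ratio of the delay-end facilitation to the threshold level. The sketch below uses the F values quoted above together with an assumed baseline, purely for illustration:

    import numpy as np

    def min_shutdown_time(F_end, F_thr, F0, tau_F):
        """Time for F to decay from its end-of-delay value F_end to a threshold
        F_thr, assuming exponential relaxation toward a baseline F0:
        F(t) = F0 + (F_end - F0) * exp(-t / tau_F)."""
        return tau_F * np.log((F_end - F0) / (F_thr - F0))

    # Illustrative numbers only (the baseline F0 = 0.2 is an assumption):
    for tau in (0.5, 1.0, 2.0, 4.0):
        print(tau, round(min_shutdown_time(0.89, 0.84, 0.2, tau), 3))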

To gain further insight into how the degree of facilitation decay across the network determines the success of memory shutdown, we ran new simulations in a modified model in which the facilitation factor F is no longer a variable but is treated as a parameter. To each simulation, we assigned a fixed spatial profile of F. For realism, these profiles were derived mathematically (by time decay) from the average facilitation profile at the end of the delay in previous simulations where F was a variable (Fig. 7A, see Materials and Methods). This replicates the decay of F during the shutdown phase for different pulse durations. In these simulations, Fpeak is a parameter that characterizes the profile of facilitation expected after a shutdown pulse of a given duration; lower Fpeak corresponds to longer pulses. Roughly, profiles resulting from decays longer than tSHUT,MIN should not be able to sustain a bump without a cue. The emergence of either a resting state (low rates) or a memory state (high rates) as a function of Fpeak is presented in bifurcation diagrams (Fig. 7B, τF = 1, 2 s). The thresholds F1 and F2 delimit the bistability regime (Fig. 7B,C, shaded area). With longer STF time constants, it is necessary to reach a profile with lower Fpeak for simulations without cue input to remain in the resting state (below F2). This behavior roughly corresponds to the increase in tSHUT,MIN with longer τF in simulations where F changes dynamically.
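
A sketch of how such fixed profiles can be generated from an average end-of-delay profile (our own illustration of the procedure described above, with hypothetical variable names):

    import numpy as np

    def decayed_profiles(F_avg, F0, tau_F, pulse_durations):
        """Fixed facilitation profiles obtained by letting the average end-of-delay
        profile F_avg (one value per cell) decay exponentially toward a baseline F0
        for each candidate pulse duration; returns one profile per duration."""
        d = np.asarray(pulse_durations, dtype=float)[:, None]
        return F0 + (np.asarray(F_avg)[None, :] - F0) * np.exp(-d / tau_F)

    # Example: profiles expected after 0.1, 0.5, and 1.0 s shutdown pulses.
    F_avg = np.full(2048, 0.3)   # placeholder profile; the model uses the simulated one
    profiles = decayed_profiles(F_avg, F0=0.2, tau_F=1.0, pulse_durations=[0.1, 0.5, 1.0])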

As shown in a recent study using a firing-rate model, systematic drifts of the memory trace due to heterogeneity can be dramatically reduced by STF (Itskov et al. 2011). We checked the effect of STF in our spiking network model in the presence of cellular heterogeneity [〈VL〉 = −70 mV and SD(VL) = 1 mV]. By visual inspection, with STF the memory of the sensory cue is more stable over time (Fig. 8A) than under the control condition without STF (Fig. 5A). This impression is confirmed statistically (Fig. 8B): there is a significant (2-sample t-test) decrease in the mean drift from the control network (without STF, 46.9 ± 33.4°) to each of the 3 scenarios with a different τF (1 s, 32.7 ± 24.7°, P = 2 × 10−11; 2 s, 34.8 ± 24.1°, P = 8 × 10−9; 3 s, 38.6 ± 27.5°, P = 1 × 10−4).

In summary, like ICAN and DSI, STF reduces noise-induced random drifts and heterogeneity-induced systematic drifts of memory traces, thereby rendering working memory function more robust. In contrast to the other 2 slow mechanisms, a longer STF time constant leads to larger drifts of the memory trace, although these drifts remain smaller than in the control network without STF.

Slow Mechanisms Protect Memory Against Distractors

A cortical circuit assigned to store a particular stimulus in memory may receive, at any point in time, additional external signals with the potential to alter its network state and output. Depending on the nature of the new, distractor signal, the circuit may respond in several ways: it can erase the previous memory and encode the new one; the second stimulus can quantitatively modify the established memory; or the circuit may filter out the distractor completely. Considering the influence that slow mechanisms have on memory robustness and flexibility, they may also play a crucial role in this process.

In our network model, if a distractor stimulus is applied during the delay period, the remembered cue location shifts toward the distractor location and away from the cue stimulus angle. The deviation of the bump peak location is clearly visible for a network with the control parameter set (Fig. 9A, upper panel, θ1 = 178°, θ2 = 244°). The magnitude of this deviation (θ2 − θ1) depends on the angular difference between the cue stimulus (θS) and the distractor (θD). The distraction increases with θD − θS before reaching a maximum; beyond this point, the influence of the distractor decreases abruptly and the final location of the bump is much closer to the cue angle. Longer distractor durations result in significantly larger deviations of the final memory trace (Fig. 9B, 500 ms).

Figure 9.

Slow mechanisms preserve cue representation and decrease the influence of long distractor stimuli. (A) Smoothed spatiotemporal activity pattern of the network's excitatory cells under control conditions (upper panel) or with DSI (lower panel), in the presence of a distractor. An initial cue stimulus (peak angle θS = 180°, 750 ms–1 s, first pair of vertical dashed lines) drives the network to the memory state. The application of a distractor during the delay period (peak angle θD = 300°, 100 pA, 6–6.25 s, second pair of dashed lines) pulls the location of the bump closer to it. In these 2 example trials, the deviation of the bump, measured as the difference between the remembered cue location after the distractor (θ2, 8–9 s) and before (θ1, 4.5–5.5 s), is larger in the control network than with DSI. (B) The average difference between θ2 and θ1 as a function of the difference in peak angles of distractor (θD) and cue stimulus (θS), for 3 distractor durations (N = 150). The deviation increases and approaches the perfect distraction (diagonal dashed line) before declining for more distant distractors. Longer durations produce generally larger deviations that have a maximum at larger distractor angles. (C) Same as in (B) but for the network with DSI. The differences in remembered cue locations are visibly smaller than in the control network for all 3 distractor durations (N = 150). (D) Distraction indicators for sets of trials with different distractor durations, in the control network (grey symbols) or with DSI (black symbols). Upper panel: the maximum distraction is small and increases almost linearly in a network with DSI. Under control conditions, this measure is larger throughout the whole range and has a more prominent increase. The edge-colored data points were taken from (B) and (C) with the same color scheme. Lower panel: similarly, the distraction angle (θD − θS) at which the maximum deviation of the bump is observed is wide and increases with duration in the control network, but is narrower and almost stable when DSI is present. This slow mechanism limits the effects of closer distractors and protects the memory against farther ones almost independently of their duration.

When DSI is incorporated in the network, the deviation of the bump induced by a distractor is visibly smaller (Fig. 9A, lower panel, θ1 = 185°, θ2 = 196°) than in the control network. This outcome is observed across the range of θD − θS and for all distractor durations (Fig. 9C). Consequently, the maximum distraction (Fig. 9D, upper panel) with DSI is smaller and grows more slowly with distractor duration (150 ms, 11.5 ± 3.7°; 500 ms, 29.0 ± 4.8°) than in control conditions (150 ms, 41.6 ± 12.3°; 500 ms, 135.0 ± 25.6°). The angular difference between distractor and cue that produced those maximum distractions [(θD − θS)max] corresponds to the distractor location with maximum influence on the memory bump. This indicator was higher with control parameters than with DSI for all distractor durations. Remarkably, the presence of the slow mechanism resulted in a more stable (θD − θS)max (150 ms, 90°; 500 ms, 110°) than in the control network (150 ms, 110°; 500 ms, 155°). Similar results were obtained with ICAN and STF (Supplementary Fig. 1).

Taken together, these results suggest that DSI decreases the influence of distractors regardless of their location. Moreover, it reduces the range of distractor locations that significantly deflect the memory bump. Finally, the protection against farther distractors is almost independent of their duration.

Discussion

It is now commonly recognized that a working memory circuit should not be conceptualized in terms of rapid switches between attractor states. Instead, reverberation underlying persistent activity must be slow, likely involving NMDARs at recurrent excitatory synapses (Wang 1999; Wang et al. 2013). Slow network dynamics enables a single microcircuit mechanism to subserve both working memory and decision-making; the latter requires accumulation of information over time by virtue of slow neural transients such as quasi-linear ramping activity (Wang 2002, 2008). It is noteworthy that persistent activity during a mnemonic delay period often displays slow temporal variations, as well as rich heterogeneity across neurons (Batuev et al. 1979; Baeg et al. 2003; Miller et al. 2003; Goldman 2009; Machens et al. 2010; Barak et al. 2013; Stokes et al. 2013).

Is the slower the underlying mechanism, the better? In the present work, we investigated 3 biophysical mechanisms in a network model of spatial working memory. ICAN, DSI, and STF are present in frontal neurons and are activity-dependent; they provide positive feedback to active excitatory cells and operate on a slow timescale. Our main finding was that a slow timescale entails a tradeoff. ICAN, DSI, and STF render the working memory representation more robust. However, their slow decay leaves a lingering memory trace even after the termination of persistent firing, which makes it difficult to reset the circuit with brief inputs, a fundamental requirement for normal function of a working memory system. These findings suggest that recurrent attractor dynamics are the workhorse of the mechanisms that sustain delay activity, whereas very slow processes contribute to the accuracy of memory maintenance.

Random Drifts

Our study started with the premise that very slow processes are not necessary for the generation of persistent activity per se, but may play a role in determining the accuracy and robustness of a working memory circuit's behavior. It has been previously reported that, in continuous attractor networks, a mnemonic activity pattern (bump attractor) exhibits random drifts that accumulate over time and move the stored spatial information away from the cue location (Camperi and Wang 1998; Compte et al. 2000; Carter and Wang 2007), decreasing the accuracy of the memory readout. We found that the incorporation of DSI, ICAN, or STF helps to reduce random drifts of the memory trace. Moreover, we demonstrated that this stabilizing effect depends on the effective time constant of the mechanism considered. For ICAN and DSI, increasing τ is associated with a decrease in random drifts before reaching a plateau (∼1–2 s). In contrast, STF also increases accuracy compared with the control network, but random drifts actually increase with τF. We showed that this happens because, for longer time constants, the facilitation variable F saturates around the bump, effectively removing facilitation in that part of the network.

Heterogeneity-Induced Drifts

A different type of drift of the memory trace arises from heterogeneity across neurons, which is detrimental to the realization of a continuous family of attractors. A homeostatic mechanism that scales the excitatory synapses was shown to recover the accuracy of the remembered cue location under those conditions (Renart et al. 2003): this activity-dependent mechanism scales the excitatory synaptic weights of each cell so that the long-term average firing rate is similar for all cells and equal to a predetermined level. Recently, it was demonstrated using a firing-rate model that STF slows down the velocity of drifts in the presence of synaptic heterogeneity (Itskov et al. 2011). Building upon these insights, and following our successful stabilization of random drifts, we tested the effect of the 3 slow mechanisms in our spiking model in the presence of cellular heterogeneity. A combination of ICAN and DSI effectively counteracted the tendency of a bump to drift to privileged locations during a relatively long delay period. Likewise, STF significantly reduced systematic drifts due to heterogeneity, extending the previous results to a biophysically realistic spiking network model.

Memory Flexibility

Whereas slow biophysical mechanisms increased the accuracy of memory representation, they have the opposite effect on the flexibility to switch between dynamical states. The precise mechanism used by the brain to erase a working memory trace remains poorly understood. We used a negative current of sufficiently strong amplitude to bring neurons below the firing threshold, which is the most efficient way to switch off a memory state. If we used a different input that only reduced the firing rates rather than silencing neurons, the duration of that input would need to be longer than tSHUT,MIN. This analysis leads to very similar conclusions regarding ICAN, DSI, and STF: the minimum duration of the pulse required to shut down the network dramatically increases with the effective time constant of the mechanism.

Our study bears on a relevant debate regarding the neural circuit mechanism of working memory. At one extreme, a working memory system is viewed in terms of fast switches between multiple steady-state attractors, with virtually no transients. At the other extreme, there are no multiple attractors but only a single resting state; in this scenario, a transient cue simply perturbs the system to another location in state space, and the network holds a short-term memory merely through a very slow decay back to the resting state after stimulus offset. This typically requires very slow biophysical time constants, such as those provided by ICAN, DSI, and STF. Our model is not at this extreme, but our results suggest that the scenario without multiple attractors would face the same difficulty of shutting down.

We also studied how a slow mechanism may help preserve the location of the memory bump against distractor stimuli. DSI limits the effects of nearby distractors and protects the memory against farther ones almost independently of their duration. Whereas this effect is desirable for discarding unwanted stimuli, it also uncovers the potential for intertrial persistence: when a new cue is presented at a different location, the trace of disinhibition from the previous trial will act as a distractor and pull the location of the new bump toward the old one. Taken together, these results establish a tradeoff between memory accuracy and flexibility.

An accuracy–flexibility tradeoff can also be obtained by modulating the strength of the slow mechanisms instead of their time constant. Strengthening DSI or ICAN increases the stability of the memory trace but, on the other hand, hinders the shutdown of the network (Supplementary Fig. 2). However, in these simulations, the overall firing rates increase when the slow mechanism becomes stronger, a confounding factor that hinders comparison between different strengths.

Slow Mechanisms Modulate Dynamics of a Working Memory System

The bifurcation analysis of the network model with GEE as a control parameter revealed that, in the presence of DSI and ICAN, the model system shows a wider multistability range and a larger separation between the firing rates of persistent activity and resting states. A wider multistability range implies a higher degree of robustness, because normal function is less sensitive to variations of network properties. A larger separation of firing rates implies that spontaneous, noise-driven transitions between states are less likely to occur. We also showed that the realistic range of the facilitation factor F contains a multistability range that shifts with the time constant τF. These findings raise the possibility that, in a working memory circuit such as PFC, modulatory mechanisms could flexibly tune slow biophysical processes for optimal behavior.

Figure 10 offers a conceptual picture common to all 3 slow mechanisms. This schematic depiction is partly deduced from the phase-space plots in Figures 2D and 6C. It shows, in a state space, how the activity of neurons engaged in working memory storage and the activation variable (X) of any of the 3 slow mechanisms interact dynamically. Just before the shutdown pulse (Fig. 10A), the phase plane contains a stable manifold, which separates the resting and memory attractors, and an unstable manifold; their intersection is a saddle point. During the shutdown pulse (Fig. 10B), only the resting state exists: the negative input pulse immediately suppresses all firing activity and X decays exponentially. At the pulse offset (Fig. 10C), the system regains its previous landscape with both attractors. Afterwards, the network trajectory depends on the extent of the decay of X, which is an exponential function of the pulse duration. If X has fallen below the stable manifold, the system progresses to the resting state, resulting in a successful shutdown (blue trace). Otherwise, if X has decayed less and remains above the stable manifold at the pulse offset, the system reverts back to the mnemonic attractor state (red trace).
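
A toy version of this picture (not the network model itself) reduces the stable manifold to a single threshold on the slow variable at the pulse offset; the shutdown then succeeds only if X has decayed below that threshold during the pulse:

    import numpy as np

    def shutdown_succeeds(X_end, X_sep, tau_X, pulse_duration):
        """Caricature of Fig. 10: during the pulse, firing is suppressed and X decays
        exponentially (here toward 0); the network resets only if, at the pulse offset,
        X lies on the resting-state side of the stable manifold, approximated as a
        single threshold X_sep."""
        X_offset = X_end * np.exp(-pulse_duration / tau_X)
        return X_offset < X_sep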

Figure 10.

Schematic phase-plane diagram of our working memory model during 3 stages of a shutdown process. This scheme applies to all 3 slow biophysical mechanisms considered in this paper, with X representing the activation variable of ICAN, DSI, or STF. The inset in (B) displays the timing of the 3 stages relative to the presentation of the negative shutdown input. (A) The state space displays a stable manifold (line with converging arrows) and an unstable manifold (line with diverging arrows), and their intersection creates a saddle point. There are 2 stable steady states (filled circles) representing a memory state and a rest state. At the end of the delay, the system is in the memory state. (B) During the application of the negative pulse, there is only one steady state (filled circle), with a low firing rate and low X magnitude. After the quick suppression of all firing activity (“FAST”), the system moves along the direction of the exponential decay of X (“SLOW”) over the duration of the pulse. (C) The attractor landscape of (A) is restored after the pulse offset. Depending on whether the state of the system at the offset of the shutdown input is on the left or the right side of the stable manifold, the system will revert back to the memory state (red trajectory) or reset to the resting state (blue trajectory, successful shutdown).

Accuracy–Flexibility Tradeoff

The PFC circuits that encode working memory, like all systems in the nervous system, are subject to a rich variety of processes that modulate their performance. In this study, we considered a group of mechanisms that may be involved in the dynamical stabilization of the memory trace. The apparent conflict resulting from a tradeoff between accuracy and flexibility of the memory trace may turn out to be significant for neuromodulation. According to environmental conditions and behavioral task demands, the network may be instructed to tilt the balance in favor of increased accuracy at the expense of flexibility; under these circumstances, ICAN, DSI, or STF may be strongly activated so that the memory is encoded as precisely as possible. On the other hand, when the task requires faster responses to cue stimulation, the network may be tuned to decrease the activation of the slow mechanisms or shorten their time constants, preventing a previous memory from interfering with the encoding of a new stimulus. Interestingly, we revealed the location of “sweet spots” for the models with ICAN and DSI: in these time-constant ranges (1–2 s), the slow mechanisms stabilize the memory to a great degree without significantly hampering the shutdown process. This observation raises the question of whether a working memory system in the brain can be tuned to such an optimal configuration and what neurobiological mechanism might achieve such optimality. Future experiments and theory in this direction are worth pursuing.

It has been proposed that the emphasis on robust online representation of information versus rapid switching could be adjusted by dopamine signaling, with D1 (respectively D2) receptors acting in favor of robustness (respectively flexibility; Durstewitz and Seamans 2008; Rolls et al. 2008). Additionally, several other pathways modulate the 3 slow mechanisms: the channels that mediate ICAN are highly sensitive to muscarinic and metabotropic receptor activation (Haj-Dahmane and Andrade 1998; Sidiropoulou et al. 2009), DSI is by definition dependent on endocannabinoids (Ohno-Shosaku et al. 2001; Wilson and Nicoll 2001; Wilson et al. 2001), and STF is controlled by synaptic vesicle release (Hempel et al. 2000). Our results suggest that slow processes, including those studied here, are potentially effective targets of action for dopamine and other neuromodulators, which could adjust the tradeoff between robustness of memory storage and cognitive flexibility. This prediction may be tested experimentally, to understand precisely how such modulation occurs and to determine under which circumstances each slow mechanism is predominant in the encoding of working memory in the PFC.

At present, there still exists a large gap between neural circuits and behavior (Carandini 2012); this gap must be bridged in order to understand the brain mechanisms of cognitive functions and their impairments in mental disorders. The present work illustrates how biophysically based computational modeling, in interplay with experimentation, can help make progress in this direction, by elucidating how specific cellular and synaptic processes shape network activity patterns (persistent activity) and contribute to a key functional requirement (the accuracy–flexibility tradeoff) in a cognitive process.

Supplementary Material

Supplementary material can be found at: http://www.cercor.oxfordjournals.org/.

Funding

This work was supported by the PhD Program in Computational Biology of the Instituto Gulbenkian de Ciência, the US National Institutes of Health (grant R01 MH062349), and the John Simon Guggenheim Memorial Foundation Fellowship (to X.-J.W.).

Notes

Conflict of Interest: None declared.

References

Abbott LF, Regehr WG. 2004. Synaptic computation. Nature. 431:796–803.
Amit DJ. 1995. The Hebbian paradigm reintegrated: Local reverberations as internal representations. Behav Brain Sci. 18:617–626.
Amit DJ, Brunel N. 1997. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex. 7:237–252.
Baeg E, Kim Y, Huh K, Mook-Jung I, Kim H, Jung M. 2003. Dynamics of population code for working memory in the prefrontal cortex. Neuron. 40:177–188.
Barak O, Sussillo D, Romo R, Tsodyks M, Abbott L. 2013. From fixed points to chaos: three models of delayed discrimination. Prog Neurobiol. 103:214–222.
Batuev AS, Pirogov AA, Orlov AA. 1979. Unit activity of the prefrontal cortex during delayed alternation performance in monkey. Acta Physiol Acad Sci Hung. 53:345–353.
Ben-Yishai R, Bar-Or RL, Sompolinsky H. 1995. Theory of orientation tuning in visual cortex. Proc Natl Acad Sci USA. 92:3844–3848.
Brunel N, Wang XJ. 2001. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci. 11:63–85.
Camperi M, Wang XJ. 1998. A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. J Comput Neurosci. 5:383–405.
Carandini M. 2012. From circuits to behavior: a bridge too far? Nat Neurosci. 15:507–509.
Carter E, Wang XJ. 2007. Cannabinoid-mediated disinhibition and working memory: dynamical interplay of multiple feedback mechanisms in a continuous attractor model of prefrontal cortex. Cereb Cortex. 17:i16–i26.
Compte A, Brunel N, Goldman-Rakic PS, Wang XJ. 2000. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex. 10:910–923.
Durstewitz D, Seamans JK. 2008. The dual-state theory of prefrontal cortex dopamine function with relevance to catechol-O-methyltransferase genotypes and schizophrenia. Biol Psychiatry. 64:739–749.
Durstewitz D, Seamans JK, Sejnowski TJ. 2000. Neurocomputational models of working memory. Nat Neurosci. 3:1184–1191.
Egorov AV, Hamam BN, Fransen E, Hasselmo ME, Alonso AA. 2002. Graded persistent activity in entorhinal cortex neurons. Nature. 420:173–178.
Faber E, Sedlak P, Vidovic M, Sah P. 2006. Synaptic activation of transient receptor potential channels by metabotropic glutamate receptors in the lateral amygdala. Neuroscience. 137:781–794.
Fisher SA, Fischer TM, Carew TJ. 1997. Multiple overlapping processes underlying short-term synaptic enhancement. Trends Neurosci. 20:170–177.
Fransén E, Tahvildari B, Egorov AV, Hasselmo ME, Alonso AA. 2006. Mechanism of graded persistent cellular activity of entorhinal cortex layer V neurons. Neuron. 49:735–746.
Funahashi S, Bruce CJ, Goldman-Rakic PS. 1989. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J Neurophysiol. 61:331–349.
Fuster JM, Alexander GE. 1971. Neuron activity related to short-term memory. Science. 173:652–654.
Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. 1982. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci. 2:1527–1537.
Gnadt J, Andersen R. 1988. Memory related motor planning activity in posterior parietal cortex of macaque. Exp Brain Res. 70:216–220.
Goldman MS. 2009. Memory without feedback in a neural network. Neuron. 61:621–634.
Goldman-Rakic P. 1995. Cellular basis of working memory. Neuron. 14:477–485.
Goodman DFM, Brette R. 2009. The Brian simulator. Front Neurosci. 3:192–197.
Gross SA, Guzmán GA, Wissenbach U, Philipp SE, Zhu MX, Bruns D, Cavalié A. 2009. TRPC5 is a Ca2+-activated channel functionally coupled to Ca2+-selective ion channels. J Biol Chem. 284:34423–34432.
Gutkin BS, Laing CR, Colby CL, Chow CC, Ermentrout GB. 2001. Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity. J Comput Neurosci. 11:121–134.
Haj-Dahmane S, Andrade R. 1998. Ionic mechanism of the slow afterdepolarization induced by muscarinic receptor activation in rat prefrontal cortex. J Neurophysiol. 80:1197–1210.
Hansel D, Mato G. 2013. Short-term plasticity explains irregular persistent activity in working memory tasks. J Neurosci. 33:133–149.
Hempel CM, Hartman KH, Wang XJ, Turrigiano GG, Nelson SB. 2000. Multiple forms of short-term plasticity at excitatory synapses in rat medial prefrontal cortex. J Neurophysiol. 83:3031–3041.
Itskov V, Hansel D, Tsodyks M. 2011. Short-term facilitation may stabilize parametric working memory trace. Front Comput Neurosci. 5:1–19.
Jahr CE, Stevens CF. 1990. Voltage dependence of NMDA-activated macroscopic conductances predicted by single-channel kinetics. J Neurosci. 10:3178–3182.
Kalmbach BE, Chitwood RA, Dembrow NC, Johnston D. 2013. Dendritic generation of mGluR-mediated slow afterdepolarization in layer 5 neurons of prefrontal cortex. J Neurosci. 33:13518–13532.
Katona I, Sperlágh B, Sík A, Käfalvi A, Vizi ES, Mackie K, Freund TF. 1999. Presynaptically located CB1 cannabinoid receptors regulate GABA release from axon terminals of specific hippocampal interneurons. J Neurosci. 19:4544–4558.
Kulkarni M, Zhang K, Kirkwood A. 2011. Single-cell persistent activity in anterodorsal thalamus. Neurosci Lett. 498:179–184.
Laing CR, Chow CC. 2001. Stationary bumps in networks of spiking neurons. Neural Comput. 13:1473–1494.
Lim S, Goldman M. 2013. Balanced cortical microcircuitry for maintaining information in working memory. Nat Neurosci. 16:1306–1314.
Liu YH, Wang XJ. 2001. Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. J Comput Neurosci. 10:25–45.
Llano I, Leresche N, Marty A. 1991. Calcium entry increases the sensitivity of cerebellar Purkinje cells to applied GABA and decreases inhibitory synaptic currents. Neuron. 6:565–574.
Machens CK, Romo R, Brody CD. 2010. Functional, but not anatomical, separation of “what” and “when” in prefrontal cortex. J Neurosci. 30:350–360.
Major G, Tank D. 2004. Persistent neural activity: prevalence and mechanisms. Curr Opin Neurobiol. 14:675–684.
Marder E, Goaillard JM. 2006. Variability, compensation and homeostasis in neuron and network function. Nat Rev Neurosci. 7:563–574.
Marsicano G, Lutz B. 1999. Expression of the cannabinoid receptor CB1 in distinct neuronal subpopulations in the adult mouse forebrain. Eur J Neurosci. 11:4213–4225.
Matveev V, Wang XJ. 2000. Differential short-term synaptic plasticity and transmission of complex spike trains: to depress or to facilitate? Cereb Cortex. 10:1143–1153.
Miller EK, Erickson CA, Desimone R. 1996. Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J Neurosci. 16:5154–5167.
Miller P, Brody CD, Romo R, Wang XJ. 2003. A recurrent network model of somatosensory parametric working memory in the prefrontal cortex. Cereb Cortex. 13:1208–1218.
Mongillo G, Barak O, Tsodyks M. 2008. Synaptic theory of working memory. Science. 319:1543–1546.
Murray JD, Anticevic A, Gancsos M, Ichinose M, Corlett PR, Krystal JH, Wang XJ. 2014. Linking microcircuit dysfunction to cognitive impairment: effects of disinhibition associated with schizophrenia in a cortical working memory model. Cereb Cortex. 24:859–872.
Ohno-Shosaku T, Maejima T, Kano M. 2001. Endogenous cannabinoids mediate retrograde signals from depolarized postsynaptic neurons to presynaptic terminals. Neuron. 29:729–738.
Partridge LD, Valenzuela CF. 1999. Ca2+ store-dependent potentiation of Ca2+-activated nonselective cation channels in rat hippocampal neurones in vitro. J Physiol. 521:617–627.
Pitler TA, Alger BE. 1992. Postsynaptic spike firing reduces synaptic GABAA responses in hippocampal pyramidal cells. J Neurosci. 12:4122–4132.
Renart A, Song P, Wang XJ. 2003. Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron. 38:473–485.
Rolls ET, Loh M, Deco G, Winterer G. 2008. Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nat Rev Neurosci. 9:696–709.
Romo R, Brody CD, Hernández A, Lemus L. 1999. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature. 399:470–473.
Sidiropoulou K, Lu FM, Fowler MA, Xiao R, Phillips C, Ozkan ED, Zhu MX, White FJ, Cooper DC. 2009. Dopamine modulates an mGluR5-mediated depolarization underlying prefrontal persistent activity. Nat Neurosci. 12:190–199.
Stokes MG, Kusunoki M, Sigala N, Nili H, Gaffan D, Duncan J. 2013. Dynamic coding for cognitive control in prefrontal cortex. Neuron. 78:364–375.
Strübing C, Krapivinsky G, Krapivinsky L, Clapham DE. 2001. TRPC1 and TRPC5 form a novel cation channel in mammalian brain. Neuron. 29:645–655.
Szatmary B, Izhikevich EM. 2010. Spike-timing theory of working memory. PLoS Comput Biol. 6:e1000879.
Tegnér J, Compte A, Wang XJ. 2002. The dynamical stability of reverberatory neural circuits. Biol Cybern. 87:471–481.
Trettel J, Levine ES. 2003. Endocannabinoids mediate rapid retrograde signaling at interneuron pyramidal neuron synapses of the neocortex. J Neurophysiol. 89:2334–2338.
Tsodyks M, Sejnowski T. 1995. Associative memory and hippocampal place cells. Int J Neural Syst. 6:81–86.
Tsodyks MV, Markram H. 1997. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA. 94:719–723.
Tuckwell H. 1988. Introduction to theoretical neurobiology. Cambridge (UK): Cambridge University Press.
Wang M, Yang Y, Wang CJ, Gamo NJ, Jin LE, Mazer JA, Morrison JH, Wang XJ, Arnsten AFT. 2013. NMDA receptors subserve persistent neuronal firing during working memory in dorsolateral prefrontal cortex. Neuron. 77:736–749.
Wang XJ. 2008. Decision making in recurrent neuronal circuits. Neuron. 60:215–234.
Wang XJ. 2002. Probabilistic decision making by slow reverberation in cortical circuits. Neuron. 36:955–968.
Wang XJ. 1999. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci. 19:9587–9603.
Wang XJ. 2001. Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci. 24:455–463.
Wang Y, Markram H, Goodman PH, Berger TK, Ma J, Goldman-Rakic PS. 2006. Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nat Neurosci. 9:534–542.
Wei Z, Wang XJ, Wang DH. 2012. From distributed resources to limited slots in multiple-item working memory: a spiking network model with normalization. J Neurosci. 32:11228–11240.
Wilson RI, Kunos G, Nicoll RA. 2001. Presynaptic specificity of endocannabinoid signaling in the hippocampus. Neuron. 31:453–462.
Wilson RI, Nicoll RA. 2001. Endogenous cannabinoids mediate retrograde signalling at hippocampal synapses. Nature. 410:588–592.
Yoshida M, Hasselmo ME. 2009. Persistent firing supported by an intrinsic cellular mechanism in a component of the head direction system. J Neurosci. 29:4945–4952.
Zhang K. 1996. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci. 16:2112–2126.
Zucker RS. 1989. Short-term synaptic plasticity. Annu Rev Neurosci. 12:13–31.