Abstract

The lateral intraparietal area (area LIP) contains a multimodal representation of extra-personal space. To examine this representation further, we trained rhesus monkeys on a predictive-cueing task. During this task, monkeys shifted their gaze to a visual target whose location was predicted by the location of an auditory or visual cue. We found that, when the sensory cue was at the same location as the visual target, the monkeys’ mean saccadic latency was shorter than when the sensory cue and the visual target were at different locations. This difference in mean saccadic latency was the same for auditory and visual cues. Despite the fact that the monkeys used auditory and visual cues in a similar fashion, LIP neurons responded more to visual cues than to auditory cues. This modality-dependent activity was also seen during auditory and visual memory-guided saccades, but to a significantly greater extent than during the predictive-cueing task. Additionally, we found that the firing rate of LIP neurons was inversely correlated with saccadic latency. This study provides further evidence that modality-dependent differences in LIP activity do not simply reflect differences in sensory processing but also reflect the cognitive and behavioral requirements of a task.

Introduction

The posterior parietal cortex receives input from different sensory modalities and combines this information to form a multimodal representation of extra-personal space (Duhamel et al., 1991; Andersen, 1997; Cohen and Andersen, 2004). For instance, neurons in the lateral intraparietal area (area LIP; Mazzoni et al., 1996; Stricanne et al., 1996; Grunewald et al., 1999; Linden et al., 1999; Mullette-Gillman et al., 2002; Gifford and Cohen, 2004) and the parietal reach region (Batista et al., 1999; Cohen and Andersen, 2000; Snyder et al., 2000; Cohen et al., 2002) code the locations of auditory and visual stimuli. Similarly, neurons in the ventral intraparietal area have overlapping auditory, visual and somatosensory spatial response fields (Duhamel et al., 1998; Schlack et al., 2000; Bremmer et al., 2002; Schlack et al., 2002). Neurons in these parietal areas can also be further modulated by vestibular and proprioceptive signals (Brotchie et al., 1995; Snyder et al., 1998; Bremmer et al., 2002; Cohen and Andersen, 2002).

In some parietal areas, particularly area LIP, neurons code aspects of a sensory stimulus in a modality-dependent manner. During fixation and saccade tasks, LIP neurons are modulated more by visual stimuli than by auditory stimuli (Linden et al., 1999; Mullette-Gillman et al., 2002; Gifford and Cohen, 2004). The response fields of LIP neurons are also more selective for visual-stimuli locations than for auditory-stimuli locations (Linden et al., 1999; Mullette-Gillman et al., 2002; Gifford and Cohen, 2004). In contrast to these observations, area LIP codes the locations of auditory and visual stimuli in the same, modality-independent reference frame (Cohen and Andersen, 2002).

The factors that underlie this modality-dependent coding are unclear. For instance, modality-dependent coding may simply reflect differences between auditory and visual sensory processing (Brown et al., 1978a,b, 1980; Wightman and Kistler, 1993; Blauert, 1997; Recanzone et al., 1998). An alternative possibility is that it reflects salience differences between auditory and visual stimuli (Kusunoki et al., 2000; Toth and Assad, 2002). For example, during saccade tasks, LIP neurons may respond more to visual stimuli than to auditory stimuli (Linden et al., 1999; Mullette-Gillman et al., 2002; Gifford and Cohen, 2004), because, in the context of planning eye movements, visual stimuli are more salient than auditory stimuli. A third possibility is that these coding differences may stem from modality-dependent differences in the behavioral responses of the monkeys. For instance, while monkeys (and other animals, including humans) can make gaze shifts to auditory or visual stimuli, there are, in general, modality-dependent differences between the kinetics and dynamics of these gaze shifts (Zambarbieri et al., 1982; Stein and Meredith, 1993; Baro et al., 1995; Groh and Sparks, 1996; Yao and Peck, 1997; Boucher et al., 2004).

The purpose of this study was to examine further how these factors contribute to modality-dependent LIP activity. We recorded LIP activity while monkeys participated in the predictive-cueing task. In this task, monkeys shifted their gaze to a visual target whose location was predicted by the location of an auditory or visual cue (Posner, 1980). The advantage of this task design was that it controlled for the salience and behavioral-response differences that may underlie modality-dependent LIP activity. First, differences between the task relevance (salience) of the auditory and visual stimuli could be quantified, since we could measure how the monkeys’ saccadic latencies were affected by the locations of the auditory or visual cues. Secondly, the monkeys’ behavioral response (gaze shifts to visual targets) was independent of cue modality.

We found that the monkeys’ saccadic latencies were affected by the location of a sensory cue: mean saccadic latency was shorter during trials in which the cue and target were at the same location than during trials in which they were at different locations. Importantly, the effect on mean saccadic latency was the same for auditory and visual cues. Despite this similarity, visual cues modulated LIP activity more than auditory cues. In a control task (memory-guided saccades), LIP activity was also modulated more by visual cues than by auditory cues. However, the difference between visual and auditory activity during this control task was significantly greater than the difference during the predictive-cueing task. Together, these data suggest that modality-dependent LIP activity was due to differences in the behavioral task and not wholly to differences in sensory processing. In later epochs of the predictive-cueing task, these modality-dependent differences were minimized: activity during the auditory and visual tasks was not reliably different while the monkeys were generating saccades to the visual-target location. The firing rate of LIP neurons during these latter portions of the trial was also negatively correlated with saccadic latency.

Materials and Methods

Subjects

Two female rhesus monkeys (Macaca mulatta) were used in these experiments. These monkeys are referred to as monkey Z and monkey E. Both monkeys were trained on all of the tasks described in this study. They weighed between 5.0 and 6.0 kg. All surgical, recording and training procedures were in accordance with the National Institutes of Health’s Guide for the Care and Use of Laboratory Animals and were approved by the Dartmouth Institutional Animal Care and Use Committee.

Surgical Procedures

The surgical procedures were carried out under aseptic, sterile conditions, using general anesthesia (isoflurane). These procedures were performed in a dedicated surgical suite operated by the Animal Resource Center at Dartmouth College.

In the first procedure, titanium bone screws were implanted in the skull, and a methylmethacrylate implant was constructed. A Teflon-insulated, 50-gauge stainless-steel wire coil was also implanted between the conjunctiva and the sclera; the wire coil allowed us to monitor the monkey’s eye position (Judge et al., 1980). Finally, a head-positioning cylinder (FHC-S2; Crist Instruments) was embedded in the implant; this cylinder connected to a primate chair (model CC; Crist Instruments) and stabilized the monkey’s head during behavioral training and recording sessions.

After the monkeys had learned the behavioral tasks (see below), a craniotomy was performed and a recording cylinder (ICO-J20; Crist Instruments) was implanted. This surgical procedure provided chronic access to area LIP for neurophysiological recording. The recording cylinder was centered over stereotaxic coordinates 6 posterior and 12 lateral, which overlap with the intraparietal sulcus.

Experimental Setup

Behavioral training and recording sessions were conducted in a darkened room with sound-attenuating walls. The walls and floor of the room were covered with anechoic foam insulation (Sonomatt; Auralex). Since the room was darkened, the speakers that produced the auditory stimuli were not visible to the monkeys. While inside the room, the monkeys were seated in the primate chair. The primate chair was placed in the center of a 152 cm diameter, two-dimensional, magnetic coil (Riverbend Instruments) that was part of a system monitoring the monkeys’ eye position (Judge et al., 1980). Eye position was sampled with an analog-to-digital converter (PXI-6052E; National Instruments Inc.) at a rate of 1.0 kHz.

Auditory and visual stimuli were presented through a custom-built system (Walsh Electronics). This system contained two XY gantries capable of positioning a stimulus assembly, relative to the center of each monkey’s head, from 20° to the left of straight ahead to 20° to the right and ±20° of elevation with an accuracy of <0.4°. Each stimulus assembly consisted of a speaker (TWO25V2; Audax) and a red light-emitting diode (LED; 276–307; Radio Shack); the LED was centered on the speaker face. In addition, another LED, which served as the ‘central’ visual fixation light, was placed directly in front of the monkeys at their eye level. Custom-designed software (LabVIEW; National Instruments) interfaced with a microcontroller (Basic Stamp 2SX; Parallax Inc.) and a stepper-motor control board (ST400nt; RMV Electronics) to control the gantry motors (LA23ACH-2; EADmotors) and the onset/offset of auditory or visual stimuli.

While the gantry motors produced sounds, we did not observe any behavior suggesting that the monkeys used these sounds to track the location of the stimulus assemblies. Nevertheless, a broad-band masking sound was presented during gantry movement. The duration of this masker was constant: it began before the gantries moved and ended after the gantries had stopped moving. Consequently, the monkeys could not use the masker duration as a cue for the duration of gantry movement. Also, during training sessions, to minimize further the possibility that the monkeys were using the gantry sounds as a reliable indicator of stimulus location, both stimulus assemblies were moved on every trial, even if only one assembly generated a sensory stimulus during a task.

Behavioral Tasks

Three different tasks were used: the visually guided saccade task, the predictive-cueing task, and the memory-guided saccade task.

The stimulus locations in each of the three tasks were the same. Relative to the central LED, the stimuli were (in xy coordinates) at (±10°, 0°), (±10°, ±10°), (+3°, +10°) and (–3°, –10°); negative x-coordinates were locations to the left of the central LED and negative y-coordinates were locations below the central LED. The two stimulus locations at (+3°, +10°) and (–3°, –10°) were offset from x = 0° so that the width of the XY gantry would not block the central LED.

Visually Guided Saccade Task

In the visually guided saccade task (Fig. 1A), 400–700 ms after the monkey fixated the central LED, a visual stimulus was presented at one of the eight peripheral locations. Concomitant with the introduction of the peripheral stimulus, the central LED was extinguished. The extinction of the central LED signaled the monkey to saccade to the location of the peripheral stimulus. After saccading, she maintained her gaze at the stimulus location for an additional 400–500 ms to be rewarded. Throughout the trial, the monkey was required to maintain her gaze within 2° of the central LED or the peripheral stimulus.

Memory-guided Saccade Task

In the memory-guided saccade task (Fig. 1B), 400–700 ms after the monkey fixated the central LED, a sensory cue was presented at one of the eight peripheral locations. The monkey maintained her gaze at the location of the central LED during the presentation of the peripheral sensory cue and for an additional 400–700 ms after cue offset. Following this delay period, the central LED was extinguished and the monkey saccaded to the remembered location of the peripheral cue. After saccading, she maintained her gaze at the remembered location for an additional 400–500 ms to be rewarded. Prior to saccading to the location of the peripheral cue, the monkey maintained her gaze within 2° of the central LED; at the remembered location, the fixation requirement was 5°. In the visual memory-guided saccade task, the sensory cue was a visual stimulus and, in the auditory memory-guided saccade task, the sensory cue was an auditory stimulus.

Predictive-cueing Task

The predictive-cueing task is analogous to the cueing paradigm introduced by Posner (1980). In our version (Fig. 1C), there were two sensory-cue locations and two visual-target locations. On any given trial, only one sensory-cue location and one visual-target location were used. On 80% of the successful trials, the sensory cue and the visual target were at the same location (a ‘predictive’ trial) and, on 20% of the successful trials, the cue and the target were at different locations (a ‘non-predictive’ trial). By varying the modality of the sensory cue, two variants of the predictive-cueing task were generated: (i) in the visual predictive-cueing task, the sensory cue was a visual stimulus and (ii) in the auditory predictive-cueing task, the sensory cue was an auditory stimulus.

A schematic of the time course of the predictive-cueing task is shown in Figure 1C. A trial began with the monkey fixating the central LED; 400–500 ms after fixation began, a 300 ms sensory cue was presented; 200–250 ms after cue offset, the central LED was extinguished. Concomitant with central-LED offset, the visual target was introduced, which signaled the monkey to saccade to its location. She maintained her gaze within 2° of the visual-target location for 400–500 ms to be rewarded.

The locations of the cues and targets depended on each neuron’s spatial response field; a neuron’s response field was generated from data collected during the visually guided saccade task (see Recording Strategy for more details). The location that elicited the maximal response during the visually guided saccade task was designated as one cue-target location (the ‘IN’ location). The location that was 180° contralateral to the IN location was the second cue-target location (the ‘OUT’ location). For example, if a neuron responded maximally at (+10°, 0°), the IN location would be (+10°, 0°) and the OUT location would be (–10°, 0°). During predictive trials, the sensory cue and the visual target were either both at the IN location (Fig. 1Da) or both at the OUT location (Fig. 1Db). During non-predictive trials, (i) the sensory cue was at the IN location and the visual target was at the OUT location (Fig. 1Dc) or (ii) the sensory cue was at the OUT location and the visual target was at the IN location (Fig. 1Dd).

As a control, the monkeys participated in a ‘neutral’ version of the predictive-cueing task in which sensory cues (either auditory or visual stimuli) were presented simultaneously at both cue locations. As a consequence of this design, the sensory cues were not predictive of the future location of the visual target.

Auditory and Visual Stimuli

Auditory stimuli were generated in a digital-signal-processing environment based on the AP2 DSP card (Tucker Davis Technologies). The stimuli were a series of 100 ms amplitude-modulated sound bursts (Fig. 2; Example 3, Tucker-Davis-Technologies DSP manual). Each sound burst had a different fundamental frequency that was an integer multiple (between 1 and 13) of 0.116 kHz. During each trial, ‘fresh’ stimuli were generated by randomly combining sound bursts with different fundamental frequencies. The frequency resolution of the spectrum of each sound burst was 0.003 kHz, and each sound burst had an onset time of 2 ms and an offset time of 50 ms. Each auditory stimulus was presented at 78 dB sound pressure level (SPL, relative to 20 µPa, A-weighting). An auditory stimulus was passed through a D/A converter (DA1; Tucker Davis Technologies), an anti-aliasing filter (FT6-92; Tucker Davis Technologies), an amplifier (MPA-250; Radio Shack) and an attenuator (PA4; Tucker Davis Technologies) and was finally transduced by one of the Audax speakers.
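For concreteness, the sketch below illustrates how a stimulus sequence of this form could be synthesized offline in Python. The sampling rate, the pure-tone carrier and the linear onset/offset ramps are illustrative assumptions; the actual bursts were amplitude-modulated and were generated on the Tucker Davis Technologies AP2 hardware.

```python
import numpy as np

FS = 48_000   # assumed sampling rate (Hz); the DSP-hardware rate is not stated
F0 = 116.0    # base fundamental frequency (Hz), i.e. 0.116 kHz

def make_burst(multiple, dur=0.100, rise=0.002, fall=0.050):
    """One 100 ms burst whose fundamental is an integer multiple (1-13) of
    116 Hz, with a 2 ms onset ramp and a 50 ms offset ramp."""
    t = np.arange(int(dur * FS)) / FS
    carrier = np.sin(2 * np.pi * F0 * multiple * t)  # simplified carrier
    env = np.ones_like(t)
    n_rise, n_fall = int(rise * FS), int(fall * FS)
    env[:n_rise] = np.linspace(0.0, 1.0, n_rise)
    env[-n_fall:] = np.linspace(1.0, 0.0, n_fall)
    return carrier * env

def make_stimulus(n_bursts=3, rng=np.random.default_rng()):
    """A 'fresh' stimulus: a random combination of bursts with different
    fundamental frequencies (three 100 ms bursts span the 300 ms cue)."""
    multiples = rng.choice(np.arange(1, 14), size=n_bursts, replace=False)
    return np.concatenate([make_burst(m) for m in multiples])
```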

Visual stimuli at the peripheral locations were produced by illuminating the LED on a stimulus assembly. The visual stimulus used for central fixation was produced by illuminating the central LED. Each LED subtended <0.2° of visual angle. The luminance of each LED was 12.6 cd/m².

We used this rather complex auditory stimulus, because, during early training sessions, we determined that it provided more robust and reliable behavior than band-limited white noise. We speculate that the complex frequency spectrum and the variation in the spectrum from trial to trial may have made these stimuli more salient (Kusunoki et al., 2000) than white noise. It is important to note that, while the auditory and visual stimuli had different spectral complexities, their effect during the predictive-cueing task was similar (see Results).

Recording Procedures

Single-unit extracellular recordings were obtained with varnish-coated tungsten microelectrodes (∼2 MΩ impedance at 1 kHz; Frederick Haer & Co.) that were seated inside a stainless-steel guide tube. The electrode and guide tube were advanced into the brain with a hydraulic microdrive (MO-95; Narishige). The electrode signal was amplified by a factor of 10⁴ (MDA-4I; Bak Electronics) and band-pass filtered (model 3700; Krohn-Hite) between 0.6 and 6.0 kHz. Single-unit activity was isolated using a variable-delay, two-window, time-voltage discriminator (Model DDIS-1; Bak Electronics). Neural events that passed through both of the adjustable time-voltage windows were classified as originating from a single neuron. When a neural event passed through both windows, the discriminator produced a TTL pulse. This pulse was relayed to a counting circuit on a data-acquisition board (PXI-6052E; National Instruments Inc.) that recorded the time of the pulse occurrence with an accuracy of 0.01 ms. The time of occurrence of each action potential was stored for on- and off-line analyses. To help ensure that recordings were made from area LIP, rather than area 7a, which lies on the brain surface (Andersen et al., 1990; Lewis and Van Essen, 2000), the electrode was advanced 3.0 mm below the dura at the start of each recording session.

Recording Strategy

Every LIP neuron that could be isolated was examined. Once a neuron was isolated, a monkey participated in a block of trials of the visually guided saccade task. Since LIP neurons code comparable regions of auditory and visual space and since visual responses predict the presence of auditory responses (Mazzoni et al., 1996; Linden et al., 1999; Mullette-Gillman et al., 2002; Gifford and Cohen, 2004), this procedure did not bias us against finding LIP neurons sensitive to auditory stimuli. Within a block of trials, the stimulus location varied in a balanced, pseudorandom order. Typically, at least five trials per stimulus location were collected. Neural activity collected prior to the monkey’s saccade to the visual target or during the saccade was correlated with visual-target location to form a response field; response fields constructed from neural activity during these two periods are highly correlated (Barash et al., 1991a; Mazzoni et al., 1996; Linden et al., 1999; Mullette-Gillman et al., 2002).

If the response profile indicated that the neuron’s firing rate was modulated by the visually guided saccade task, we recorded the activity of this neuron during the predictive-cueing task or the memory-guided saccade task. The locations of the cues and targets during the predictive-cueing task depended on the neuron’s response during the visually guided saccade task: the location that elicited the maximum response was the IN location and the location 180° contralateral was the OUT location. The order of the monkeys’ participation in the predictive-cueing and memory-guided saccade tasks varied randomly on a neuron-by-neuron basis. During the predictive-cueing task, data from 50–100 trials were collected at each of the two visual-target locations; during the memory-guided saccade task, data from at least five trials were collected at each of the eight stimulus locations.

After a block of trials of the predictive-cueing task or the memory-guided saccade task was complete, the monkey participated in another block of trials. As long as the isolation of the neuron remained stable, this procedure was repeated until the monkey had participated in the auditory and visual predictive-cueing tasks and in the auditory and visual memory-guided saccade tasks.

Data Analysis

Behavioral Data

For each correct trial of the predictive-cueing task, the monkeys’ saccadic latency, relative to the onset of the visual target, was calculated. After the eye-position data were smoothed, a computer algorithm detected saccade onset as the time at which eye velocity exceeded 40°/s. A t-test examined whether mean saccadic latency varied as a function of the predictive nature of the cue or of cue modality.
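A minimal sketch of this detection step is given below, assuming a two-column array of horizontal and vertical eye position sampled at 1 kHz. The text does not specify the smoothing method, so the boxcar filter and its window length here are illustrative.

```python
import numpy as np

def saccade_onset(eye_pos, fs=1000.0, vel_thresh=40.0, smooth_n=5):
    """Return the index of the first sample at which smoothed eye velocity
    exceeds vel_thresh (deg/s), or None if the threshold is never crossed.

    eye_pos: (n_samples, 2) array of horizontal/vertical position (deg)."""
    kernel = np.ones(smooth_n) / smooth_n
    smoothed = np.column_stack(
        [np.convolve(eye_pos[:, i], kernel, mode="same") for i in (0, 1)]
    )
    vel = np.gradient(smoothed, 1.0 / fs, axis=0)  # deg/s, per channel
    speed = np.hypot(vel[:, 0], vel[:, 1])         # radial eye speed
    above = np.flatnonzero(speed > vel_thresh)
    return int(above[0]) if above.size else None
```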

Neural Data

Neural data were divided into different task periods. The cue period was the 300 ms period that began with cue onset; the time of occurrence of action potentials was aligned relative to sensory-cue onset. During the predictive-cueing task, the response period began following target onset and ended prior to saccade onset; the time of occurrence of action potentials was aligned relative to visual-target onset.

A modulation index examined, on a neuron-by-neuron basis, how the predictive-cueing task modulated LIP activity. The general formula for this modulation index was (A − B)/(A + B), where A and B were the mean firing rates (number of action potentials divided by the duration of the task period; spikes/s) of a neuron in two conditions. During the cue period, A was the mean firing rate of a LIP neuron when a sensory cue was presented at the IN location and B was the mean firing rate when a sensory cue was presented at the OUT location. During the response period, A was the mean firing rate of a LIP neuron when the visual target was at the IN location and the sensory cue was predictive and B was the mean firing rate when the visual target was at the IN location and the sensory cue was non-predictive.
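As a sketch, the index for one neuron and one task period could be computed as follows; the per-trial spike-count inputs and variable names are illustrative.

```python
import numpy as np

def modulation_index(counts_a, counts_b, dur_a, dur_b):
    """(A - B) / (A + B), where A and B are mean firing rates (spikes/s):
    mean spike count divided by the task-period duration (s).

    For the cue period, condition A is 'cue at the IN location' and
    condition B is 'cue at the OUT location'."""
    a = np.mean(counts_a) / dur_a
    b = np.mean(counts_b) / dur_b
    return (a - b) / (a + b)  # +1: responds only in A; -1: only in B
```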

In another series of analyses, for each LIP neuron recorded during the predictive-cueing task, on a trial-by-trial basis, we examined the relationship between cue-period or response-period firing rate and saccadic latency. To account for neuron-by-neuron differences in the magnitude and variance of the firing-rate and saccadic-latency values, firing rate and saccadic latency were normalized with a Z-score transformation. These normalized values were then correlated with a Spearman-rank test.
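A sketch of this trial-by-trial analysis for a single neuron, assuming one firing-rate and one latency value per trial, is shown below. Because the Spearman test operates on ranks, the Z-score transformation does not change the coefficient for a single neuron; its purpose is to place values from different neurons on a common scale so that they can be pooled.

```python
import numpy as np
from scipy import stats

def rate_latency_correlation(rates, latencies):
    """Spearman correlation between normalized firing rate and normalized
    saccadic latency across the trials of one neuron."""
    z_rates = stats.zscore(rates)  # (x - mean) / SD, per neuron
    z_lats = stats.zscore(latencies)
    rho, p = stats.spearmanr(z_rates, z_lats)
    return rho, p
```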

Verification of Recording Locations

Recording locations were confirmed by visualizing the recording microelectrode in the lateral bank of the intraparietal sulcus of each monkey through magnetic-resonance images. These images were obtained at the Dartmouth Brain Imaging Center using a GE 1.5 T scanner (3-D T1-weighted gradient echo pulse sequence with a 5 in. receive-only surface coil) (Groh et al., 2001).

Results

Behavioral Data

The predictive nature of the auditory cue and visual cue affected the mean saccadic latencies of the two monkeys in a similar manner. Figure 3 summarizes the behavioral data that were collected while we recorded LIP activity during the visual predictive-cueing task or the auditory predictive-cueing task. Data from different visual-target locations were combined, because visual-target location did not systematically affect saccadic latency.

The behavioral data in Figure 3 are displayed as a function of the ‘predictive effect’ for each block of trials of the predictive-cueing task. The predictive effect is the difference between the mean saccadic latency during non-predictive trials and the mean saccadic latency during predictive trials. A predictive effect >0 ms indicates that the monkeys’ mean saccadic latency was shorter during predictive trials than during non-predictive trials. In contrast, a predictive effect <0 ms indicates that the monkeys’ mean saccadic latency was shorter during non-predictive trials than during predictive trials.
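A sketch of the predictive-effect computation for one block of trials, with hypothetical variable names:

```python
import numpy as np
from scipy import stats

def predictive_effect(lat_predictive, lat_nonpredictive):
    """Predictive effect (ms) for one block of trials: mean non-predictive
    latency minus mean predictive latency. Values >0 ms indicate shorter
    latencies on predictive trials."""
    return np.mean(lat_nonpredictive) - np.mean(lat_predictive)

# Across blocks, a one-sample t-test asks whether the distribution of
# predictive effects is reliably different from 0 ms:
# effects = [predictive_effect(p, n) for p, n in blocks]
# t, p_value = stats.ttest_1samp(effects, popmean=0.0)
```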

For monkey Z, during the visual predictive-cueing task, her mean saccadic latency during predictive trials was 217.9 ms (SD = 30.5) and her mean saccadic latency during non-predictive trials was 236.7 ms (SD = 24.1). The mean predictive effect was 18.6 ms (SD = 8.0). A t-test indicated that this distribution of predictive-effect values was significantly greater than 0 ms (t = 16.4, df = 49, P < 0.05). A similar analysis during the auditory predictive-cueing task indicated that her mean saccadic latency during predictive trials was 215.7 ms (SD = 23.6) and her mean saccadic latency during non-predictive trials was 232.1 ms (SD = 18.4). The mean predictive effect during this task was 16.7 ms (SD = 8.8), a distribution significantly greater than 0 ms (t = 13.6, df = 49, P < 0.05). The distributions of predictive-effect values during the visual and the auditory predictive-cueing tasks were not reliably different (t = 1.1, df = 98, P > 0.05).

A similar pattern was observed for monkey E. During the visual predictive-cueing task, her mean saccadic latency during predictive trials was 261.3 ms (SD = 34.5) and her mean saccadic latency during non-predictive trials was 275.6 ms (SD = 32.2). Monkey E’s mean predictive effect was 14.1 ms (SD = 9.4) during the visual predictive-cueing task. During the auditory predictive-cueing task, her mean saccadic latency during predictive trials was 264.8 ms (SD = 26.5) and her mean saccadic latency during non-predictive trials was 275.6 ms (SD = 19.7). During this task, her mean predictive effect was 10.1 ms (SD = 6.2). Both of these distributions of predictive-effect values were significantly greater than 0 ms (visual predictive-cueing task, t = 10.1, df = 45, P < 0.05; auditory predictive-cueing task, t = 9.8, df = 32, P < 0.05). Also, the distributions of predictive-effect values during the visual and the auditory predictive-cueing tasks were not reliably different (t = –1.7, df = 76, P > 0.05).

Two points are important to address. First, the mean saccadic latencies of monkeys Z and E were in the range of mean values reported previously in the literature (Li et al., 1999; Powell and Goldberg, 2000; Wardak et al., 2002). Secondly, while the mean predictive-effect values of monkeys Z and E were somewhat different, both parametric (t-test: t = –1.5, df = 176, P > 0.05) and non-parametric (Wilcoxon rank-sum test: rank sum = 6890.5, P > 0.05) tests indicated that these differences were not reliable.

As a control, the monkeys participated in blocks of trials of the neutral version of the predictive-cueing task (see Behavioral Tasks). To analyze the monkeys’ mean saccadic latency during the neutral predictive-cueing task, 80% of the successful trials were randomly assigned to be ‘predictive’ and 20% of the successful trials were randomly assigned to be ‘non-predictive’. As expected, when the monkeys participated in the neutral predictive-cueing task, the sensory cues did not systematically affect the monkeys’ responses. During the neutral visual predictive-cueing task, the mean predictive effect was 0.65 ms (SD = 3.6), which did not differ reliably (t = 0.8, df = 19, P > 0.05) from 0 ms. Similarly, during the neutral auditory predictive-cueing task, the mean predictive effect was –1.1 ms (SD = 2.9). This mean value also did not differ reliably (t = –1.3, df = 13, P > 0.05) from 0 ms.
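A sketch of this random-assignment control for one neutral block is given below; the 80/20 split mirrors the proportions of the main task, so the expected predictive effect is 0 ms.

```python
import numpy as np

def neutral_predictive_effect(latencies, rng=np.random.default_rng()):
    """Randomly label 80% of the successful trials 'predictive' and 20%
    'non-predictive', then compute the predictive effect as usual."""
    n = latencies.size
    is_predictive = rng.permutation(n) < int(0.8 * n)
    return latencies[~is_predictive].mean() - latencies[is_predictive].mean()
```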

In summary, these results indicate that both monkey Z and monkey E used the sensory cues to predict the future location of the visual target. As a result, their mean saccadic latencies were affected by the cue locations. Moreover, for both monkeys, the predictive nature of the auditory and visual cues modulated their mean saccadic latencies in a comparable manner.

Neural Data

Neural data were pooled for presentation, since the results were similar for both monkeys. Ninety-six LIP neurons were recorded during the visual predictive-cueing task and 82 were recorded during the auditory predictive-cueing task. Thirty LIP neurons were recorded during the visual memory-guided saccade task and 40 were recorded during the auditory memory-guided saccade task.

Figure 4 shows a magnetic-resonance image of the recording microelectrode in the cortex of monkey Z. The tip of the microelectrode was halfway down the lateral bank of the intraparietal sulcus. This microelectrode’s location and the location of microelectrodes from monkey E (data not shown) are in agreement with previously published reports of area LIP (Andersen et al., 1990; Platt and Glimcher, 1998; Eskandar and Assad, 1999; Grunewald et al., 1999; Linden et al., 1999; Powell and Goldberg, 2000; Shadlen and Newsome, 2001).

The recorded neurons had properties associated with area LIP. LIP neurons are modulated by the presentation of a sensory stimulus, maintain high levels of activity during the delay period of memory-guided saccades and are modulated when a monkey generates a saccade (Gnadt and Andersen, 1988; Barash et al., 1991a,b; Mazzoni et al., 1996; Platt and Glimcher, 1998; Eskandar and Assad, 1999; Linden et al., 1999; Powell and Goldberg, 2000; Shadlen and Newsome, 2001). An example of a LIP neuron with these properties is shown in Figure 5; this neuron was recorded while a monkey made memory-guided saccades to visual stimuli.

LIP Responses to Auditory and Visual Cues

Since the predictive effect was comparable for auditory and visual cues (Fig. 3), we hypothesized that LIP neurons would be modulated similarly by auditory and visual cues. The example neuron shown in Figure 6 is contrary to this hypothesis; for illustrative purposes, only predictive trials at the IN or OUT locations are shown. During the visual predictive-cueing task, this neuron responded robustly to the visual cue at the IN location (Fig. 6A) and maintained a high firing rate throughout the remainder of the trial. In contrast, during the auditory predictive-cueing task (Fig. 6B), the auditory cue did not modulate the firing rate of the neuron; the neuron was not modulated until the monkey generated a saccade to the visual target.

Figure 7 shows normalized responses of LIP activity during auditory and visual predictive-cueing tasks. Both auditory and visual cues modulated the firing rate of LIP neurons. On average, the firing rate of LIP neurons was significantly higher (t-test, P < 0.05) during the visual-cue period than during the auditory-cue period (Fig. 7A,D). These differences persisted following cue offset (t-test, P < 0.05). However, following target onset and the monkeys’ saccade to its location (Fig. 7B,E and C,F, respectively), activity during the two predictive-cueing tasks was not reliably different (t-test, P > 0.05).

Is this modality-dependent modulation of LIP activity dependent on the requirements of the behavioral task? To address this question, we examined LIP activity during auditory and visual memory-guided saccades. Comparing LIP activity during the predictive-cueing task with activity collected during the memory-guided saccade task is appropriate because, in both tasks, sensory cues indicated the locations of subsequent eye movements and the eye movements occurred following cue offset.

The normalized response profiles of LIP activity during memory-guided saccades are shown in Figure 8. To facilitate comparison with data shown in Figure 7, we restricted this population analysis to memory-guided saccades made to the IN or OUT locations. LIP neurons were modulated by both the auditory and visual cues. However, the firing rate during the visual-cue period was significantly (t-test, P < 0.05) higher than the firing rate during the auditory-cue period. Following cue offset, LIP activity during the visual memory-guided saccade task remained higher than activity during the auditory memory-guided saccade task. But, during later portions of the delay period and during the saccade (data not shown), LIP neurons did not have reliably different mean firing rates (t-test, P > 0.05).

While LIP neurons were modulated in a modality-dependent manner during the predictive-cueing and memory-guided saccade tasks, a comparison of Figures 7 and 8 indicates that the degree of modulation was substantially greater during the memory-guided saccade task. To quantify this observation, we calculated the area (in units of ms × normalized firing rate) between the population auditory and visual response profiles during the cue period of both tasks and used a bootstrap algorithm to estimate the confidence intervals of these areal calculations; this analysis was limited to those trials when the cue was at the IN location. The area between the response profiles of the visual predictive-cueing task and the auditory predictive-cueing task was 16.5 (SD = 1.8). The area between the response profiles of the visual memory-guided saccade task and the auditory memory-guided saccade task was 53.5 (SD = 4.7), an areal difference significantly (t-test, P < 0.05) greater than that observed during the predictive-cueing task.
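A sketch of this area-plus-bootstrap computation is given below. The resampling unit (trials, here) and the bin width are assumptions; the text does not specify them.

```python
import numpy as np

def area_between_profiles(vis, aud, dt_ms=1.0, n_boot=1000,
                          rng=np.random.default_rng()):
    """Area (ms x normalized firing rate) between the mean visual and mean
    auditory cue-period profiles, with a bootstrap SD.

    vis, aud: (n_trials, n_timebins) arrays of normalized firing rates."""
    def area(v, a):
        return np.sum(np.abs(v.mean(axis=0) - a.mean(axis=0))) * dt_ms

    observed = area(vis, aud)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        v = vis[rng.integers(0, len(vis), len(vis))]  # resample with replacement
        a = aud[rng.integers(0, len(aud), len(aud))]
        boot[i] = area(v, a)
    return observed, boot.std()
```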

We also examined the spatial selectivity of LIP responses during the cue period of the predictive-cueing task. Spatial selectivity was quantified with a modulation index. This analysis was restricted to the 65 neurons from which we obtained data during both the visual and auditory predictive-cueing tasks so that we could compute a neuron-by-neuron correlation. This correlation is shown in Figure 9. Index values were positively correlated but were systematically larger during the visual predictive-cueing task than during the auditory predictive-cueing task. Consistent with this observation, the slope of the least-mean-squares line that was fit to the data was 0.26, with a 95% confidence interval of [0.05, 0.49]. Because this confidence interval does not include zero, the slope was significantly different from zero (P < 0.05). Also, because the interval does not include a slope of 1, the hypothesis that the modulation-index values during the visual and auditory predictive-cueing tasks were the same can be rejected at P < 0.05. The Spearman rank correlation coefficient also indicated that these values were significantly (P < 0.05) correlated (rs = 0.44). Together, these analyses indicate that, while LIP neurons coded comparable regions of space, they were more spatially selective for visual stimuli than for auditory stimuli.
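The slope test can be sketched as follows. The regression direction (auditory index as a function of visual index) is an assumption, as is the use of the slope’s standard error to form the confidence interval.

```python
import numpy as np
from scipy import stats

def slope_and_correlation(vis_idx, aud_idx, alpha=0.05):
    """Least-squares slope with a 95% confidence interval, plus the
    Spearman rank correlation between the two sets of modulation indices.

    A CI that excludes 0 indicates a reliable slope; a CI that excludes 1
    rejects the hypothesis that the two sets of indices are the same."""
    res = stats.linregress(vis_idx, aud_idx)
    t_crit = stats.t.ppf(1 - alpha / 2, df=len(vis_idx) - 2)
    ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)
    rho, _ = stats.spearmanr(vis_idx, aud_idx)
    return res.slope, ci, rho
```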

Neural Correlates of the Effect of the Predictive Cue on Saccadic Latency

Since the monkeys’ mean saccadic latency differed on predictive and non-predictive trials (Fig. 3), we hypothesized that firing rates of LIP neurons may be correlated with saccadic latency. Specifically, we hypothesized a negative correlation: as firing rate increased, saccadic latency decreased. To test this hypothesis, we correlated, on a trial-by-trial basis, firing rate with saccadic latency for each recorded LIP neuron.

First, we examined data during predictive trials. The panels in Figure 10A show the distributions of correlation values for predictive trials when both the cue and target were at the IN location. During the cue period (Fig. 10Aa,Ab), the mean correlation values were shifted slightly toward negative values (Table 1) but only the mean correlation value during the auditory predictive-cueing task (Fig. 10Ab) was significantly (P < 0.05; Table 1) less than zero. During the response periods of both the auditory and visual predictive-cueing tasks (Fig. 10Ac,Ad), we noted a significant (P < 0.05; Table 1) relationship between firing rate and saccadic latency. Not surprisingly, the mean correlation values during the response period were stronger than those during the cue period: during the response period, the visual target was present and the monkeys were preparing their saccades.

The distributions of correlation values during the response periods of non-predictive trials are shown in Figure 10B. The data in this figure were from those trials when the target was at the IN location. We did not examine cue-period activity, since the cue and target were at different locations. The mean correlation values of these distributions were comparable to those observed during the response period of predictive trials at the IN location (Fig. 10Ac,Ad), but only the distribution generated from data during the visual predictive-cueing task (Fig. 10Ba) was significantly (P < 0.05; Table 1) less than zero.

Finally, we examined the distributions of correlation values when both the cue and the target were at the OUT location. These data are shown in Figure 10C. The mean correlation values during both the cue and response periods were not reliably different from zero (Table 1).

The observation that firing rate during the response period tended to correlate negatively with saccadic latency (Fig. 10 and Table 1) led us to consider whether the firing rate of neurons during the response period of predictive trials was higher than the firing rate during non-predictive trials. The neuron in Figure 11 is consistent with this hypothesis. During predictive trials of the visual predictive-cueing task (Fig. 11A, black raster plot and histogram), onset of the visual target increased the already elevated firing rate of this neuron. During non-predictive trials (Fig. 11A, grey raster plot and histogram), the visual target did not modulate the firing rate of this neuron until ∼200 ms later, at approximately the time when the monkey initiated her saccade to the visual-target location. A similar pattern was noted during the auditory predictive-cueing task (Fig. 11B, compare black and grey data). We should note that the increased activity during the response period of predictive trials may stem from a memory trace of the cue at the IN location (Gnadt and Andersen, 1988) that persists following cue offset (compare activity before and after time zero).

To quantify these observations, we used a modulation index to compare differences in response-period firing rate during predictive and non-predictive trials when the target was at the IN location. The results of this analysis are shown in Figure 12 and support the hypothesis that, on average, the firing rates of LIP neurons were higher during predictive trials than during non-predictive trials. During the visual predictive-cueing task, the average modulation-index value was 0.07 (standard error of mean = 0.02). This distribution of modulation-index values was significantly greater (t = 4.3, df = 95, P < 0.05) than 0, indicating that, on average, the firing rate during predictive trials was higher than the firing rate during non-predictive trials. Similarly, during the auditory predictive-cueing task, the average modulation-index value was 0.05 (standard error of mean = 0.02), which was significantly greater (t = 3.6, df = 81, P < 0.05) than 0. Importantly, these two distributions of modulation-index values were not reliably different (t = 0.77, df = 176, P > 0.05).

Discussion

In this study, we examined the modality-dependent nature of LIP activity. We recorded LIP activity while rhesus monkeys participated in the predictive-cueing task. This task design had two advantages, in that it controlled for potential factors that may underlie modality-dependent LIP activity. First, the monkeys’ behavioral response (gaze shifts to visual targets) was independent of cue modality. Secondly, differences between the task relevance (salience) of the auditory and visual stimuli could be quantified, because we could measure how the monkeys’ saccadic latencies were affected by the locations of the auditory or visual cues.

We found that the monkeys’ mean saccadic latency during predictive trials was shorter than their mean saccadic latency during non-predictive trials (Fig. 3) when they were cued with either auditory or visual stimuli (Petersen et al., 1987; Bowman et al., 1993; Robinson and Kertzman, 1995; Witte et al., 1996; Davidson and Marrocco, 2000; Bell and Munoz, 2002). Importantly, the difference in mean saccadic latency during the visual predictive-cueing task was not reliably different from the difference during the auditory predictive-cueing task. Despite this similarity, the firing rates of LIP neurons were modality dependent: on average, visual cues elicited higher firing rates than auditory cues. When we examined LIP activity during the memory-guided saccade task, we also found that visual cues elicited higher firing rates than auditory cues. However, the difference between visual and auditory activity during the memory-guided saccade task was significantly greater than the difference during the predictive-cueing task. Together, these data indicate that modality-dependent LIP activity was due to differences in the behavioral task and not wholly due to differences in sensory processing. We also found that LIP activity was negatively correlated with saccadic latency. Below, we compare our results with previous work, examine the representation of sensory stimuli in area LIP, and examine the role that area LIP may play in the predictive-cueing task.

Comparison with Previous Studies

Robinson et al. (1995) examined the role of the parietal cortex in covert attention using a task similar to our predictive-cueing task. There are several key differences between our study and the Robinson et al. study. First, while we focused on area LIP, Robinson et al. usually recorded from neurons that were posterior to the fundus of the intraparietal sulcus. Secondly, in our study, monkeys made saccades to the target location, whereas, in the Robinson et al. study, monkeys indicated the presence of a visual target by releasing a bar. Thirdly, in our study, the cue and target were at the same location on most (80%) trials, whereas, in the previous study, the cue and target were often at different locations. Finally, in our study, we did not vary the cue-target interval, whereas Robinson et al. systematically varied this interval.

Despite these differences, we can compare our data with those reported by Robinson et al. (1995) if we collapse their separate populations of parietal neurons into a single dataset to make it compatible with our analyses and if we restrict our analysis of their data to an interval that is equivalent to our response period. The average predictive-effect value from the data presented in Robinson et al. was 13.3 ms, a value comparable to the mean value (∼14 ms) that we report during the predictive-cueing tasks (Fig. 3). If we recalculate the neural data presented in Robinson et al. using our modulation index, their average value during the interval that approximates our response period was 0.12, a value that is somewhat higher than our mean values of 0.07 and 0.05 from the visual and auditory predictive-cueing tasks, respectively. These different values may be due to substantial differences in analysis techniques (e.g. subtraction of background (baseline) activity, duration of the analysis period, or normalization techniques) between the two studies. While these values differ, we emphasize that, on average, both studies reported that the firing rate of parietal neurons during the response period was higher during predictive trials than non-predictive trials. This result, coupled with the similarity in the mean predictive effect, confirms Robinson et al.’s important study and extends it through the use of auditory cues.

Modality-dependent Responses of LIP Neurons

LIP neurons responded differently to auditory and visual cues. On average, LIP neurons were modulated more by a visual cue than by an auditory cue. However, this difference in firing rate was not due to differences in the sensory stimuli per se: the firing rates of LIP neurons in response to the same auditory and visual cues differed between the predictive-cueing task and the memory-guided saccade task (see Figs 7 and 8).

These observations limit two of the factors that could contribute to modality-dependent activity in area LIP. First, modality-dependent LIP activity does not wholly reflect differences between auditory and visual sensory processing (Brown et al., 1978a,b, 1980; Wightman and Kistler, 1993; Blauert, 1997; Recanzone et al., 1998), because LIP neurons coded the sensory stimuli differently in different tasks (cf. Figs 7 and 8). Secondly, modality-dependent activity does not wholly reflect modality-dependent differences in the behavioral responses of the monkeys (Zambarbieri et al., 1982; Stein and Meredith, 1993; Baro et al., 1995; Groh and Sparks, 1996; Yao and Peck, 1997; Boucher et al., 2004), because the behavioral response of the monkeys during the predictive-cueing task was not dependent on cue modality.

Instead, it appears that a primary factor underlying modality-dependent activity may be the relationship between a stimulus and the behavioral task (behavioral context) (Linden et al., 1999; Kusunoki et al., 2000). Other LIP studies also indicate that auditory processing in area LIP is contextually dependent. For instance, in monkeys that have not been operantly trained to associate an auditory stimulus with a behavioral response, the response of a LIP neuron to an auditory stimulus depends on the presence or absence of a central fixation light (Gifford and Cohen, 2004). Similarly, Linden et al. (1999) reported that LIP neurons were modulated more by an auditory stimulus that signaled the location of a future eye movement than by an auditory stimulus that did not signal the location of a future eye movement. While data from that study suggested that visual responses were not dependent on the demands of the task, several important studies have highlighted the significant context-dependent nature of visual responses in area LIP (Colby et al., 1996; Gottlieb et al., 1998; Ben Hamed et al., 2002; Toth and Assad, 2002; Bisley and Goldberg, 2003).

LIP Neurons Code Different Quantities During a Task

Initially, LIP neurons responded differently to auditory and visual cues. But, as the task unfolded, these modality-dependent differences were minimized. These observations suggest that the quantity coded by LIP neurons during a behavioral task is dynamic and changes as the sensory, cognitive, or motor demands of a task change (Cohen et al., 2002; Cohen and Andersen, 2004).

This type of dynamic coding has been seen in other parietal studies. For instance, during delayed reaches to auditory targets, the amount of information contained in the firing rate of neurons in the parietal reach region is initially less than that observed during delayed reaches to visual targets (Cohen et al., 2002). However, as the trial evolves, the amount of information during auditory trials increases and, while the monkeys are reaching, is comparable to that observed during visual trials. This pattern of modality-dependent to modality-independent coding is reminiscent of that observed in this study (Fig. 7). In another recent study, LIP neurons initially code parameters of a visual object but, in later portions of the trial, neural activity is better correlated with the direction of a planned saccade (Sabes et al., 2002). Finally, other LIP studies have demonstrated that LIP neurons initially code reward expectancy (Platt and Glimcher, 1999) or aspects of a perceptual decision (Shadlen and Newsome, 1996, 2001), but, in later epochs of the task, LIP activity codes saccade direction.

The Role of Area LIP in the Predictive-cueing Task

Evidence for a relationship between LIP activity and saccadic latency is equivocal. Several studies have suggested that changes in LIP activity are not related to changes in saccadic latency or other saccade dynamics (Gottlieb and Goldberg, 1999; Powell and Goldberg, 2000; Snyder et al., 2002; Wardak et al., 2002). Other studies, however, have demonstrated a relationship between LIP function and saccadic latency (Li et al., 1999; Roitman and Shadlen, 2002).

Our data lend support to the hypothesis that LIP activity correlates with saccadic latency. During the predictive-cueing task, we found that, at a population level, firing rate correlated inversely with saccadic latency when the cue or the target was at the IN location (Fig. 10). Also, consistent with this observation, during the response period, the firing rate of LIP neurons was higher during predictive trials than during non-predictive trials (Fig. 12); this higher firing rate may, however, stem from a memory trace of the cue at the IN location (Gnadt and Andersen, 1988) that persists following cue offset.

The putative roles of area LIP in attentional processing (Robinson et al., 1995; Colby and Goldberg, 1999; Davidson and Marrocco, 2000; Kusunoki et al., 2000) and eye-movement planning (Snyder et al., 1997) may underlie its role in mediating differences in mean saccadic latency during the predictive-cueing task (Fig. 3). The attention hypothesis posits that neural activity reflects the location of the monkeys’ attentional focus; the monkeys attended to the cue locations because of their relevance to the predictive-cueing task (Kusunoki et al., 2000). Saccadic latency was shorter during predictive trials because the monkeys’ attentional locus was already at the target location (Posner, 1980). In contrast, during non-predictive trials, the monkeys’ mean saccadic latency was longer because the monkeys had to switch their attentional locus from the cue location to the target location before they could plan and generate a saccade. The eye-movement-planning hypothesis posits that neural activity reflects the location of an eye-movement plan. Saccadic latency was shorter during predictive trials because the monkeys had already formed an eye-movement plan to the visual-target location. During non-predictive trials, however, the monkeys’ mean saccadic latency was longer because the monkeys had to cancel the eye-movement plan to the cue location and form a new plan to the target location before a saccade could be generated. Our data cannot distinguish between the attention and eye-movement-planning hypotheses, since the monkeys’ attentional locus and eye movements were directed to the same location; further work is needed to determine whether these factors or others are causally related to the observed patterns of neural activity and behavioral responses.

We thank J. DiGiorgianni, C. Siegel and A. Kim for experimental assistance, C. Spence, J. Groh, H. Hersh and J. Linden for helpful discussions and comments on previous versions of the manuscript, T. Laroche for assistance with MRI and A. Underhill for her exceptional technical support. The suggestions of the two anonymous reviewers were also greatly appreciated. This work was supported by grants from the National Institutes of Health and the Whitehall Foundation and a Burke Award to Y.E.C.

Figure 1. Behavioral tasks. (A) Visually guided saccade task. The monkey first fixated a central light. Following a random delay (as indicated by the broken line) of 400–700 ms, a visual target was presented and the monkey saccaded to its location. (B) Memory-guided saccade task. The monkey first fixated a central light. Following a random delay of 400–700 ms, a sensory cue was presented for 300 ms; 400–700 ms after offset of this sensory cue, the central light was extinguished signaling the monkey to saccade to the location of the remembered cue. (C) Predictive-cueing task. The monkey first fixated a central light. Next, after a random delay, a 300 ms sensory cue, either an auditory stimulus or a visual stimulus, was presented. After a random delay of 200–250 ms, the visual target was introduced and the monkey saccaded to the visual-target location. (D) Schematics of different configurations of the predictive-cueing task. The locations of the sensory cues and the visual targets are schematized relative to the IN location of a neuron’s response field. The location of the response field that elicited the maximal response was designated as the IN location. The location that was 180° contralateral to the IN location was designated as the OUT location. During predictive trials (LEFT column), a sensory cue and the visual target are at the same location. They are either (a) at the IN location or (b) at the OUT location. During non-predictive trials (RIGHT column), a sensory cue and the visual target are at different locations: (c) a sensory cue at the IN location and the visual target at the OUT location or (d) a sensory cue at the OUT location and the visual target at the IN location.

Figure 2. Auditory stimulus. The amplitude waveform (A) and the spectrogram (B) of one of the sound bursts used to create the auditory stimulus.

Figure 3. The effect of the sensory cue on mean saccadic latency. The difference between mean saccadic latency during non-predictive trials and predictive trials (the ‘predictive effect’) is plotted in grey for the visual predictive-cueing task and in black for the auditory predictive-cueing task. The distributions in this figure are based on the behavioral data that were collected while recording from 96 LIP neurons during the visual predictive-cueing task and 82 LIP neurons during the auditory predictive-cueing task. The dotted line indicates zero predictive effect.

Figure 4. Verification of recording area. A magnetic resonance image shows the recording cylinder from monkey Z above the intraparietal sulcus. The arrow also indicates the tip of a recording electrode that is along the lateral bank of the intraparietal sulcus (IPS). Inset shows the coronal plane of the MRI image.

Figure 5. Responses of a LIP neuron during the memory-guided saccade task. Rasters and spike-density time histograms are arranged as a function of cue location. The histograms and rasters are aligned relative to the onset of the 300 ms visual cue; the solid grey line in each panel indicates stimulus onset. The histograms were generated by binning spike times into 40 ms bins.

Figure 6. Response of a LIP neuron during (A) the visual predictive-cueing task and (B) the auditory predictive-cueing task. In each panel, the two schematics above the neural data show the locations of the sensory cue and the visual target relative to the IN location of the response field. In both A and B, the neural activity in the LEFT column was generated during predictive trials when both the sensory cue and the visual target were located at the IN location of the response field. The neural activity in the RIGHT column was generated during predictive trials when both the sensory cue and the visual target were located at the OUT location. In each panel, the rasters and spike-density histograms are aligned relative to the onset of the sensory cue. The grey rectangle in each panel spans the duration of the sensory cue.

Figure 7. Population response profiles of LIP activity during the predictive-cueing task. The data in the TOP row were generated during predictive trials when both the cue and the target were at the IN location. The data in the BOTTOM row were generated during predictive trials when both the cue and the target were at the OUT location. Data are aligned relative to different epochs of the task: (A, D) cue onset, (B, E) target onset, and (C, F) saccade onset. In each panel, the alignment point is indicated by a dotted line. As indicated by the legend in B, the response profiles and error bars in grey were generated from LIP activity recorded during the visual predictive-cueing task, and the response profiles and error bars in black were generated from LIP activity recorded during the auditory predictive-cueing task. Error bars are standard errors of the mean. The black rectangle in A indicates cue duration. Data are normalized within each panel.
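The population profiles are averages of normalized single-neuron responses. The caption specifies only that data are normalized within each panel, so the per-neuron peak normalization in the sketch below is an assumption:

```python
import numpy as np

def population_profile(rates):
    """rates: (n_neurons, n_bins) array of trial-averaged firing rates.
    Scale each neuron to its own peak (assumed normalization), then
    return the mean normalized response and its standard error of the
    mean across neurons."""
    normalized = rates / rates.max(axis=1, keepdims=True)
    mean = normalized.mean(axis=0)
    sem = normalized.std(axis=0, ddof=1) / np.sqrt(rates.shape[0])
    return mean, sem
```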

Figure 8. Population response profiles of LIP activity during memory-guided saccades. The two panels illustrate data collected when the cue was (A) at the IN location and (B) at the OUT location. Data are aligned relative to cue onset; in each panel, the alignment point is indicated by a dotted line. As indicated by the legend in B, the response profiles and error bars in grey were generated from LIP activity recorded during the visual memory-guided saccade task, and the response profiles and error bars in black were generated from LIP activity recorded during the auditory memory-guided saccade task. Error bars are standard errors of the mean. The black rectangle in A indicates cue duration.

Figure 9. Correlation analysis of 65 LIP neurons during the cue period of the visual and auditory predictive-cueing tasks. On a neuron-by-neuron basis, the modulation-index value from the visual predictive-cueing task is plotted on the abscissa and the modulation-index value from the auditory predictive-cueing task is plotted on the ordinate. The dotted grey line represents the expected relationship if the neurons had the same modulation-index values in the two tasks. The solid black line is the two-dimensional least-mean-squares line relating the modulation-index values from the visual predictive-cueing task to those from the auditory predictive-cueing task.
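A two-dimensional least-mean-squares line minimizes perpendicular, rather than vertical, distances to the data (total least squares), which is appropriate when both axes carry measurement error. A minimal sketch, in which the modulation index is assumed to be the standard contrast (IN – OUT)/(IN + OUT); neither function name comes from the paper:

```python
import numpy as np

def modulation_index(rate_in, rate_out):
    """Contrast index: +1 if the neuron fires only at the IN location,
    -1 if only at the OUT location (assumed definition)."""
    return (rate_in - rate_out) / (rate_in + rate_out)

def tls_slope(x, y):
    """Slope of the two-dimensional least-mean-squares (total least
    squares) line: the first principal axis of the centered data."""
    data = np.column_stack([x - np.mean(x), y - np.mean(y)])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data.T))
    major = eigvecs[:, np.argmax(eigvals)]  # axis of greatest variance
    return major[1] / major[0]
```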

Figure 10. Population histograms of the correlation between firing rate and saccadic latency during the visual and auditory predictive-cueing tasks. (A) Distributions of correlation coefficients during predictive trials when the cue and the target were at the IN location: (a) visual cue-period activity versus saccadic latency, (b) auditory cue-period activity versus saccadic latency, (c) response-period activity versus saccadic latency during the visual predictive-cueing task and (d) response-period activity versus saccadic latency during the auditory predictive-cueing task. (B) Distributions of correlation coefficients during non-predictive trials when the target was at the IN location: (a) response-period activity versus saccadic latency during the visual predictive-cueing task and (b) during the auditory predictive-cueing task. (C) Distributions of correlation coefficients during predictive trials when the cue and the target were at the OUT location, arranged as in A. Each of the distributions in this figure is based on the behavioral and neural data collected while recording from 96 LIP neurons during the visual predictive-cueing task and 82 LIP neurons during the auditory predictive-cueing task. The grey line in each panel indicates the mean of each distribution. Panels with asterisks indicate distributions whose mean values are significantly (P < 0.05) less than zero.

Figure 11. Response of a LIP neuron during (A) the visual predictive-cueing task and (B) the auditory predictive-cueing task. In both A and B, the two schematics to the right of the neural data show the locations of the sensory cue and the visual target relative to the IN location of the response field during predictive (top) and non-predictive (bottom) trials; the data in this figure represent only those trials in which the visual target was at the IN location. In each panel, the rasters and spike-density histograms in black show data collected during predictive trials, whereas those in grey show data collected during non-predictive trials. The rasters and spike-density histograms in each panel are aligned relative to the onset of the visual target; the dashed line indicates visual-target onset.

Figure 12. Frequency histogram of the modulation index during the response period. Data from neurons recorded during the visual predictive-cueing task (n = 96) are shown in A and data from neurons recorded during the auditory predictive-cueing task (n = 82) are shown in B. The grey line in each panel indicates a modulation-index value of zero.

Table 1
Summary of mean correlation values (and t-test summary statistics) relating firing rate with saccadic latency

                                                    Cue period                                                              Response period
                                                    Visual PCTa                         Auditory PCTa                       Visual PCTa                         Auditory PCTa
Predictive trials at the IN location                –0.017 [t(96) = –0.49, P > 0.05]    –0.025 [t(82) = –1.86, P < 0.05]    –0.061 [t(96) = –3.64, P < 0.05]    –0.05 [t(82) = –2.75, P < 0.05]
Non-predictive trials, target at the IN location                                                                            –0.08 [t(96) = –2.15, P < 0.05]     –0.05 [t(82) = –1.54, P > 0.05]
Predictive trials at the OUT location               0.005 [t(96) = 0.79, P > 0.05]      0.017 [t(82) = –0.67, P > 0.05]     –0.012 [t(96) = –1.76, P > 0.05]    –0.005 [t(82) = 0.42, P > 0.05]

aPredictive-cueing task (PCT).
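Each entry in Table 1 pairs a per-neuron Pearson correlation (firing rate versus saccadic latency, across trials) with a one-sample t-test on the resulting population of coefficients. A minimal sketch of that analysis; note that scipy's ttest_1samp is two-tailed, whereas the table's P values appear to be one-tailed (e.g. t(82) = –1.86 is significant only directionally):

```python
import numpy as np
from scipy import stats

def latency_correlation_summary(neurons):
    """neurons: list of (firing_rates, latencies) pairs, one per cell,
    each with one value per trial. Returns the mean Pearson r across
    neurons and a one-sample t-test of that mean against zero.
    ttest_1samp is two-tailed; halve p for a directional test."""
    rs = [stats.pearsonr(rates, lats)[0] for rates, lats in neurons]
    t_stat, p_value = stats.ttest_1samp(rs, 0.0)
    return np.mean(rs), t_stat, p_value
```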
