Young and old adults underwent positron emission tomographic scans while encoding pictures of objects and words using three encoding strategies: deep processing (a semantic living/nonliving judgement), shallow processing (a size judgement) and intentional learning. Picture memory exceeded word memory in both the young and old groups, and there was an age-related decrement only in word recognition. During the encoding tasks, three patterns of brain activity were found that differentiated the stimulus types and the encoding strategies. The stimulus-specific pattern was characterized by greater activity in extrastriate and medial temporal cortices during picture encoding, and greater activity in left prefrontal and temporal cortices during word encoding. The older adults showed this pattern to a significantly lesser degree. A second pattern, characterized mainly by differences in prefrontal cortex, distinguished deep processing from intentional learning of words and pictures; this pattern too was of significantly lesser magnitude in the old group. A final pattern identified areas with increased activity during deep processing and intentional learning of pictures, including left prefrontal and bilateral medial temporal regions; there was no group difference in this pattern. These results indicate age-related dysfunction in several encoding networks, with sparing of one specifically involved in more elaborate encoding of pictures. These age-related changes appear to affect verbal memory more than picture memory.
Memory problems are a common complaint of older people. Over the years research has shown that older individuals have particular difficulty with episodic memory, defined as the conscious recollection of particular episodes or events that have occurred in a person's experience (Tulving, 1983). Age-related difficulties in episodic memory may be the result of a reduced ability to encode new material (Craik and Byrd, 1982). Some have suggested that these encoding difficulties arise because older individuals are less able to spontaneously initiate adequate encoding strategies, or to organize material in their attempt to learn it (Hultsch, 1969; Sanders et al., 1980). On the other hand, older people may also have less efficient retrieval strategies (Burke and Light, 1981). This is suggested by the finding that the memory performance of older people is most affected when memory is tested using free recall of studied items, with smaller age-related reductions when memory is tested using recognition paradigms (Schonfield and Robertson, 1966; Rabinowitz, 1984; Craik and McDowd, 1987). Recall requires more effort and places a greater demand on search strategies (Macht and Buschke, 1983; Craik and McDowd, 1987), whereas recognition allows the memory decision to be based on familiarity with the stimuli as well as recollection of past experience with the stimuli (Tulving, 1985; Jacoby, 1991). Familiarity is generally unchanged or even increased in older subjects, in contrast to recollection, which is impaired with age (Parkin and Walter, 1992; Jennings and Jacoby, 1997).
Although little is known about the brain mechanisms that accompany these age-related changes in memory, numerous studies in the past few years have used neuroimaging techniques to examine the neural correlates of episodic memory in young adults (Buckner, 1996; Cabeza and Nyberg, 1997; Grady, 1999). These experiments have shown that left prefrontal activation often occurs during encoding and that right prefrontal activation is seen preferentially during retrieval, whether by recall or recognition (Kapur et al., 1994; Shallice et al., 1994; Tulving et al., 1994; Haxby et al., 1996; Rugg et al., 1996). Medial temporal activation has been found during encoding, mostly of non-verbal stimuli (Haxby et al., 1996; Stern et al., 1996; Tulving et al., 1996), and during retrieval (Squire et al., 1992; Klingberg et al., 1994; Schacter et al., 1995, 1996a; Nyberg et al., 1996). Posterior regions of cortex also are active during episodic memory, such as inferior temporal cortex during encoding (Haxby et al., 1996) and temporoparietal cortex during retrieval (Nyberg et al., 1995).
In the few studies that have examined brain activity in older adults during memory tasks, a number of differences in comparison to young adults have emerged. One experiment involving encoding and recognition of faces in young and old adults found poorer recognition performance and reduced activation in a number of brain areas in the older group (Grady et al., 1995). During encoding, reduced activation in old individuals was seen in right hippocampal gyrus, and left prefrontal and temporal regions. During recognition of the faces, both groups showed equivalent activation of right prefrontal cortex, but young adults showed greater activation of parietal and ventral occipital cortices. A more recent study (Cabeza et al., 1997a) that examined memory for word-pairs found different patterns of brain activity in old and young adults during both encoding and retrieval. During encoding, old adults showed less activation in left prefrontal and temporal cortex, but activated bilateral insula to a greater extent than did young adults. During cued recall, the old group had increased activity bilaterally in prefrontal cortex, in contrast to the young adults, who showed activation only in right prefrontal cortex. In addition, the old adults were able to recall as many words as the young adults, probably because they were given an adequate strategy for encoding the pairs (semantic association). Bilateral prefrontal activation in older adults also has been found during recognition of word lists, although in this experiment the older people had poorer memory scores (Madden et al., 1999). Schacter et al. reported that young and old adults engaged in cued recall of previously studied words showed equivalent activation in medial temporal regions (Schacter et al., 1996b). Both groups also had right prefrontal activation, but the prefrontal areas active in the older adults were posterior to those seen in young adults. In addition, the elderly group had poorer recall than did the younger adults.
These experiments, although few in number, indicate that the neural correlates of age-related changes in memory are quite complex. Old adults can show equivalent, reduced or increased brain activity compared to young adults, depending on the task, the brain regions under consideration and perhaps on how well they can perform the task. In addition, functional interactions among brain areas during perceptual and memory tasks differ between young and old adults (McIntosh et al., 1994; Horwitz et al., 1995; Cabeza et al., 1997b). These results have led us to suggest that such differences may reflect a functional reorganization of the brain areas participating in these cognitive processes in older individuals. These differences in brain activity could reflect different behavioral strategies in the two age groups, or perhaps different ‘brain strategies' even if the behavioral strategies were ostensibly the same. The outcome of this reorganization might not always be the same, however. For example, recruitment of frontally mediated monitoring of responses (Petrides, 1994) might interfere with task performance by the elderly under some conditions, but in other cases might aid cognitive performance, thus acting in a compensatory role to ameliorate the effects of reduced function in the networks that normally operate during the task.
The aim of the current experiment was to utilize measures of regional cerebral blood flow (rCBF) obtained with positron emission tomography (PET) to examine the use of alternative brain activity patterns by the elderly during episodic encoding. We manipulated encoding and subsequent recognition performance by varying both the type of material to be encoded and the strategy used during encoding. We used words and pictures of common objects as the stimuli since previous work has shown better memory for pictures in both young and old adults (Shepard, 1967; Standing et al., 1970; Paivio, 1971; Winograd et al., 1982; Park et al., 1983). In addition, older adults seem to have little or no reduction in picture recognition ability compared to young subjects as long as the pictures depict a meaningful object or scene (Craik and Byrd, 1982; Park et al., 1984; Park et al., 1986; Chalfonte and Johnson, 1996). Encoding strategy was manipulated using the well-known levels of processing effect, which consists of better memory after deep encoding, such as semantic processing of items, than after shallow encoding, such as purely perceptual judgements (Craik and Lockhart, 1972; Craik and Tulving, 1975). The effect of deep versus shallow processing during encoding is usually as large in older adults as it is in younger adults (Rankin and Collins, 1985; Park et al., 1986), and may be even larger under some conditions (Craik and Simon, 1980; Backman, 1986; Park et al., 1990). Our hypothesis was that in encoding conditions associated with better memory performance, such as picture encoding or use of the semantic encoding task, older adults would show altered brain activity patterns, involving either recruitment of brain areas not utilized to any great extent by young adults or use of memory-related brain regions in novel ways. On the other hand, we expected that encoding conditions that did not adequately support recognition in the elderly would be characterized solely by reduced activity in memory-related regions.
Materials and Methods
Twelve young right-handed adults (6 males, 6 females, mean age ± SD = 23.0 ± 3.5 years, range 19–28 years) and 12 older right-handed adults (6 males, 6 females, mean age ± SD = 66.2 ± 4.2 years, range 58–73 years) participated in the PET experiment. All subjects were screened to rule out any disease or medication that might affect brain function. The two groups had equivalent years of education (young 16 ± 2 years; old 15 ± 4 years) and mental status scores [young 29 ± 1, old 29 ± 2 (Folstein et al., 1975)]. Subjects needing corrective lenses to view the stimuli wore their own glasses during the experiment. An additional group of subjects, comparable to the PET group in age and years of education (12 young and 12 older adults), was given the behavioral tests only. The data from this group were included with those of the PET group in the analysis of recognition accuracy. This experiment was conducted with the understanding and written consent of each participant.
The stimuli used in the experiment were line drawings of familiar objects, taken from the stimulus set developed by Snodgrass and Vanderwart (Snodgrass and Vanderwart, 1980), or words corresponding to objects in this set of stimuli (no item appeared as both a word and a picture). All stimuli were presented on a computer monitor in black on a white background. There were three encoding tasks for both words and pictures, requiring three lists of pictures and three lists of words. All lists were matched for word frequency, word length, familiarity and picture complexity, regardless of whether the list was presented as words or pictures. For two of the encoding conditions, subjects were instructed to make certain decisions about the stimuli, but were not explicitly asked to remember them. One such condition involved nonsemantic or shallow processing of the stimuli (size of picture or case of letters), and the other required semantic or deep processing of the stimuli (living/nonliving decision). During the third condition, subjects were instructed to memorize the pictures or words and were told that they would be tested on these items (intentional learning). For each condition, stimuli were presented for 2 s each, with a 2 s interval between stimuli. Following the scans, subjects completed two recognition memory tasks: one for stimuli encoded as words and one for stimuli encoded as pictures. Each recognition test consisted of 10 targets from each of the three encoding conditions for words or pictures and 30 distracters (60 items total). All stimuli in the recognition tasks were presented as words, regardless of whether they were originally presented as words or pictures. This was done to prevent ceiling effects for picture recognition.
Six PET scans, with injections of 40 mCi of H₂¹⁵O each and separated by 11 min, were performed on all subjects. Scans were performed on a GEMS PC2048–15B tomograph, which has a reconstructed resolution of 6.5 mm in both transverse and axial planes. This tomograph allows 15 planes, separated by 6.5 mm (center to center), to be acquired simultaneously. Emission data were corrected for attenuation by means of a transmission scan obtained at the same levels as the emission scans. Head movement during the scans was minimized with a thermoplastic mask that was molded to each subject's head and attached to the scanner bed. Each task started 20 s prior to isotope injection and continued throughout the 1 min scanning period. For the six encoding scans the three lists for words or pictures were assigned to the three encoding conditions in a counterbalanced fashion, and the order of conditions also was counterbalanced across subjects. During all scans subjects pressed a button with the right index or middle finger to either indicate their decision about the stimulus or, during the intentional learning condition, to simply make a motor response.
Accuracy of performance during the encoding tasks (percent correct) was analyzed for the deep and shallow conditions in which a decision had to be made about the stimuli, and for the recognition tests (percent hits minus percent false alarms). Behavioral data were analyzed using a repeated-measures ANOVA with stimulus type and encoding condition as the repeated measures and group as the independent factor.
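The two performance measures described above are straightforward to compute. As a minimal sketch (the function names are ours, not from the study):

```python
def encoding_accuracy(n_correct, n_trials):
    """Percent correct on the deep (living/nonliving) or shallow (size)
    judgement trials during encoding."""
    return 100.0 * n_correct / n_trials

def corrected_recognition(n_hits, n_targets, n_false_alarms, n_distracters):
    """Recognition accuracy as percent hits minus percent false alarms,
    which corrects the raw hit rate for a subject's tendency to respond
    'old' regardless of study status."""
    return (100.0 * n_hits / n_targets
            - 100.0 * n_false_alarms / n_distracters)
```

For example, with the 30 targets and 30 distracters of each recognition test, a subject who recognized 25 targets but false-alarmed to 3 distracters would score roughly 83.3 − 10.0 = 73.3.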
PET scans were registered using AIR (Woods et al., 1992), spatially normalized [to the Talairach and Tournoux atlas coordinate system (Talairach and Tournoux, 1988)], and smoothed (using a 10 mm filter to reduce the effects of spatial autocorrelation) using SPM95 (Frackowiak and Friston, 1994). Ratios of rCBF to global CBF within each scan for each subject were computed and analyzed using partial least squares (PLS) [this technique is described more fully elsewhere (McIntosh et al., 1996; Grady et al., 1998a)]. PLS is a multivariate analysis that identifies groups of brain regions that together covary with some aspect of the experimental design, such as a difference in rCBF between encoding of pictures and encoding of words. This technique operates on the covariance between brain voxels and the experimental design to identify a new set of variables (so-called latent variables or LVs). Each LV extracted represents an experimental effect, either a main effect of task or an interaction of task and group (Grady et al., 1998a), and identifies both the pattern of task differences and the brain voxels showing that pattern. Each brain voxel has a weight on each LV, known as a salience, that indicates how that voxel is related to the LV. A salience can be positive or negative, depending on whether the voxel shows a positive or negative relation with the pattern identified by the LV. Multiplying the rCBF value in each brain voxel for each subject by the salience for that voxel, and summing across all voxels, gives a ‘brain’ score for each subject for each task condition on a given LV. The brain scores are analogous to factor scores in a factor analysis, and are an indication of how much each subject expresses the brain activity pattern for a given LV in each condition.
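In outline, the core of task PLS reduces to a singular value decomposition of the centered matrix of condition-mean images. The sketch below is an illustration of that relationship, not the published implementation; the function name, array shapes and centering details are our assumptions:

```python
import numpy as np

def task_pls(data, conditions):
    """Task PLS sketch. data is (n_scans, n_voxels); conditions gives an
    integer condition label per scan. Returns voxel saliences, task
    saliences, singular values and per-scan brain scores."""
    labels = np.unique(conditions)
    # Condition-mean matrix (n_conditions x n_voxels), centered across
    # conditions so the decomposition captures task differences only.
    M = np.vstack([data[conditions == c].mean(axis=0) for c in labels])
    M -= M.mean(axis=0)
    # Each left/right singular-vector pair is one latent variable (LV);
    # singular values reflect the strength of each effect.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    saliences = Vt.T                  # voxel weights per LV
    brain_scores = data @ saliences   # each scan's expression of each LV
    return saliences, U, s, brain_scores
```

Here each column of `brain_scores` plays the role of the per-subject, per-condition scores described above: scans from conditions that express an LV's pattern strongly project onto it with large (positive or negative) values.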
We carried out three analyses: separate within-group analyses for young and old adults and an analysis of the two groups combined which allowed direct examination of task-by-group interaction effects. Prior to the group analysis we removed the effects of any global rCBF differences between the young and old groups by regressing out the group main effect from each voxel for each subject, leaving only the residual variance that was due to the tasks. For each analysis, the significance of the LVs was assessed using a permutation test (Edgington, 1980; McIntosh et al., 1996). Post hoc contrasts to determine the main effect of task identified by each LV also were included in the permutation test. In addition to the permutation test, we determined the reliability of the saliences for the brain voxels characterizing each pattern. To do this, all saliences in each analysis were submitted to a bootstrap estimation of the standard errors (Efron and Tibshirani, 1986; Sampson et al., 1989). Voxels with a salience/standard error ratio (referred to as the reliability ratio) that was ≥2.0 on a given LV were considered to make a reliable contribution to that LV. Local maxima for the reliable brain areas on each LV were defined as the voxel with a ratio higher than any other voxel in a 2 cm cube centered on that voxel. Locations of these maxima are reported in terms of brain region, or gyrus, and estimated Brodmann area (BA) as defined in the Talairach and Tournoux atlas.
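The permutation and bootstrap steps can be sketched in the same spirit. This is a simplified stand-in for the published procedure: the helper names, the focus on the first LV alone, and the toy resampling scheme are our assumptions.

```python
import numpy as np

def first_sv(data, conditions):
    """First singular value of the centered condition-mean matrix,
    i.e. the strength of the dominant latent variable."""
    labels = np.unique(conditions)
    M = np.vstack([data[conditions == c].mean(axis=0) for c in labels])
    M -= M.mean(axis=0)
    return np.linalg.svd(M, compute_uv=False)[0]

def permutation_pvalue(data, conditions, n_perm=500, seed=1):
    """P-value for the dominant LV: the fraction of random relabelings
    whose first singular value meets or exceeds the observed one."""
    rng = np.random.default_rng(seed)
    observed = first_sv(data, conditions)
    count = sum(first_sv(data, rng.permutation(conditions)) >= observed
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

def bootstrap_reliability(data, conditions, n_boot=200, seed=2):
    """Per-voxel salience divided by its bootstrap standard error for
    the first LV (the 'reliability ratio')."""
    rng = np.random.default_rng(seed)
    labels = np.unique(conditions)

    def lv1_salience(d, c):
        M = np.vstack([d[c == lab].mean(axis=0) for lab in labels])
        M -= M.mean(axis=0)
        v = np.linalg.svd(M, full_matrices=False)[2][0]
        return v if v.sum() >= 0 else -v  # fix the arbitrary SVD sign

    observed = lv1_salience(data, conditions)
    boots = []
    for _ in range(n_boot):
        # Resample scans with replacement within each condition.
        idx = np.concatenate(
            [rng.choice(np.flatnonzero(conditions == lab),
                        size=(conditions == lab).sum())
             for lab in labels])
        boots.append(lv1_salience(data[idx], conditions[idx]))
    se = np.std(boots, axis=0, ddof=1)
    return observed / np.maximum(se, 1e-12)
```

Voxels whose ratio of salience to bootstrap standard error is at least 2.0 would then be flagged as reliable, mirroring the criterion stated above.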
To look for interactions involving group and/or stimulus type on the LVs from the combined-group analysis, we entered the brain scores into an ANOVA. However, since the original brain scores from this analysis were optimized to identify separate and orthogonal patterns for each LV, including those that specify contrasts for interaction effects, these scores were biased. To obtain unbiased brain scores we repeated the combined-group analysis coding only for the contrasts of the three encoding strategies, thereby allowing us to assess interactions of stimulus type and group. It is important to note that since the ANOVAs were carried out on the brain scores, which reflect activity across the whole image, any between-group differences apply to the entire pattern of activity and not just to any single region.
Results will be presented from the combined-group analysis, with comparisons to the within-group analyses to highlight the contribution of each age group to the combined patterns. These comparisons were made by extracting the reliability ratios from the within-group analyses for each of the regions identified by the group analysis to determine if these areas also contributed to the within-group patterns. Thus, although the task-by-group interactions calculated via the ANOVA mentioned in the previous paragraph provided the statistical significance of the between-group differences, the comparison of reliability ratios shed light on whether one group contributed primarily to the combined-group pattern or if both groups contributed equally.
Accuracy for size and living/nonliving judgements was high during the shallow and deep encoding conditions respectively for both age groups, and there was no group difference on these tasks. Pictures were encoded more accurately than words (percent correct for pictures 95.6 ± 8.0, words 92.1 ± 10.9, F = 5.0, P < 0.05). The accuracy data (percent hits – false alarms) for young and old adults during recognition of pictures and words are shown in Table 1. All three main effects of group (F = 13.0, P < 0.001), stimulus type (F = 31.5, P < 0.001) and encoding condition (F = 73.8, P < 0.001) on recognition accuracy were significant. In addition, the group by stimulus type interaction was significant (F = 4.1, P = 0.05), as was the stimulus by encoding interaction (F = 8.0, P < 0.001). Separate repeated-measures ANOVAs on words and pictures revealed a significant group effect for words (F = 16.3, P < 0.01), but not for pictures (F = 1.1, P > 0.05). Mean recognition for words differed only between the shallow and the other two conditions (F = 88.1, P < 0.001). Recognition of shallowly encoded pictures differed from the other two conditions (F = 36.2, P < 0.001), and recognition of learned pictures exceeded that of deeply encoded pictures (F = 12.9, P < 0.01).
The combined-group analysis of the six encoding conditions resulted in six significant patterns or LVs. Although LVs 4–6 were significant by the permutation test, very few brain areas showed a reliable contribution to these patterns, so only the first three LVs from the group analysis will be presented here (along with the corresponding significant LVs from the within-group analyses). Three main patterns of brain activity were found that characterized the encoding tasks: a stimulus-specific pattern that distinguished picture encoding from word encoding; a strategy-specific pattern that differentiated the deep processing condition from intentional learning; and a pattern with a task by stimulus type interaction that distinguished deep encoding and intentional learning of pictures from the other encoding conditions. The first two patterns showed age-related reductions in how strongly they were expressed in young versus old adults, whereas the third was equivalent in the two groups.
The first activity pattern in the combined-group analysis (P < 0.0001) identified an overall difference between pictures and words (post hoc contrast, P < 0.0001), and a group by stimulus type interaction (F = 7.4, P < 0.01), indicating a less distinct pattern in the older group. This stimulus-specific pattern of brain activity is illustrated in Figure 1, along with the corresponding patterns from the separate analyses of young and old adults. Table 2 lists the maxima from the regions that showed differential activity in picture and word encoding. Both the within- and between-group patterns consisted of increased activity during the picture encoding conditions in ventral and dorsal extrastriate regions that was bilateral, but more extensive in the right hemisphere (Table 2). However, it is clear from the reliability ratios in the two groups (Table 2) and from Figure 1 that the difference in activity between picture and word encoding was more pronounced in young adults. In addition, the contribution of the medial temporal regions to the combined-group pattern during picture encoding was accounted for primarily by increased activity in these areas in young adults (Fig. 1 and Table 2).
Encoding of words, compared to pictures, resulted in greater activity in prefrontal, premotor, temporoparietal and cingulate regions that was bilateral but more prominent in the left hemisphere (Fig. 1 and Table 2). Young adults had more extensive increases in activity during word encoding compared to the older adults. Also, not all of the regions identified in the group analysis as being more active during word encoding were found in the within-group pattern for the old adults (Table 2). Thus, this brain activity pattern from the group analysis identified two encoding networks, one that was more active during picture encoding and another that was more active during word encoding. Activity in these stimulus-specific networks was significantly less in the older adults compared to the younger adults.
Another group pattern (P < 0.0001) identified brain regions whose activity differentiated the deep encoding condition from the other two conditions, primarily intentional learning, for both pictures and words (post hoc contrast: P < 0.0001). There also was a group by encoding interaction on this LV (F = 6.5, P < 0.01), indicating a greater strategy-specific difference in the young adults (Fig. 2). Areas with increased activity during the deep encoding conditions were exclusively in the left hemisphere, including anterior prefrontal regions, cingulate, hippocampus and thalamus. The regions with increased activity during the intentional learning conditions, compared to deep processing, were in bilateral motor/premotor cortex, bilateral inferior parietal regions, and posterior cingulate, occipital and mid-dorsolateral prefrontal cortex in the right hemisphere (Table 3). Differential activity in these regions was seen during the deep encoding and learning conditions for both pictures and words, and to a greater degree in the young adults. In fact, the older adults expressed this pattern only weakly, as can be seen clearly in Figure 2 and from the fact that none of the maxima identified by this group pattern showed a reliable contribution to the within-group LV in the older adults (Table 3).
A third activity pattern (P < 0.0001) identified differences related to the levels-of-processing (LOP) effect (i.e. shallow versus learn and deep, post hoc contrast: P < 0.05). However, the encoding by stimulus type interaction also was significant (F = 6.2, P < 0.01). Figure 3 shows that the LOP effect was seen only for picture encoding; in effect, this pattern differentiated the deep encoding and intentional learning conditions for pictures from all the other conditions. This was the only LV from the combined-group analysis that had no significant effect of age. The regions from the combined-group pattern that showed increased activity during the deep encoding and learning conditions for pictures were left inferior prefrontal cortex, bilateral extrastriate cortex, both ventral and posterior regions, bilateral medial temporal cortices, including the left hippocampus, and left dorsal anterior cingulate (Table 4). The extrastriate and medial temporal regions identified by this LV partially overlapped those identified in young adults during picture encoding by LV1 (compare Figs 1 and 3). Both young and old adults contributed to this group pattern and showed increased activity in left prefrontal and occipitotemporal areas in their separate patterns (Fig. 3). In addition, the medial temporal regions were identified in both the young and old adults when analyzed separately as well as in the group pattern (Fig. 3 and Table 4). Also identified were a number of areas with greater activity during shallow picture encoding and the word conditions, compared to the two more elaborate picture-encoding conditions. Most of these were in the right hemisphere, including prefrontal and parietal cortices (Table 4). This pattern of brain activity therefore distinguished an effect of deeper processing of pictures, an effect that was of equal magnitude in young and old adults.
To summarize the imaging results, the group analysis showed a brain activity pattern that characterized an overall difference between pictures and words, involving greater activity in extrastriate and medial temporal regions during picture encoding and greater activity in frontotemporal regions during word encoding. This distinction based on stimulus type was significantly smaller in the older adults. The group analysis also revealed a pattern that differentiated the deep encoding and intentional learning conditions from one another, for both pictures and words. This pattern involved activity in right prefrontal, premotor and parietal regions during learning and left anterior prefrontal and hippocampal activity during deep processing. The older adults again showed a reduction in the magnitude of this pattern compared to the young adults. Finally, the only pattern that did not show a group difference was one that distinguished a levels-of-processing effect for pictures. This pattern identified a set of prefrontal and medial temporal areas that were active selectively during the deeper picture encoding conditions in both young and old groups.
The behavioral results of this experiment are consistent with reports in the literature that both young and old adults have better recognition for pictures than for words, even, as in this case, when the stimuli were presented as pictures during encoding but tested in their word form. In addition, both deep and shallow encoding were carried out more accurately for pictures than for words. These two findings support the idea that superior memory for pictures occurs because pictures automatically engage multiple representations and associations with other knowledge about the world, thus encouraging a more elaborate encoding than occurs with words (Paivio, 1971; Craik and Tulving, 1975; Nelson, 1979). This characteristic of pictures could account for greater accuracy both at encoding and recognition, although in the case of the current experiment the advantage could only have occurred at the encoding stage since words were seen during the recognition tasks. This encoding advantage afforded by pictures may also underlie the finding that learned pictures were remembered better by both groups than pictures processed in the deep encoding condition, whereas these two conditions resulted in equivalent performance for words. That is, the extra information provided by pictures can be utilized more effectively when subjects are instructed to intentionally learn them than when making a semantic decision. The benefit of pictorial material for memory found in this experiment was actually larger for the older adults, because they were able to remember the pictures as well as young adults did, but were less able to remember the words.
Our data also show that this group of elderly adults was capable of utilizing a semantic strategy provided to them to learn either pictures or words, and the benefit of this strategy over a shallow, perceptual encoding approach was the same in the older group as in the younger group. Perhaps surprisingly, the old adults also were capable of generating effective strategies during the intentional learning condition that were at least as effective as the semantic strategy. This is a somewhat unexpected finding given previous evidence of better memory after semantic processing than after intentional learning in older adults (Bower and Karlin, 1974; Craik and Simon, 1980). This difference could be due to the inherent variability in older adults' performance on cognitive tasks or to stricter health screening criteria used in this study.
The behavioral finding of superior memory for pictures was accompanied by a brain pattern that differentiated picture encoding from word encoding. Both young and old adults showed this pattern of brain activity, although the distinction was smaller in magnitude in the old group. These patterns are essentially identical to those previously reported in the young group (Grady et al., 1998b), and similar to those found in other neuroimaging studies of picture and word processing (Menard et al., 1996; Martin et al., 1997). The rCBF pattern of extrastriate activity during picture encoding and frontal, parietal and temporal activity during word encoding undoubtedly reflects the basic difference between the complex visual nature of pictures and the verbal/linguistic nature of words. One interesting difference between old and young adults in the picture-encoding pattern was the reduction of medial temporal activation in the old group. If, as we suggested previously (Grady et al., 1998b), activity in medial temporal cortex during encoding of pictures accounts at least in part for the superior memory for pictures in the young adults, then it would appear that the absence of this activity in the old adults was not detrimental to their ability to remember the pictures. The visual processing of the pictures carried out by extrastriate cortex, which was found in both groups, may be sufficient for maintaining picture memory in older adults despite reduced activity in the picture-encoding system as a whole. The age-related reduction in the word-encoding network was most apparent in temporoparietal regions that are important for semantic and other types of linguistic processing (Wise et al., 1991; Demonet et al., 1992; Bookheimer et al., 1995). The reduced influence of these areas in the activity of the word-encoding network may reduce encoding efficiency enough, in conjunction with other changes enumerated below, to affect later recognition of the words.
Another brain pattern differentiated the deep encoding condition primarily from intentional learning of both words and pictures, and also was smaller in magnitude in the old adults. There was increased activity in left hemisphere regions including anterior and medial prefrontal cortex during semantic processing and in right dorsolateral prefrontal, motor and parietal regions during learning. This pattern was similar to one reported previously for the young adults (Grady et al., 1998b), which is not surprising given that this pattern in the combined-group analysis is accounted for mainly by the young adults. This result indicates some specialization within prefrontal cortex for the particular way in which stimuli are processed during semantic processing and intentional learning, which are distinct despite the similar improvement in recognition performance after these encoding strategies compared to recognition after shallow encoding. The specific role of left anterior prefrontal cortex in deep processing of words and pictures is unknown, although other studies of semantic processing of words (Mummery et al., 1998), and processing of words in relation to the self (Craik et al., 1999), have found activity in similar regions of left anterior prefrontal cortex. Deep encoding also was accompanied by activity in the left hippocampus, consistent with previous findings of an association between activity in this region and retrieval of verbal material after deep encoding (Nyberg et al., 1996) and with reports of activation in this region during semantic processing of words (Mummery et al., 1998) or encoding of meaningful stimuli compared to meaningless stimuli (Martin et al., 1997). The increased activity during learning in motor/premotor and parietal cortices may reflect subvocal rehearsal (Smith and Jonides, 1997), consistent with subject reports of their use of this tactic during this condition.
The fact that this strategy-specific pattern characterized both pictures and words in the young adults suggests that, aside from the overall difference between the brain systems that mediate encoding for different types of material (seen in the first LV), young adults use different networks for different types of processing, but that these networks operate on more than one type of material. The finding that older adults showed a significant reduction in the pattern that differentiated deep encoding from learning suggests that the distinction between these two encoding strategies is much less pronounced in the elderly.
The pattern of brain activity that differentiated deeper processing of pictures from the other tasks was the only one that was equivalent in the young and old adults. More elaborate picture encoding was accompanied by activity in left prefrontal, medial temporal and extrastriate cortices. These areas all appear to be critical for perceiving and recognizing visual information, as indicated by other imaging studies (Haxby et al., 1996; Stern et al., 1996; Martin et al., 1997; Kelley et al., 1998), and experiments involving nonhuman primates (Ungerleider and Mishkin, 1982; Desimone and Ungerleider, 1989; Eichenbaum et al., 1992; Squire, 1992; Petrides, 1994). It is therefore not surprising to find these areas involved in more effortful encoding of complex visual stimuli like the pictures used in this experiment. An interesting aspect of this pattern of brain activity is that it bears a similarity to multiple systems found in the young adults, but to only one in the old adults. This group pattern was similar to a previously described pattern in the young adults that differentiated the learning condition for both pictures and words from the deep and shallow conditions (Grady et al., 1998b). There also was some overlap between the ventral occipital and medial temporal regions found on this pattern and those that distinguished overall picture encoding in the young adults. This is in contrast to the single pattern from the older adults that differentiated an LOP effect only for pictures. Thus, the network of regions supporting elaborate picture encoding in older adults may represent a rearrangement of multiple networks that function independently in young adults to form a single system that mediates only picture encoding and only when this encoding is relatively effortful. That such a rearrangement does not exist for words but only for pictures may help to explain the preserved picture memory in the old group. 
In addition, the fact that older adults utilize the medial temporal areas primarily during more effortful and elaborate picture processing, whereas younger adults use these areas during shallow encoding as well as during deeper processing, suggests that medial temporal regions function with visual regions in young adults to carry out a perceptually mediated memory process, whereas in older adults these areas perform a more associative or contextual function.
The implications of these age-related differences in brain activity patterns are intriguing. The fact that the increase of rCBF during picture encoding in general compared to rCBF during word encoding was less in the older adults seems not to have affected their ability to adequately encode the pictures, at least as reflected by their recognition performance. This is in contrast to the reduced activity found in the older adults during word encoding that was followed by reduced recognition of words. How then do we reconcile the behavioral and the rCBF results? One interpretation is that the data reflect a general dedifferentiation in the distinction between brain mechanisms used for encoding of various types of material in the elderly (Baltes et al., 1980; Baltes and Lindenberger, 1997) that has little impact on later recognition ability for complex but meaningful visual stimuli, such as pictures of objects, but does affect word memory. Another possibility is that increased activity in other memory systems, such as that identified during deep encoding and memorization of pictures, is able to compensate for reductions during picture encoding but not word encoding. It may be that engagement of critical encoding areas, such as prefrontal and medial temporal regions, can occur under some conditions in the elderly and facilitate memory function, although perhaps under different conditions from those that activate these areas in young adults or in conjunction with different brain areas. In the present experiment the conditions that engage these regions are those that provide the most support for memory performance, i.e. pictorial information encoded with strategies that encourage elaboration. This is consistent with the notion that environmental support for memory lessens age-related changes (Craik and Bosman, 1992). 
In addition, this type of result highlights the importance of assessing the impact of age on various brain areas mediating cognitive processing in the context of co-occurring activity elsewhere in the brain. That is, activity in regions such as the medial temporal lobes cannot be assessed in isolation, but must be considered in the context of both the experimental conditions that elicit this activity and the other areas that show a similar covarying activity pattern. Future experiments will need to address the issue of brain activity during the recognition process, where age-related changes no doubt also have an effect, as well as determine which aspect of picture stimuli accounts for the superior memory for these stimuli.
| Encoding task | Pictures, young | Pictures, old | Words, young | Words, old |
|---|---|---|---|---|
| Shallow | 49 ± 3 | 41 ± 4 | 28 ± 5 | 9 ± 3 |
| Deep | 57 ± 4 | 55 ± 3 | 55 ± 3 | 40 ± 4 |
| Learn | 68 ± 3 | 64 ± 4 | 58 ± 4 | 42 ± 5 |

Values are mean ± SE of percent hits − percent false alarms. Data are from subjects in the behavioral and PET studies (see Materials and Methods): young n = 23 (one PET subject had missing data) and old n = 24.
This work was supported by the Ontario Mental Health Foundation and the Medical Research Council of Canada (grant no. MT14036).