Abstract

Deficits in the ability to recognize emotions in others have been noted in a wide variety of disorders, ranging from the psychiatric to the neurologic. Emotions are vital to social interactions, yet there are currently few standardized neuropsychological measures in common use to assess emotion perception abilities. This study examined the effects of age on performance on the Comprehensive Affect Testing System, a new assessment battery designed to measure perception of emotion via facial affect, prosody, and semantic content. Age was not associated with a significant decline in performance on facial tasks, although a significant age effect emerged when discrete emotions were examined. Age was strongly associated with a decline in performance on prosody and cross-modal tasks, and this decline was independent of the decline in fluid ability that also accompanies the aging process. The results underscore the need for standardized instruments to assess emotion recognition abilities.

Introduction

The ability to decipher emotional information, to monitor one's own and others' emotions, and to use emotional cues to guide one's thinking and behavior has proven to be as important a determinant of life success as traditional measures of intelligence (Goleman, 1995). Impaired emotion perception has been documented in a wide variety of psychiatric and neurologic brain disorders, including schizophrenia, depression, bipolar disorder, alcoholism, Parkinson's Disease, Huntington's Disease, Alzheimer's dementia, frontotemporal dementia, vascular dementia, traumatic brain injury, herpes encephalitis, epilepsy, attention-deficit/hyperactivity disorder, autism, Asperger's disorder, and cortical and subcortical brain damage due to cerebrovascular accidents (Broks et al., 1998; Heilman, Blonder, Bowers, & Crucian, 2000; Heimberg, Gur, Erwin, Shtasel, & Gur, 1992; Gur, Erwin, Gur, Zwil, Heimberg, & Kraemer, 1992; Keane, Calder, Hodges, & Young, 2002; Perry & Miller, 2001; Shimokawa et al., 2001; Stefanatos & Wasserstein, 2001; Townshend & Duka, 2003). Brain-damaged patients who exhibit impairment in emotion processing but are otherwise intact neuropsychologically show serious impairment in social behavior and interpersonal relationships (Damasio, 1994). Given the extent to which emotion perception abilities are impaired across patient populations, such abilities should be assessed during standard neuropsychological evaluations. However, there are currently no emotion perception batteries widely available to clinicians that include normative data (i.e., data on the effect of age on performance). Without such norms, reliable clinical conclusions cannot be drawn from a patient's scores.

The relationship between advancing age and cognitive decline is well documented in the literature (Raz, Gunning-Dixon, Head, Dupuis, & Acker, 1998). The greatest age-related volume reduction has been found in the prefrontal cortex, although the fusiform gyrus, the visual (pericalcarine) cortex, and the striatum also show significant volume loss with age (Raz et al., 1998; Raz, Rodrigue, Kennedy, Head, Gunning-Dixon, & Acker, 2003). Diffusion tensor magnetic resonance imaging (MRI) has shown reduced white matter tract integrity for older adults, which fell linearly with increasing age (O'Sullivan, Jones, Summers, Morris, Williams, & Markus, 2001). Post-mortem studies have found moderate age-associated loss of neurons and neuronal atrophy in temporo-limbic regions including the amygdala, hippocampus, and entorhinal cortex (Kemper, 1994; West, 1993).

Striatal regions and frontal regions have been strongly implicated in emotion perception abilities. Adolphs (2002) reviewed and summarized the literature on the neural components of emotion processing. The brain structures that contribute to the integrated processing of facial expressions include the occipito-temporal cortices, amygdala, orbitofrontal cortex, basal ganglia, and right parietal cortex. Prosodic processing seems to draw on multiple bilaterally distributed brain regions, the roles of which are not equal. Despite the distributed nature of these processes, the right hemisphere, especially the right inferior frontal region, appears to be the most critical component of this system (in conjunction with more posterior regions of the right hemisphere, left frontal regions, and subcortical structures). Each structure appears to process specific auditory features that provide cues for recognizing the emotion. There have been relatively few investigations into the brain structures underlying the processing of emotional lexical material.

Age and Emotion Perception

Given the overlap between the structures involved in emotion perception and those that have been shown to decline with age, one would expect to see a decline in emotion perception abilities with advancing age. Although emotional processing has been examined in a variety of disease processes, far fewer studies have addressed the role of aging in emotional processing. Most of the studies that have been conducted investigated changes in emotional expression with age and have ignored emotion perception (Grunwald et al., 1999). Hall (2001) examined age-related changes in emotion perception across three communication channels: facial affect recognition, prosody recognition, and recognition of verbal content. Strong inverse relations were seen between age and performance in all three channels.

Although many studies have not found an overall age effect for facial affect recognition, examination of individual emotions has revealed a different pattern (Phillips, MacLean, & Allen, 2002). Several studies have found that the ability to identify negative emotions in faces is susceptible to age differences, with recognition of anger, sadness, and fear being the most affected (Calder et al., 2003; McDowell, Harrison, & Demaree, 1994; Moreno, Borod, Welkowitz, & Alpert, 1993; Phillips et al., 2002; Sullivan & Ruffman, 2004). Few studies have shown an effect for positive emotions. There is evidence that the amygdala of older adults might be less sensitive to negative facial expressions, in favor of positive ones (Mather et al., 2004). Interestingly, Calder and colleagues (2003) found that the recognition of disgust was not only preserved in older adults (up to 70 years), but might actually be enhanced with age. It has been proposed that a decline in the perception of negative facial expressions with age might be related to the decrease in the experience of negative emotions that has been found in older adults (Gross, Carstensen, Pasupathi, Tsai, Goetestam-Skorpen, & Hsu, 1997).

Age-related differences in activation patterns in response to facial expressions of emotion have been identified using functional MRI (Fischer, Sandblom, Gavazzeni, Fransson, Wright, & Backman, 2005; Gunning-Dixon et al., 2003). Gunning-Dixon and coworkers (2003) found age differential activation patterns during affect discrimination tasks. Under the emotional discrimination condition, younger adults showed increased activation of the right temporo-limbic cortex (particularly the amygdala), whereas older adults failed to activate limbic regions and instead recruited the anterior cingulate, bilateral prefrontal regions (more extensively than their younger counterparts), and right parietal regions. Older adults did not activate prefrontal cortices during the age discrimination task. Given these findings, in combination with those that show that frontal cortices appear to be most adversely affected by age, it stands to reason that age-related declines in performance on facial affect recognition tasks will become greater with increased task complexity (i.e., tasks with greater executive demands).

A significant decline in recognition of prosody has also been found to accompany the normal aging process (Kiss & Ennis, 2001). On a prosody identification task, participants over 60 made more than twice as many errors as, and performed more than 2 standard deviations below, 29–39-year-olds. Orbelo, Testa, and Ross (2003) found that the pattern of performance across affective prosody comprehension tasks in older participants resembled the pattern found after right-hemisphere damage. Results from a more recent study further supported a “right-hemisphere aging” hypothesis (Prodan, Orbelo, & Ross, 2008). Grunwald et al. (1999) found that accuracy in the perception of emotional and non-emotional lexical stimuli declines with age. Performance was stable from the 20s to the 50s, with a notable decline in participants who were older than 60 years. Older participants also evaluated non-emotional lexical stimuli as significantly more intense than did younger participants.

Purpose of the Present Study

The purpose of the current study was to examine the effects of age on performance on the abbreviated version of the Comprehensive Affect Testing System (CATS), a new assessment battery that measures emotion perception across three channels (Froming, Levy, Schaffer, & Ekman, 2006). As with any measure, it is crucial to know how performance is affected by normal aging, particularly as many of the disorders that affect these abilities are seen primarily in older adults (e.g., dementia and stroke). Failure to understand the effects of normal aging on CATS performance would lead to faulty clinical inferences and might exaggerate the severity of any deficits exhibited.

Materials and Methods

Participants

Sixty healthy, adult participants between the ages of 20 and 79 were recruited between October 2003 and December 2004 in both the San Francisco Bay Area (October 2003–July 2004) and the New York Metropolitan Area (September 2004–December 2004). Ten participants were recruited from within each of six age ranges: 20–29, 30–39, 40–49, 50–59, 60–69, and 70–79. Efforts were made to obtain a representative sample of the general population. These efforts involved gathering information about gender, age, and educational attainment during a telephone screening session. Additional screening instruments were administered to ensure that volunteers met inclusion and exclusion criteria. It was decided before data collection began that there would not be enough data points to draw meaningful conclusions regarding ethnicity. Therefore, in order to avoid any unwanted variance, only Caucasian respondents were eligible to participate.

General inclusion and exclusion criteria.

Inclusion in this study depended on results from a two-tiered screening process (telephone screen and additional in-person screening). Inclusion/exclusion criteria included: fluency in English; vision adequate to read a newspaper (with glasses/contact lenses); hearing adequate to engage in normal conversation; no past or current diagnosis of significant mental illness; no past or current medical disorders, or use of medications, with known neuropsychological consequences; and no current or past drug or alcohol abuse or dependence. A telephone questionnaire was employed as an initial attempt to screen out persons who did not meet these criteria.

Characteristics of the total sample.

Frequency data were calculated for gender, education level, and handedness for the total sample and for each decade age group. These data are presented in Table 1. Means and standard deviations for the total sample and for each of the six decade age groups were calculated for the following variables: age, years of education, and estimated Verbal IQ, Performance IQ (PIQ), and Full Scale IQ (FSIQ) (Table 2). One-way analyses of variance (ANOVAs) were performed for each of these five interval variables; except for age, group means did not differ significantly on any of the variables examined, including the three IQ estimates. All IQ means fell in the high average range. Although the IQ scores for the sample were skewed towards the higher end of the IQ spectrum, it was more important for the purposes of this study that the age groups did not differ significantly on these variables.

Table 1

Participant characteristics for total sample and by age group: gender, education, and handedness

                          Total       20–29   30–39   40–49   50–59   60–69   70–79
                          (N = 60)
Gender
  Female                  33
  Male                    27
Education
  High school diploma     19
  Some college            20
  College graduate        14
  Post-graduate
Handedness
  Right                   50          10
  Left
  Mixed

Note: Values represent number of participants with specified characteristic.

Table 2

Participant characteristics for total sample and by age group: age, years of education and estimated IQ

                      Total          20–29        30–39        40–49        50–59        60–69        70–79
                      (N = 60)       (n = 10)     (n = 10)     (n = 10)     (n = 10)     (n = 10)     (n = 10)
Age                   49 (16.7)      26 (3.2)     35 (3.4)     44 (2.8)     55 (2.6)     65 (2.6)     74 (3.0)
Years of education    14 (2.1)       15 (2.0)     14 (2.6)     14 (2.1)     15 (1.8)     15 (2.1)     14 (2.9)
Estimated IQ
  Verbal IQ           112 (14)       118 (11)     113 (13)     105 (20)     109 (11)     111 (11)     115 (17)
  Performance IQ      113 (19)       115 (23)     117 (17)     114 (21)     105 (21)     115 (15)     111 (22)
  Full Scale IQ       113 (15)       118 (15)     116 (14)     109 (21)     110 (11)     113 (23)     113 (20)

Note: Values are mean (SD).

Instruments

Screening instruments.

Demographic information was collected during the telephone screening session. A structured telephone-screening interview elicited information about age, brief medical and mental health history, educational attainment, occupation, ethnicity, and alcohol use. Additional screening measures administered immediately prior to the onset of CATS testing further identified people who did not meet inclusion/exclusion criteria. Screening measures included: brief mood measures (Beck Depression Inventory-II and Beck Anxiety Inventory); screening items from the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), to further exclude those with histories of major mental illness; IQ estimators (the Information and Matrix Reasoning subtests of the Wechsler Adult Intelligence Scale-III; WAIS-III); and the Mini-Mental State Examination (MMSE), to exclude participants with neurocognitive deficits.

The Comprehensive Affect Testing System.

The CATS is a new assessment battery designed to evaluate emotion perception abilities (Froming et al., 2006). The CATS can be administered via two formats: paper and pencil, and computerized administration. The current study was limited to the computerized administration.

The CATS utilizes facial emotion poses created by Ekman and Friesen (1976). The 13 CATS subtests use the same set of male and female faces posed in six emotional expressions (happy, sad, angry, surprised, fearful, and disgusted) as well as a neutral expression. These faces were chosen because they have been shown to be recognized universally across cultures (Ekman, 1994). The subtests are designed to test facial and prosodic discrimination and identification, facial affect matching with and without verbal denotation, and prosodic processing with and without verbal denotation (and with conflicting or congruent semantic content). A male actor speaks on the prosodic subtests. The sets of facial affect expressions are counterbalanced for serial position and number of repetitions of each face and affective state. Facial stimuli are of a uniform size (4.5 cm in height), regardless of the number of stimuli presented on the screen. Data collected as part of the current study were used to develop an abbreviated version of the CATS (CATS-A). The methods involved in the development of this version, including reliability data for the two versions, are discussed in the CATS manual (Schaffer, Gregory, Froming, Levy, & Ekman, 2006), which is available on the publisher's website: http://www.psychologysoftware.com/CATS.htm.

All analyses for the current study are based on CATS-A data, as this version will likely be more widely used in clinical settings. The following is a list and descriptions of the 13 subtests in their order of administration (item totals reflect the number of items on the CATS-A):

Identity Discrimination (12 items).

Two faces are shown at midline of the examinee's field of view. The examinee must decide if the faces are the same or different actors. The portraits are presented in same sex pairs. Each member of a pair expresses the same emotion.

Affect Discrimination (12 items).

The examinee is shown the same actor but the emotional expression may be the same or different. The orientation of the visual stimuli and response options are the same as in Subtest 1.

Nonemotional Prosody Discrimination (6 items).

No faces are shown. A pair of non-affective sentences (e.g., The boy opened the window) is heard on each trial. They are either both said as simple declarative sentences, as questions, or one of each. The examinee indicates whether the sentences are the same or different.

Emotional Prosody Discrimination (6 items).

No faces are shown. A pair of non-affective sentences is read by an actor expressing either happiness, sadness, anger, fright, or neutrality in his voice. The examinee indicates whether the two sentences reflect the same or different emotions.

Name Affect (6 items).

The examinee is asked to identify the affect expressed by the single face presented by selecting a verbal label (happy, sad, angry, fear, disgust, or neutral).

Name Emotional Prosody (12 items).

No faces are shown. One sentence is read at a time by the actor. The examinee selects which emotion (happiness, sadness, anger, fright or neutrality) his voice expresses.

Match Affect (12 items).

One face is shown at the top of the screen, above five others, each of which expresses a different emotion. The examinee selects the one of the five faces that expresses the same emotion as the top face.

Select Affect (6 items).

Five portraits of the same individual are presented on each trial, each portraying a different emotion. The name of the target emotion is displayed at the top of the screen and announced orally (e.g., “Which face is angry?”).

Conflicting Prosody/Meaning—Attend to Prosody (12 items).

No faces are shown. The examinee is instructed to ignore the affective meaning represented in the sentence and to focus on the emotion expressed by the voice.

Conflicting Prosody/Meaning—Attend to Meaning (12 items).

No faces are shown. The same sentences are presented as in the preceding subtest (Attend to Prosody), but the examinee is instructed to focus on the affective meaning represented by the sentence and to ignore the emotion expressed by the voice.

Match Emotional Prosody to Emotional Face (12 items).

A single sentence is read by the actor on each trial. The examinee clicks on the face that exhibits the corresponding emotion. The sentences are identical to those used in the Name Emotional Prosody subtest.

Match Emotional Face to Emotional Prosody (12 items).

An emotional face is presented at the top of the screen. The examinee indicates which of three sentences expresses the same emotion as shown on the face.

Three Faces Test (24 items).

A trio of portraits of the same gender is displayed. Two of the portraits show the same individual expressing different emotions. The examinee must select the two portraits that express the same emotion.

Procedure

Data collection.

After approval was obtained from the Institutional Review Board of the Pacific Graduate School of Psychology, volunteers were recruited using referrals, flyers, Internet postings, and announcements at community and senior centers. Data were collected at the Pacific Graduate School of Psychology and at local libraries and community centers in San Francisco and New York. Prior to the administration of any in-person screening measures, informed consent was obtained. After the consent form was reviewed and signed by both the participant and the investigator, the following instruments were administered in this order: health-screening questionnaire (developed by the primary investigator), Annett Handedness Survey (Annett, 1967), Beck Depression Inventory-Second Edition (BDI-II; Beck, Steer, & Brown, 1996), Beck Anxiety Inventory (BAI; Beck & Steer, 1993), screening items from the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; First, Spitzer, Gibbon, & Williams, 1997), MMSE (Folstein, Folstein, & McHugh, 1975), Information and Matrix Reasoning subtests of the WAIS-III (Wechsler, 1997), and the CATS. Participants who either finished all the testing or were found ineligible based on their performance on the second round of screening measures were compensated $25.00 for their time and effort.

Design.

This was a correlational study that examined age as the predictor variable and performance on the CATS-A composite scales as the criterion variables. Other factors considered as possible moderating variables included IQ and gender. A power analysis was conducted to determine whether 60 participants would allow confident rejection or acceptance of the study hypotheses. Based on this analysis and a review of the literature, a total of 60 participants was judged sufficient to provide adequate power (.80) for a multiple regression correlation analysis, assuming the medium to large effect size expected for this study.
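
As a rough illustration of this kind of calculation, the sketch below computes achieved power for the omnibus F-test of a multiple regression from the noncentral F distribution. The predictor count, the effect sizes, and the approximation used for the noncentrality parameter are illustrative assumptions, not the authors' actual power analysis.

```python
# Minimal sketch of a power calculation for a multiple regression F-test.
# Assumptions (not from the article): 4 predictors, alpha = .05, and the
# common approximation noncentrality = f2 * n for Cohen's f-squared.
from scipy import stats

def regression_power(n, n_predictors, f2, alpha=0.05):
    """Achieved power of the omnibus F-test for a multiple regression."""
    df_num = n_predictors
    df_den = n - n_predictors - 1
    noncentrality = f2 * n                       # approximate noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return stats.ncf.sf(f_crit, df_num, df_den, noncentrality)

for f2 in (0.15, 0.35):                          # conventional medium and large effects
    print(f"n = 60, 4 predictors, f2 = {f2}: power = {regression_power(60, 4, f2):.2f}")
```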

Results

Preliminary Analyses

As there are 11 emotional subtests on the CATS-A and there were data from only 60 participants, the criterion variables were reduced from 11 subtests to five composite scales (one of which comprised only a single subtest). Combining subtests into composite scales also served to increase the reliability of these variables relative to individual subtest reliabilities (see User's Manual for reliability data). The five composite scales were derived through a combination of conceptual reasoning and empirical data analysis. Subtests were combined into a scale if they correlated at or above r = .400 with at least one other subtest in the scale and no lower than r = .300 with every other subtest in the scale. As expected, cross-modal tasks correlated significantly with many of the “simpler” (i.e., single-channel) subtests, as well as with each other. In order to draw conclusions regarding discrete channels, these subtests were excluded from single-channel scales on conceptual grounds, despite meeting the aforementioned criteria (and vice versa with respect to inclusion of single-channel subtests in the “Cross-Modal Scale”). Subtest 10 (Conflicting Prosody/Meaning–Attend Meaning) did not meet criteria for inclusion in any other scale, and as a result it constituted its own “Lexical Scale”. Using these criteria, five scales emerged, which are reported in Table 3. Scores for the five scales were calculated by adding the total number correct for each subtest in a scale and dividing this sum by the total possible correct, yielding a percent correct for each scale. Descriptive data for each of the five scales are presented in Table 4.
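
The following sketch illustrates the two rules described above: the correlation criterion for grouping subtests into a scale, and the percent-correct scoring of a composite scale. The subtest names, item counts, and correlation values are hypothetical stand-ins; this is not the authors' code.

```python
# Hypothetical item counts and scores for two subtests forming one composite scale.
subtest_items = {"affect_discrimination": 12, "name_affect": 6}
correct_counts = {"affect_discrimination": 10, "name_affect": 5}

def scale_percent_correct(scale_subtests, correct, items):
    """Percent correct for a scale: summed correct items over summed possible items."""
    total_correct = sum(correct[s] for s in scale_subtests)
    total_items = sum(items[s] for s in scale_subtests)
    return 100.0 * total_correct / total_items

def meets_scale_criteria(candidate, scale_members, corr):
    """Correlation rule from the text: r >= .400 with at least one member of the
    scale and no lower than r = .300 with any member."""
    rs = [corr[(candidate, m)] for m in scale_members]
    return max(rs) >= 0.400 and min(rs) >= 0.300

print(scale_percent_correct(["affect_discrimination", "name_affect"],
                            correct_counts, subtest_items))

# Hypothetical correlations used only to exercise the criterion check.
corr = {("select_affect", "match_affect"): 0.47, ("select_affect", "three_faces"): 0.35}
print(meets_scale_criteria("select_affect", ["match_affect", "three_faces"], corr))
```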

Table 3

Comprehensive Affect Testing System: abbreviated composite scales used in hypothesis testing

Subtest                                    Composite scale
2   Facial affect discrimination           Simple facial
5   Name affect                            Simple facial
8   Select affect                          Complex facial
7   Match affect                           Complex facial
13  3 Faces                                Complex facial
4   Emotional prosody discrimination       Prosody
6   Identify emotional prosody             Prosody
9   Conflict—attend prosody                Prosody
10  Conflict—attend meaning                Lexical
11  Match prosody to face                  Cross-modal
12  Match face to prosody                  Cross-modal

Table 4

Means (% correct) and standard deviations for composite scales for total sample and by age decade

Composite scale    Total          20–29         30–39         40–49         50–59         60–69         70–79
                   (n = 60)       (n = 10)      (n = 10)      (n = 10)      (n = 10)      (n = 10)      (n = 10)
Simple facial      82.7 (12.8)    84.4 (14.1)   73.3 (10.4)   83.9 (8.1)    86.7 (9.9)    85.0 (12.8)   82.8 (17.7)
Complex facial     75.0 (12.3)    83.3 (10.9)   75.5 (11.8)   75.5 (11.3)   78.1 (10.3)   73.3 (9.6)    64.1 (13.5)
Prosody            77.6 (15.8)    86.0 (4.9)    91.3 (5.0)    81.0 (13.5)   77.0 (13.8)   66.0 (16.3)   64.0 (17.0)
Lexical            81.5 (21.2)    91.7 (5.6)    84.2 (19.8)   73.3 (26.6)   84.2 (21.3)   84.2 (19.4)   71.7 (25.8)
Cross-modal        74.9 (14.4)    87.5 (6.5)    82.1 (9.2)    79.6 (10.7)   72.9 (11.2)   67.9 (15.1)   59.2 (13.2)

Note: Values are mean percent correct (SD).

Pearson correlations were obtained between age and Subtests 1 and 3, the non-emotional “control” tasks that measure facial identity discrimination and non-emotional prosody discrimination (descriptions provided earlier). The relationships between age and performance on the control tasks were not statistically significant, although both approached significance (Subtest 1: r = −.210, p = .108; Subtest 3: r = −.239, p = .066); it should be noted that no participant missed more than 3 of 12 items on the facial identity task or more than one of six items on the non-emotional prosody task. Nevertheless, these tasks were included as covariates in the main analyses.

Correlations between the five new composite scales and the demographic/descriptive variables (gender, estimated Full Scale IQ, and Matrix Reasoning raw score [MR raw]) were examined to identify additional covariates to be used during hypothesis testing (Table 5). There was a significant correlation between gender and performance on the Simple Facial Scale (p < .05) and the Complex Facial Scale (p < .01), with women outperforming men on these scales. Fluid ability, as defined by WAIS-III MR raw, correlated strongly with the Prosody and Cross-Modal Scales (p < .001), and less so with the Simple Facial and Complex Facial Scales (p < .05 and p < .01, respectively). There was not a significant association between MR raw and the Lexical Scale. Although the Matrix Reasoning subtest has been described as a measure of fluid ability, it also has a strong visuospatial component. Therefore, the correlations between this test and performance on the facial scales might reflect not only an association between facial affect perception and fluid abilities, but also one between facial affect perception and visuospatial abilities. It is noteworthy, however, that MR raw score was more strongly associated with performance on the Prosody Scale, which does not involve visual demands. Nevertheless, using performance on Matrix Reasoning as a covariate might also serve to partial out the relative contribution of visuospatial reasoning to performance on facial affect tasks. Although the correlation between FSIQ and the Simple Facial Scale was significant, further examination indicated that this was due to one data point that was an extreme outlier on both variables (i.e., a Simple Facial Scale z-score of −3.0 and an FSIQ of 154). When this individual was removed from the analysis, the correlation was no longer significant (r = −.192, p = .144). Therefore, FSIQ was not included as a covariate in the analysis for this scale. This participant's data were retained in all other analyses.
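
A minimal sketch of this screening step, computing a Pearson correlation and re-checking it after dropping an extreme case, is shown below. The data frame is synthetic and the column names are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in data (the real per-participant CATS-A file is not reproduced here).
df = pd.DataFrame({
    "fsiq": rng.normal(113, 15, 60),
    "simple_facial": rng.normal(82.7, 12.8, 60),
})

r, p = stats.pearsonr(df["fsiq"], df["simple_facial"])
print(f"FSIQ vs. Simple Facial: r = {r:.3f}, p = {p:.3f}")

# Re-check after dropping any case that is an extreme outlier on the scale score
# (the text flags one participant with a scale z-score of -3.0).
z = (df["simple_facial"] - df["simple_facial"].mean()) / df["simple_facial"].std()
trimmed = df[z > -3]
r2, p2 = stats.pearsonr(trimmed["fsiq"], trimmed["simple_facial"])
print(f"Without extreme outliers: r = {r2:.3f}, p = {p2:.3f}")
```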

Table 5

Pearson correlations between composite scales and demographic/descriptive variables

                          Gender     FSIQ       MR score   Simple facial   Complex facial   Prosody    Lexical
Gender                    –
Full Scale IQ             .179       –
Matrix Reasoning score    −.036      .704***    –
Simple facial             −.292*     −.300*     −.277*     –
Complex facial            −.346**    .206       .363**     .327*           –
Prosody                   −.037      .154       .431***    .057            .421***          –
Lexical                   −.214      .026       .094       .178            .425***          .356**     –
Cross-modal               .087       .140       .416***    .221            .503***          .726***    .319*

Notes: *p < .05; **p < .01; ***p < .001.

Forced-entry hierarchical regression analyses were conducted for all five scales. For each regression analysis, performance on the non-emotional task(s) and the covariate(s) were entered into the regression equation first, followed by age. For the non-emotional tasks, Subtest 1 (Facial Identity Discrimination) was entered as a covariate for the facial scales and the Cross-Modal Scale, and Subtest 3 (Non-emotional Prosody Discrimination) was used for the Prosody Scale, Lexical Scale, and Cross-Modal Scale. Of note, Subtest 3 was the only covariate entered in the analysis for the Lexical Scale. Results of the regression analyses are reported in Table 6.
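
The sketch below illustrates the general form of a forced-entry hierarchical regression with the R-squared change tracked at each step, loosely following the Simple Facial Scale steps as a template. The data are synthetic and the variable names are hypothetical; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
# Synthetic stand-in data; column names are hypothetical, not the authors' data file.
df = pd.DataFrame({
    "subtest1": rng.integers(9, 13, n),      # non-emotional control task (items correct)
    "mr_raw": rng.integers(10, 26, n),       # Matrix Reasoning raw score
    "gender": rng.integers(0, 2, n),
    "age": rng.integers(20, 80, n),
})
df["simple_facial"] = 80 + 0.5 * df["subtest1"] - 0.1 * df["age"] + rng.normal(0, 5, n)

steps = [["subtest1"],                                  # Step 1: control task
         ["subtest1", "mr_raw"],                        # Step 2: add fluid ability
         ["subtest1", "mr_raw", "gender"],              # Step 3: add gender
         ["subtest1", "mr_raw", "gender", "age"]]       # Step 4: add age

prev_r2 = 0.0
for i, predictors in enumerate(steps, start=1):
    model = sm.OLS(df["simple_facial"], sm.add_constant(df[predictors])).fit()
    print(f"Step {i}: R2 = {model.rsquared:.3f}, delta R2 = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```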

Table 6

Forced entry hierarchical multiple regression analyses for simple facial scale, complex facial scale, prosody scale, and cross-modal scale using age, gender, and fluid ability as predictors

Criterion               Step / predictor   Beta     t        Sig. t   F        Sig. F   R²/ΔR²
Simple facial scale     Step 1                                        2.136    .149     .036/.036
                          Subtest 1        .190     1.461    .149
                        Step 2                                        4.377    .017     .135/.099
                          Subtest 1        .246     1.949    .056
                          MR raw           −.320    −2.533   .014
                        Step 3                                        5.319    .003     .225/.090
                          Subtest 1        −.243    2.015    .049
                          MR raw           −.330    −2.736   .008
                          Gender           −.300    −2.523   .015
                        Step 4                                        3.922    .007     .225/.000
                          Subtest 1        .245     1.993    .051
                          MR raw           −.323    −2.356   .022
                          Gender           −.299    −2.493   .016
                          Age              .017     .121     .904
Complex facial scale    Step 1                                        12.333   .001     .178/.178
                          Subtest 1        .422     3.512    .001
                        Step 2                                        10.045   .000     .264/.086
                          Subtest 1        .370     3.175    .002
                          MR raw           .298     2.560    .013
                        Step 3                                        10.901   .000     .373/.109
                          Subtest 1        .366     3.378    .001
                          MR raw           .287     2.641    .011
                          Gender           −.330    −3.090   .003
                        Step 4                                        9.483    .000     .413/.040
                          Subtest 1        .336     3.141    .003
                          MR raw           .183     1.531    .131
                          Gender           −.338    −3.240   .002
                          Age              −.229    −1.911   .061
Prosody scale           Step 1                                        2.534    .117     .043/.043
                          Subtest 3        .206     1.592    .117
                        Step 2                                        6.446    .003     .187/.145
                          Subtest 3        .046     .351     .727
                          MR raw           .413     3.156    .003
                        Step 3                                        11.689   .000     .389/.202
                          Subtest 3        .013     .114     .910
                          MR raw           .182     1.436    .157
                          Age              −.512    −4.268   .000
Cross-modal scale       Step 1                                        4.055    .023     .126/.126
                          Subtest 1        .343     2.573    .013
                          Subtest 3        .031     .233     .816
                        Step 2                                        6.651    .001     .266/.140
                          Subtest 1        .326     2.638    .011
                          Subtest 3        −.121    −.913    .365
                          MR raw           .406     3.236    .002
                        Step 3                                        13.270   .000     .496/.230
                          Subtest 1        .258     2.471    .017
                          Subtest 3        −.133    −1.203   .234
                          MR raw           .161     1.386    .171
                          Age              −.551    −4.957   .000

Facial Affect

Gender and MR raw (i.e., fluid ability and visuospatial reasoning) each explained a significant amount of variance in performance on the Simple Facial Scale (p < .05), while the contribution of performance on Subtest 1 approached significance (p = .051). Age did not explain any additional variance in performance (p = .904). With respect to the Complex Facial Scale, performance on Subtest 1, MR raw, and gender each accounted for a significant amount of the variance in performance on this scale (p < .001, p < .05, and p < .01 respectively). When entered into the model, age only accounted for an additional 4% of the variance (p = .06). While the amount of variance explained by performance on Matrix Reasoning was no longer significant when age was accounted for, the amount of unique variance explained by performance on Subtest 1 and gender remained significant (p < .01). In summary, while age explained almost no variance on the simple facial tasks, its unique contribution to performance on the complex tasks approached significance. The results suggest that the ability to detect non-emotional differences in faces contributes relatively more to performance on complex facial tasks than it does on the simpler affect recognition tasks. Further, it explained more unique variance than did age. Gender accounted for a significant amount of variance in performance on both facial scales.

Emotional Prosody and Cross-Modal

Performance on the non-emotional prosody discrimination task alone did not explain a significant amount of variance in performance on the Prosody Scale, which suggests that the effects seen were independent of perceptual difficulties. Fluid ability explained an additional 14% of the variance in performance (p < .01) when entered into the model, yet age explained an additional 20.2% of the variance in performance (p < .0001), and when age was entered into the model the effect for fluid ability was no longer significant (despite the strong correlation between age and MR raw score). With respect to the Cross-Modal Scale, performance on Subtest 1 explained a significant amount of variance in performance (p < .05), while performance on Subtest 3 did not (p = .816). Fluid ability accounted for an additional 14% of the variance in performance on this scale (p < .01); however, when age was entered into the model, the amount of unique variance explained by fluid ability was no longer significant (p = .234), whereas age accounted for an additional 23% of the variance in performance (p < .0001). The variance accounted for by the facial identity task remained significant in the final model (p < .05). Together, these three variables explained approximately 50% of the variance in performance on the Cross-Modal Scale.

Given the strong correlations seen between the single-channel scales and the Cross-Modal Scale, further analyses were conducted to see how much of the variance in performance on the Cross-Modal Scale was explained by performance on each of the “simpler” (i.e., single-channel) scales, and whether age explained any additional variance in performance. The two facial scales, the Prosody Scale, and age were entered into a regression equation (Table 7). When both facial scales were entered into the model, the Simple Facial Scale did not explain a significant amount of the variance in performance, whereas the Complex Facial Scale explained 20.7% of the variance (p < .0001); however, in the final model this effect was reversed, and in fact the Complex Facial Scale was the only scale that did not explain a significant amount of unique variance in performance. In the final model of this analysis, the Simple Facial Scale, the Prosody Scale, and age each explained a significant amount of unique variance in performance (p < .05, p < .0001, and p < .001, respectively). It is important to note that although performance on the Prosody Scale explained the greatest amount of variance in performance on the Cross-Modal Scale, age explained a large amount of additional variance (i.e., the age-related decline in performance was due not only to the strong inverse correlation between age and prosody recognition, but also to the increased complexity of these tasks).

Table 7

Forced Entry Hierarchical Multiple Regression Analyses for Cross-Modal Scale using performance on single-channel scales as predictors

Criterion            Step / predictor         Beta     t        Sig. t   F        Sig. F   R²/ΔR²
Cross-modal scale    Step 1                                              2.988    .089     .049/.049
                       Simple facial scale    .221     1.729    .089
                     Step 2                                              9.824    .000     .256/.207
                       Simple facial scale    .064     .529     .599
                       Complex facial scale   .482     3.987    .000
                     Step 3                                              26.507   .000     .587/.330
                       Simple facial scale    .121     1.328    .190
                       Complex facial scale   .195     1.938    .058
                       Prosody scale          .637     6.692    .000
                     Step 4                                              26.927   .000     .662/.075
                       Simple facial scale    .205     .368     .021
                       Complex facial scale   .108     .134     .262
                       Prosody scale          .450     .415     .000
                       Age                    −.363    −3.498   .001

Lexical

Only Subtest 3 (Non-emotional Prosody Discrimination) was used as a covariate for the Lexical Scale (Subtest 10 – Conflicting Prosody/Meaning-Attend Meaning). It did not explain a significant amount of variance in performance on this task (p = .808). Age did not explain any additional variance in performance on this task (p = .905). Advancing age does not appear to be associated with a decline in performance on this lexical task. Given the robust results that show a decline in the ability to recognize emotional prosody with age, it can be concluded that the prosodic component of this task (sentences were read in either congruent or incongruent prosodic tone) did not interfere with the ability to understand the emotional meaning of the sentence (i.e., participants were able to ignore the prosodic tone in favor of attending to the lexical content).

Fig. 1 shows the amount of unique variance in performance explained by the non-emotional tasks, gender, fluid ability, and age for all five composite scales.

Fig. 1

Amount of unique variance in performance on composite scales that is explained by age and covariates.


Post hoc Analyses

Facial affect.

Previous research has indicated that age effects for facial affect processing may be moderated by discrete emotion type, as well as by emotional valence. Given that the age effects for the facial scales were less robust than those for the prosody and cross-modal scales, additional analyses were conducted to examine whether stronger effects existed with respect to discrete emotion types. Individual items were classified based on emotion type, and these items were grouped into emotion categories (i.e., discrete emotions and valence). Facial stimuli were taken from Subtests 5, 7, 8, and 13, and were categorized as angry, disgusted, fearful, sad, happy, or surprised (eight items in each category). Broad valence was also examined; the negative category included angry, disgusted, and fearful faces, while the positive category included happy and surprised faces. It should be noted that the “All Positive” variable therefore had fewer items than the “All Negative” variable. Correlations were obtained between age and each emotion category. A weak correlation was seen between age and performance on negative facial items (r = −.304, p < .05), while the correlation between age and positive items approached significance (r = −.247, p = .057). When individual emotions were examined, a clearer pattern emerged. Age was significantly associated with performance on sad (p < .001), fearful (p < .01), and surprised (p < .05) facial items, but it was not associated with performance on angry, disgusted, or happy facial items.
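
A brief sketch of this item-level procedure, assuming hypothetical item labels and synthetic 0/1 item scores, is given below: items are pooled into emotion categories and each category total is correlated with age.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 60
# Synthetic item-level data: one 0/1 column per facial item; labels are hypothetical.
items = pd.DataFrame(rng.integers(0, 2, size=(n, 4)),
                     columns=["sad_1", "sad_2", "fear_1", "fear_2"])
items["age"] = rng.integers(20, 80, n)

emotion_categories = {"sad": ["sad_1", "sad_2"], "fearful": ["fear_1", "fear_2"]}
for emotion, cols in emotion_categories.items():
    category_score = items[cols].sum(axis=1)     # items correct within the category
    r, p = stats.pearsonr(items["age"], category_score)
    print(f"{emotion}: r = {r:.3f}, p = {p:.3f}")
```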

Response latencies.

The computerized version of the CATS calculates the response latencies for both correct and incorrect responses for each subtest. Given potential cohort effects that might have confounded the results for response latencies (e.g., unfamiliarity with computers and/or reduced psychomotor speed of older adults), the examiner was responsible for physically clicking on the response that was indicated by the examinee for all of the participants in the study, regardless of age. Due to scoring errors in the version of the CATS that was used in collecting the current data, accurate latencies were not available for two of the subtests administered (Subtests 10 and 11). Analyses were undertaken to examine the association between age and mean response latencies on the remaining 11 subtests. Mean response latencies for incorrect items were not available for the non-emotional discrimination tasks, as very few people responded incorrectly on these tasks.

Pearson correlations were used to examine the association between age and correct/incorrect response latencies for each subtest. There was a significant positive correlation between age and time to respond on the non-emotional facial discrimination task (Subtest 1; r = .311, p = .016), yet the same was not true for the non-emotional prosody task. With respect to correct response latencies on emotional tasks, significant positive correlations (p < .01) were seen between age and response time for the first two facial affect tasks (Subtests 2 and 5), yet none of the remaining seven correlations were significant (i.e., complex facial, prosody, or cross-modal tasks). Given that Subtests 1, 2, and 5 were among the earliest (and simplest) tasks administered, these positive findings might represent the relative difficulty for older participants to “get into set” on the facial tasks, particularly as similar results were not seen for more complex facial tasks and for prosody and cross-modal tasks. Therefore, age does not appear to be a factor in the amount of time it takes for adults to respond to items correctly. These results are particularly striking when one considers the robust age effects for accuracy that were seen for the prosody and cross-modal tasks. In contrast to the results regarding correct response latencies, several significant correlations were seen between age and response latencies when items were responded to incorrectly; however, these findings were limited to tasks for which prosody was a main component (Subtests 6, 11, and 12; p < .01). Total mean latencies for correct and incorrect responses on these tasks were compared using paired sample t-tests, and the results invariably showed that response latencies for incorrect items were longer than those for correct items, regardless of age (p = .000). Taken together, these results indicate that the longer participants spent responding to prosody items, the more likely they were to be incorrect, yet this effect was more pronounced in older adults. This might speak to the greater working memory demands involved in prosody tasks (i.e., while the stimuli and response choices remain on the screen during facial tasks, the prosodic stimuli were presented once and were not available again unless a repetition was requested). The implications of these results will be discussed subsequently.
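
The latency analyses described above amount to Pearson correlations between age and mean latency plus paired t-tests contrasting correct and incorrect items; a minimal sketch with synthetic data and hypothetical column names follows.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 60
# Synthetic per-participant latency summaries (seconds); column names are hypothetical.
lat = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "correct_latency": rng.normal(3.0, 0.8, n),
    "incorrect_latency": rng.normal(4.0, 1.2, n),
})

# Association between age and mean latency for correctly answered items.
r, p = stats.pearsonr(lat["age"], lat["correct_latency"])
print(f"Age vs. correct-item latency: r = {r:.3f}, p = {p:.3f}")

# Paired comparison of each participant's mean latency on incorrect vs. correct items.
t, p = stats.ttest_rel(lat["incorrect_latency"], lat["correct_latency"])
print(f"Incorrect vs. correct latency: t = {t:.2f}, p = {p:.3f}")
```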

Discussion

Age was not found to be associated with performance on lexical or facial affect tasks (although the age effect for the Complex Facial Scale approached significance). However, advancing age was strongly associated with a decline in performance on prosody and cross-modal tasks. The age effects seen were independent of fluid ability and visuospatial reasoning (as measured by the Matrix Reasoning subtest of the WAIS-III) and of performance on the control tasks, as these were accounted for in the analyses.

Facial Affect

In a study that examined emotion recognition abilities in unilaterally brain-damaged individuals, there were significant differences in performance between groups on identification tasks but not on discrimination tasks (Borod et al., 1998). The authors surmised that discrimination tasks likely engage perceptual rather than emotional processing. In the current study, performance on both facial scales drew on perceptual processes, as both were associated with performance on the non-emotional facial identity discrimination task. In fact, performance on the non-emotional discrimination task accounted for a greater amount of unique variance than age on both the simple and complex facial tasks. Given previous research that has identified age effects for affect recognition tasks, it stands to reason that the lack of any age effect for the Simple Facial Scale resulted from a lack of power in the current study (either due to the small sample size or due to the tasks themselves). Both of the tasks that are included in the Simple Facial Scale provide cues (perceptual and verbal) to aid the examinee, which might have made them too easy (i.e., too insensitive) to identify effects that do in fact exist.

Although the age effect for the Complex Facial Scale did not reach statistical significance, the complete lack of an effect for the Simple Facial Scale warrants further discussion regarding the differences between these two scales. The three tasks that are included in the Complex Facial Scale (Select Affect, Match Affect, and 3 Faces) are more ambiguous and have greater problem-solving demands than those of the Simple Facial Scale. As the same stimuli were used for all facial tasks, it is safe to assume that the greater contribution of age to performance on the Complex Facial Scale can be attributed to the nature of these tasks. Specifically, these tasks involved choosing one of five emotional faces to match a label (Select Affect), matching an emotional face to one of five emotional faces (Match Affect), or matching two of three faces to each other (3 Faces). The response choices in these tests often included “distractor” choices that look very similar to the correct response choice. For example, if the correct response was a fearful face, there would be a surprised face among the choices. If the examinee gave only a cursory glance at the response choices and saw the distractor before the correct answer, he or she might have chosen this response. Hence, these tasks appear to require a greater amount of decision-making capacity than the simpler discrimination and identification tasks. Further, because these tasks appear to be somewhat easy, some participants might not have taken the time that was actually necessary to detect subtle differences (i.e., they responded impulsively). Given research findings regarding reduced integrity of frontal regions in older adults, it is possible that the decision-making demands of these tasks were more difficult for these participants. To summarize, it is unlikely that the greater age effects on the Complex Facial Scale resulted from difficulty recognizing facial affect per se, given the lack of age effects for the Simple Facial Scale, but rather from the increased complexity of the task demands. It is important to note that these more complex tasks might actually approximate real-life interactions better than the discrimination and identification tasks (e.g., emotion labels are not provided as cues in everyday interactions), and these results provide information regarding the mechanism behind the age effect seen. As previously stated, the age effects described for the Complex Facial Scale were independent of fluid ability and/or perceptual skills, although perceptual abilities do appear to contribute significantly to performance.

Post hoc analyses revealed significant age effects for facial affect recognition when emotional valence and discrete emotion categories were examined. Previous research has suggested that an age effect exists for negative, but not for positive, facial expressions of emotion (Mather et al., 2004; McDowell et al., 1994). The current findings for broad valence categories are consistent with these findings; however, prior studies have found recognition of anger to be one of the negative emotions affected (Fischer et al., 2005), whereas recognition of angry faces was not affected by age in the current study. Recognition of disgust also was not affected by age, which is consistent with previous research (Calder et al., 2003). Despite the absence of age effects for the Simple Facial Scale (and the non-significant effect for the Complex Facial Scale), the ability to recognize sad and fearful faces was found to be strongly affected by age, which is also consistent with previous research (Ruffman, Henry, Livingstone, & Phillips, 2008). No age effect was seen for happy faces. From an evolutionary standpoint, the pattern of age effects seems somewhat intuitive. It stands to reason that the ability to recognize sadness and fear is minimized as friends and partners become ill and pass away, as a way to defend against inevitable truths in one's own life. Conversely, recognizing anger, happiness, and disgust remains quite adaptive in old age.

Prosody and Cross-Modal

The robust age effects seen for the Prosody Scale are consistent with past research that has shown a strong relationship between advancing age and a decline in performance on prosody tasks (Hall, 2001; Kiss & Ennis, 2001). In fact, Orbelo et al. (2003) found age-related impairments in emotional prosody recognition that resembled those seen in right-hemisphere damaged patients, and Kiss and Ennis (2001) reported performances by adults over 60 that were 2 SD below those of younger participants. The greater robustness of the findings for the Prosody Scale relative to the Complex Facial Scale is also consistent with results from a study conducted by Borod and coworkers (1998). They investigated emotion recognition abilities across all three communication channels in brain-damaged individuals, and their results indicated that the prosody identification task was the most difficult one for right-hemisphere damaged individuals. Adolphs (2002) also noted the relative difficulty of recognizing emotions from prosody in his review of the neural pathways associated with such processing. He surmised that the greater difficulty associated with emotional prosody processing results in lower statistical power to detect possible impairments, the consequence of which is fewer studies with significant findings (and, subsequently, less insight into the mechanisms behind these processes). This view underscores the need for a standardized measure to detect such impairments. It should again be noted that the strong age effect for the Prosody Scale was independent of fluid abilities.

In his review, Adolphs (2002) emphasized the critical roles of the fronto-parietal cortex and basal ganglia in emotional prosody processing. Raz and coworkers (2003) reported a relationship between age and decreased volume of the striatal nuclei, and these structural changes might explain the larger age-related decline seen for prosody tasks. Furthermore, volume reduction and white matter changes have been found in the anterior regions of older adults' brains and have been associated with decreased executive functioning (including working memory); the prosodic stimuli appear to place greater working memory demands on the examinee than the facial stimuli do. Although unlimited repetition of the prosodic stimuli was allowed (and in fact encouraged when long latencies were observed), the examinee was required to hold the sentence in mind while making a decision regarding response choices. This account is consistent with results from imaging studies that have shown the greatest activation in right frontal regions during prosody tasks (George et al., 1996). Given the strong relationship between performance on the prosody and cross-modal scales, the strong age effect seen for the Cross-Modal Scale was not surprising. However, results showed that age explained a significant amount of additional variance in performance on this scale, which likely can be attributed to the greater complexity of these tasks.

Lexical

Although the lack of age effect for the lexical task contradicts previous research that found an age-related decline in the processing of emotional lexical material (Grunwald et al., 1999), this can be explained by the greater complexity of the lexical tasks used in the study of Grunwald and coworkers. The lexical task on the CATS is quite simple in that it only requires the examinee to make a distinction between positive and negative lexical content. The most difficult aspect of this task was the fact that statements were read with both congruent and incongruent prosodic intonation. In contrast, Grunwald and coworkers used stimuli that involved a wide variety of tasks and a greater number of discrete emotions. In addition, participants were required to make both accuracy and intensity ratings. Although significant age-declines have been seen with respect to emotional prosody, the prosodic component of Subtest 10 did not appear to affect performance on this task, which suggests that the ability to ignore prosody is separate from the ability to comprehend prosody (i.e., participants did not appear to get distracted by the incongruent prosodic intonation).

Latencies

The association between age and correct response latencies was significant for the simple (and early) facial tasks, and these results are believed to represent the relative difficulty for older adults of “getting into set” on the facial tasks. None of the remaining correlations between age and correct response latency were significant, despite significant age effects for accuracy. In contrast, although age was not related to correct response time on prosody tasks, older adults took significantly longer than younger adults to respond to prosody items they answered incorrectly. Given that all participants were found to respond more slowly on incorrect items, this age effect might reflect the relative difficulty for older adults of coping with the working memory demands of these tasks. In other words, the results suggest that when emotional prosody was not recognized immediately upon presentation, there was a greater likelihood that the response would be incorrect, with a greater effect seen in older adults. Many studies have shown that perception of emotion takes place within milliseconds of stimulus exposure, and the current results provide evidence that additional exposure (and thus time allowed for responding) does not improve accuracy. This makes sense from an evolutionary standpoint, as social interactions are usually dynamic, and decisions about non-verbal cues must therefore be made in a split second. As there was not a significant age effect regarding incorrect responses to facial stimuli, and given that there were fewer age effects for accuracy on facial recognition tasks, it stands to reason that facial cues may be more helpful to older adults during social interactions; however, this might depend on the nature of the interaction, given that some expressions were recognized more easily than others.

Conclusions

There are a number of explanations for the differences in results between channels. The most parsimonious explanation is that the facial and lexical tasks were not sensitive enough, and/or the sample size was too small, to detect an effect that does in fact exist (i.e., the study lacked power). Post hoc analyses showed age effects for some discrete emotions but not others; the facial tasks therefore do not comprise homogeneous items, and the items for which there was no age effect may have “diluted” an effect that does exist. Again, this speaks to the power of the current study. The highly robust effects seen for the prosody channel (and cross-modal tasks) warrant further explanation. Overall, the greater working memory demands of prosody recognition, in conjunction with age-related decreases in the volumes of brain regions associated with such demands, might help explain the large age effects seen on prosody tasks. Furthermore, age-related volume reduction of the striatal nuclei, which have been strongly implicated in emotional prosody processing, might also account for the declines in performance seen among older participants. A greater number of brain structures have been implicated in facial emotion processing (in part because different structures have been implicated in the perception of discrete emotions), which might make compensatory strategies more efficient for facial stimuli than for prosodic stimuli. However, such strategies might not be sufficient when the decision-making demands of the task are increased, as was the case for the Complex Facial Scale. Performance on a simple lexical emotion task did not appear to be affected by age, although this task might not have been sensitive enough to detect subtle difficulties in this ability. Examination of response latencies indicated that, for both facial and prosody tasks, age was not related to the amount of time individuals required to respond correctly. However, the longer participants spent on prosody items, the less likely they were to be correct, with a greater effect seen in older adults. In combination, the current findings suggest that, in the absence of brain pathology, older adults might be better able to use facial than prosodic cues to discern non-verbal communication during social interactions, although the degree to which they can rely on facial cues appears to depend on the type of interaction in question, given that some expressions are recognized more easily than others.

Limitations and Strengths of the Study

Results from this study provide useful information regarding age effects on emotion recognition abilities as measured by the items used in the CATS-A and underscore the need for normative data on emotion perception tasks when they are used in clinical populations. There are several limitations of the current study, including the small sample size and the lack of representation at the lower end of the IQ spectrum. Although the power analysis indicated that 60 participants would be sufficient for a regression analysis, this was not an optimal number; nevertheless, some of the results were robust enough that sound conclusions could still be drawn despite the small sample size. In addition, past research, as well as post hoc analyses from the current study, suggests that age effects do exist for “simple” facial affect recognition; hence, it is possible that the simpler facial tasks from the CATS were not sensitive enough to detect effects that do in fact exist. These tasks might nonetheless be useful in detecting differing degrees of impairment in patient populations. Finally, despite attempts to recruit participants spanning the entire IQ spectrum, the final sample was skewed towards the high average range. Although preliminary analyses indicated that there was no significant difference in IQ between the age groups, it remains unclear whether below-average IQ is associated with decreased performance on the CATS. It is unlikely that this limitation weakened the power of the current study; however, normative data should be collected on a sample that is normally distributed with respect to IQ.
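
For readers unfamiliar with such calculations, an a priori power analysis of the kind mentioned above can be approximated with Cohen's f-squared framework for multiple regression. The effect size and number of predictors in the sketch below are illustrative assumptions, not the values used in the study.

    # Rough sketch of an a priori power calculation for multiple regression,
    # using the noncentral F distribution; f2 and n_predictors are assumed values.
    from scipy.stats import f as f_dist, ncf

    def regression_power(n: int, n_predictors: int, f2: float, alpha: float = 0.05) -> float:
        """Power to detect an overall effect of size f^2 with n participants."""
        df_num = n_predictors
        df_den = n - n_predictors - 1
        ncp = f2 * n  # noncentrality parameter in Cohen's formulation
        f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
        return 1 - ncf.cdf(f_crit, df_num, df_den, ncp)

    # e.g., an assumed medium-to-large effect (f2 = 0.25) with three predictors:
    # regression_power(60, 3, 0.25) is roughly 0.9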

Summary

The current study provides robust evidence that advancing age is associated with a significant decline in the ability to recognize the emotional expressions of others via prosody and on tasks that incorporate multiple communication channels; this decline in performance appears to be independent of the decline in fluid ability that also accompanies the aging process. In general, participants did not appear to benefit from additional time on prosody tasks, and older adults appeared to be more adversely affected by longer response latencies (possibly due to increased working memory demands). The relationship between advancing age and facial affect perception appears to be moderated by the complexity of task demands. In the absence of cues, older adults seem to have more difficulty than their younger counterparts with this channel (possibly due to increased decision-making demands, which would rely on the integrity of prefrontal brain regions). Furthermore, older adults found it more difficult than younger adults to identify sad, fearful, and surprised expressions, but this effect did not extend to happy, angry, or disgusted expressions. There were fewer relationships between age and response speed on the facial tasks, and those that were observed are believed to reflect the relative difficulty older adults had in becoming acclimated to the task demands. Age was not associated with a significant decline in performance on a simple lexical task, although it is possible that the task administered was not sensitive enough to detect effects that do in fact exist. Taken together, the results regarding response accuracy and response latency on the prosody and facial affect recognition tasks suggest that, in the absence of brain pathology, emotional cues expressed through facial affect may be more useful to older adults than prosodic cues. Future studies should focus on this issue.

From a clinical standpoint, the current findings have important implications, particularly for psychotherapists who treat geriatric patients. Psychotherapeutic interventions could be better honed if referring neuropsychologists were able to make specific recommendations regarding a patient's unique strengths and weaknesses in social communication, which can only be done via objective testing of these skills. Most available tools, however, assess experiential aspects of emotion (Lezak, Howieson, & Loring, 2004; Spreen & Strauss, 1998). Neuropsychologists who have pointed out the need for assessment procedures that evaluate emotional processing also note that methodological and psychometric issues complicate the standardization of potential measures in this area (Nelson & Cicchetti, 1995). In a more recent review of emotion assessment batteries, Borod, Tabert, Santschi, and Strauss (2000) stated that psychometric information, including standardization, norms, reliability, and validity, is largely absent for individual batteries. Although there have been multiple attempts to develop tools to assess emotion perception, few incorporate assessment across the three communication channels, and none has achieved widespread use; those that have shown promising results in preliminary studies are not readily available to clinicians. There is a need for a reliable and valid assessment tool that elucidates the specific nature of impairments in emotion processing, and the CATS shows promise as an instrument that neuropsychologists can use to help guide treatment recommendations.

Conflict of Interest

S.G.S. and K.B.F. are both authors of the Comprehensive Affect Testing System (CATS & CATS-A). S.G.S. will not benefit in any way from future sales of the CATS. K.B.F. will receive financial remuneration from net profits obtained via sales of CATS software.

Acknowledgements

The authors would like to thank William Froming, PhD, for his invaluable advice regarding methodology and statistics, and William Barr, PhD, for his editorial contributions. This study was carried out at the Pacific Graduate School of Psychology.

References

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169-177.
Annett, M. (1967). The binomial distribution of right, mixed and left handedness. Quarterly Journal of Experimental Psychology, 19, 327-333.
Beck, A., & Steer, R. (1993). The Beck Anxiety Inventory manual. San Antonio, TX: The Psychological Corporation.
Beck, A., Steer, R., & Brown, G. (1996). Beck Depression Inventory-II. San Antonio, TX: The Psychological Corporation.
Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C., et al. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12, 446-458.
Borod, J. C., Tabert, M. H., Santschi, C., & Strauss, E. H. (2000). Neuropsychological assessment of emotional processing in brain-damaged patients. In J. C. Borod (Ed.), The neuropsychology of emotion (pp. 80-103). New York, NY: Oxford University Press.
Broks, P., Young, A. W., Maratos, E. J., Coffey, P. J., Calder, A. J., Isaac, C. L., et al. (1998). Face processing impairments after encephalitis: Amygdala damage and recognition of fear. Neuropsychologia, 36, 59-70.
Calder, A. J., Keane, J., Manly, T., Sprengelmeyer, R., Scott, S., Nimmo-Smith, I., et al. (2003). Facial expression recognition across the adult life span. Neuropsychologia, 41, 195-202.
Damasio, A. (1994). Descartes' error: Emotion, reason and the human brain. New York, NY: Grosset/Putnam.
Ekman, P. (1994). Strong evidence for universals in facial expressions. Psychological Bulletin, 115, 268-287.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. (1997). Structured clinical interview for the DSM-IV Axis I Disorders, Clinical Version. New York, NY: American Psychiatric Press.
Fischer, H., Sandblom, J., Gavazzeni, J., Fransson, P., Wright, C. I., & Backman, L. (2005). Age-differential patterns of brain activation during perception of angry faces. Neuroscience Letters, 386, 99-104.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189-198.
Froming, K., Levy, M., Schaffer, S., & Ekman, P. (2006). The Comprehensive Affect Testing System. Psychology Software, Inc.
George, M. S., Parekh, P. I., Rosinsky, N., Ketter, T. A., Kimbrell, T. A., Heilman, K. M., et al. (1996). Understanding emotional prosody activates right hemisphere regions. Archives of Neurology, 53, 665-670.
Goleman, D. (1995). Emotional intelligence. New York, NY: Bantam.
Gross, J. J., Carstensen, L. L., Pasupathi, M., Tsai, J., Goetestam-Skorpen, C., & Hsu, A. (1997). Emotion and aging: Experience, expression, and control. Psychology and Aging, 12, 590-599.
Grunwald, I. S., Borod, J. C., Obler, L. K., Erhan, H. M., Pick, L. H., Welkowitz, J., et al. (1999). The effects of age and gender on the perception of lexical emotion. Applied Neuropsychology, 6, 226-238.
Gunning-Dixon, F. M., Gur, R. C., Perkins, A. C., Schroeder, L., Turner, T., Turetsky, B. I., et al. (2003). Age-related differences in brain activation during emotional face processing. Neurobiology of Aging, 24, 285-295.
Gur, R. C., Erwin, R. J., Gur, R. E., Zwil, A. S., Heimberg, C., & Kraemer, H. C. (1992). Facial emotion and discrimination: II. Behavioral findings in depression. Psychiatry Research, 42, 241-251.
Hall, S. J. (2001). The perception of facial, prosodic, and lexical emotion across the life span. Dissertation Abstracts International: Section B: The Sciences and Engineering, 61, 6736.
Heilman, K. M., Blonder, L. X., Bowers, D., & Crucian, G. P. (2000). Neurological disorders and emotional dysfunction. In J. C. Borod (Ed.), The neuropsychology of emotion (pp. 367-412). New York, NY: Oxford University Press.
Heimberg, C., Gur, R. E., Erwin, R. J., Shtasel, D. L., & Gur, R. C. (1992). Facial emotion and discrimination: III. Behavioral findings in schizophrenia. Psychiatry Research, 42, 253-365.
Keane, J., Calder, A. J., Hodges, J. R., & Young, A. (2002). Face and emotion processing in frontal variant frontotemporal dementia. Neuropsychologia, 40, 655-665.
Kemper, T. L. (1994). Neuroanatomical and neuropathological changes during aging and in dementia. In M. L. Albert & E. J. E. Knoepfel (Eds.), Clinical neurology of aging (2nd ed., pp. 3-67). New York, NY: Oxford University Press.
Kiss, I., & Ennis, T. (2001). Age-related decline in perception of prosodic affect. Applied Neuropsychology, 8, 251-254.
Lezak, M., Howieson, D., & Loring, D. (2004). Neuropsychological assessment (4th ed.). New York, NY: Oxford University Press.
Mather, M., Canli, T., English, T., Whitfield, S., Wais, P., Ochsner, K., et al. (2004). Amygdala responses to emotionally valenced stimuli in older and younger adults. Psychological Science, 15, 259-263.
McDowell, C. L., Harrison, D. W., & Demaree, H. A. (1994). Is right hemisphere decline in the perception of emotion a function of aging? International Journal of Neuroscience, 79, 1-11.
Moreno, C., Borod, J. C., Welkowitz, J., & Alpert, M. (1993). The perception of facial emotion across the adult life span. Developmental Neuropsychology, 9, 305-314.
Nelson, L. D., & Cicchetti, D. V. (1995). Assessment of emotional functioning in brain-impaired individuals. Psychological Assessment, 7, 404-413.
Orbelo, D., Testa, J., & Ross, E. (2003). Age-related impairments in comprehending affective prosody with comparison to brain-damaged subjects. Journal of Geriatric Psychiatry and Neurology, 16, 44-52.
O'Sullivan, M., Jones, D., Summers, P., Morris, R., Williams, S., & Markus, H. (2001). Evidence for cortical “disconnection” as a mechanism of age-related cognitive decline. Neurology, 57, 632-638.
Perry, R. J., & Miller, B. L. (2001). Behavior and treatment in frontotemporal dementia. Neurology, 56, 46-51.
Phillips, L. H., MacLean, R., & Allen, R. (2002). Age and the understanding of emotions: Neuropsychological and sociocognitive perspectives. Journal of Gerontology, 57B, 526-530.
Prodan, C., Orbelo, D., & Ross, E. (2008). Processing of facial blends of emotion: Support for right hemisphere cognitive aging. Cortex, 43, 196-206.
Raz, N., Gunning-Dixon, F., Head, D., Dupuis, J., & Acker, J. (1998). Neuroanatomical correlates of cognitive aging: Evidence from structural magnetic resonance imaging. Neuropsychology, 12, 95-114.
Raz, N., Rodrigue, K., Kennedy, K., Head, D., Gunning-Dixon, F., & Acker, J. (2003). Differential aging of the human striatum: Longitudinal evidence. American Journal of Neuroradiology, 24, 1849-1856.
Ruffman, T., Henry, J., Livingstone, V., & Phillips, L. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32, 863-881.
Schaffer, S. G., Gregory, A., Froming, K. B., Levy, C. M., & Ekman, P. (2006). Emotion processing: The Comprehensive Affect Testing System User's Manual. Sanford, FL: Psychology Software, Inc.
Shimokawa, A., Yatomi, N., Anamizu, S., Torii, S., Isono, H., Sugai, Y., et al. (2001). Influence of deteriorating ability of emotion comprehension on interpersonal behavior in Alzheimer's type dementia. Brain and Cognition, 47, 423-433.
Spreen, O., & Strauss, E. (1998). A compendium of neuropsychological tests. New York, NY: Oxford University Press.
Stefanatos, G. A., & Wasserstein, J. (2001). Attention deficit/hyperactivity disorder as a right hemisphere syndrome: Selective literature review and detailed neuropsychological case studies. Annals of the New York Academy of Sciences, 931, 172-195.
Sullivan, S., & Ruffman, T. (2004). Emotion recognition deficits in the elderly. International Journal of Neuroscience, 114, 402-432.
Townshend, J. M., & Duka, T. (2003). Mixed emotions: Alcoholics' impairments in the recognition of specific emotional facial expressions. Neuropsychologia, 41, 773-782.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
West, M. J. (1993). Regionally specific loss of neurons in the aging human hippocampus. Neurobiology of Aging, 14, 287-293.