Abstract

Patients' cognitive complaints and their subsequent performance on neuropsychological tests often fail to relate. This could, in part, be caused by a Babylonic incongruence between laypeople's and experts' use of cognition words or “jargon.” The present study examined the concurrency between experts and laypeople for 18 neuropsychological tests in the cognitive domains “language,” “memory,” “attention/concentration,” “perception,” and “thinking” (executive functioning). This was done by correlating the classifications of the laypeople and experts for individual tests and within each domain. A high domain concurrency was found, indicated by domain correlations between laypeople's and experts' classifications ranging from rs=.79 to .92, with the exception of the domain “attention” (rs=.32). Importantly, with respect to the classification of each individual test in a cognitive domain, large variations in correlations were found, ranging from rs=.30 up to rs=1.0. These results indicate that there is agreement between the concepts laypeople use and the theory-based concepts of the experts. Our study also offers valuable insight for clinical practice: tests with a high correlation should be used to aid the clarity of communication, for instance when giving feedback on performance.

Introduction

Traditionally, neuropsychological assessment has focused on assisting in the diagnosis of behavior and cognitive functioning in relation to cerebral dysfunction. As neuropsychological assessment typically starts with a detailed assessment of patients' cognitive complaints, much emphasis is placed on these complaints. Both laypeople and experts make use of “cognition words” to describe or verify performance in the cognitive domains, such as discussing difficulty in “remembering” or “thinking.” The specific terms that patients use are “translated” by experts to formulate (working) hypotheses on cognitive impairment and to design a suitable test battery. Moreover, expert terminology such as the concepts “memory,” “attention,” or “executive functioning” is applied in communication between experts and patients, for instance when the expert provides feedback on the results of a neuropsychological assessment. In doing so, it is assumed that experts and laypeople use the same language with respect to cognitive concepts. The often “simple” cognition words used to describe a cognitive concept, for instance “memory,” offer safety in apparent simplicity but can be misleading with respect to the underlying complexity of the concept. The assumption that the concepts used by experts and laypeople are the same has, to our knowledge, never been studied, even though there are signals that these concepts might differ. Studies designed to look into the relation between laypeople's complaints and their cognitive performance often fail to demonstrate clear associations. Explanations for this weak relationship between cognitive complaints and performance on neuropsychological assessment are lacking at present (e.g., Jungwirth et al., 2004; Mol, van Boxtel, Willems, & Jolles, 2006; Reid & Maclullich, 2006).

In recent decades, neuropsychological assessment has been used increasingly to assist in questions about everyday cognitive (dis)abilities, for example, a person's suitability for a rehabilitation program, or the ability to go to work or manage finances (e.g., Kalechstein, Newton, & van Gorp, 2003). The extent to which performance on neuropsychological tests corresponds to real-world performance has therefore been stressed (“ecological validity,” see Burgess, Alderman, Evans, Emslie, & Wilson, 1998; Chaytor & Schmitter-Edgecombe, 2003). Test batteries such as the Behavioral Assessment of Dysexecutive Syndrome (BADS; Wilson, Alderman, Burgess, Emslie, & Evans, 1996), the Rivermead Behavioral Memory Test (RBMT; Wilson, Cockburn, & Baddeley, 1985), and the Test of Everyday Attention (TEA; Robertson, Ward, Ridgeway, & Nimmo-Smith, 1994) have been specifically developed to fit this purpose. In order to narrow the gap between clinical practice and performance in the real world, ecologically valid tests were designed to represent real-world equivalents. Despite these efforts, a gap between laypeople and experts, and thus between the real world and clinical practice, can persist. What neuropsychological tests measure in the eyes of laypeople has yet to be studied.

One of the questions largely overlooked with respect to the lack of concordance between complaints and subsequent test performance is whether the theoretical constructs of cognitive domains used by experts are concordant with the intuitive knowledge that laypeople have about these cognitive abilities. Given the demands that patient care nowadays places on clinicians, it is extremely valuable for the clinical practitioner to know whether the concepts experts apply are transferable to their patients. Subsequently, to aid communication with patients, information is needed on which tests best serve as representative examples of a certain cognitive function. For that reason, the present study investigated the concurrency between theory-based cognitive concepts and laypeople's subjective assessment of these concepts for 18 commonly employed neuropsychological tests.

Methods

Participants

Twenty-eight laypeople of Caucasian descent, 15 of them men, were included in the present study. They were healthy control participants in a larger study on cognition after stroke at the University Medical Center Utrecht, the Netherlands. Recruitment was through word of mouth and advertisements. The laypeople had to be functionally independent and Dutch-speaking. Exclusion criteria were a psychiatric or neurological disorder that could influence cognitive functioning and a history of alcohol and/or substance abuse. Mean age was 63.4 ± 7.5 years. Median level of education was 6 (range 1–7; Verhage, 1964), which corresponds to 13.7 ± 3.1 years of education. The participants were reimbursed for their travel expenses. An expert group, also of Caucasian descent, was recruited, consisting of 14 experienced clinical neuropsychologists (three men, mean age 37.2 ± 10.1, level of education 7, corresponding to at least 18 years of formal education). The two groups were not matched on demographic variables such as age, gender, and level of education, but were chosen to represent the actors in daily clinical practice as closely as possible.

Procedure

All participants were required to classify 18 commonly used neuropsychological tests into one or more predefined cognitive domains. To acquaint the laypeople with the tests, this group first completed the test battery, which took approximately 2 h. Identical to clinical practice, test administration was preceded by a short structured interview in which possible cognitive complaints were recorded. These questions were organized by cognitive domain and introduced as such. The experts were not required to perform the tests, as they were familiar with all 18 of them. The expert group was kept naïve regarding the purpose of their classifications to prevent them from anticipating the laypeople's classifications.

Eighteen small cards were designed containing a pictorial representation and a two-word description of each test (Fig. 1). All participants were presented with these cards one at a time and were asked to indicate, for each test, in which of five predefined cognitive domains (see below) they thought the test would fit. If they could not classify a test into a predefined domain, this response was recorded as “no idea.” Classification in multiple domains was permitted.

Fig. 1.

Two examples of the stimuli presented to the participants.


Measures

Eighteen commonly used neuropsychological tests were chosen, covering five major cognitive domains: “language,” “memory,” “attention,” “perception,” and “executive functioning” (Lezak, Howieson, & Loring, 2004). Neuropsychological tests often tap multiple cognitive domains, but it is generally accepted that each test also measures a specific or “key” cognitive function in particular. For example, a word-learning test may involve memory, language functions, and attention, but the “key” concept is memory. Moreover, tests exist that are known to tap more than one domain, such as verbal fluency tests, implicated as reflecting both language abilities and executive functioning. The five domains were subsequently explained to all participants using the terminology: “language,” “memory,” “attention or concentration,” “perception,” and “thinking.” To convey executive functioning to laypeople, several candidate terms were considered. Each of these captured one aspect of executive functioning, whereas the aim was a single description covering the concept as closely as possible; most candidates were therefore ruled out. We decided on the term “thinking” because this is the term usually used by laypeople to refer to executive functioning in everyday life.

The following tests were included (domains indicated based on the key concepts described by Lezak et al., 2004, or, if not available, on the test manual). The instructions of the test manuals were used in the administration of all tests to reflect clinical practice as closely as possible.

  • “Language”: National Adult Reading Test (Dutch version), Token Test (short form), verbal fluency using the N, A (both 1 min), and animals (2 min), Boston Naming Test (short form).

  • “Memory”: Rey Auditory Verbal Learning Test, Location Learning Task, Paired Associates of the Wechsler Memory Scale III (WMS-III), Story recall of the RBMT, Visual Association Test.

  • “Attention”: Digit Symbol (Wechsler Adult Intelligence Scale third edition [WAIS-III]), Visual Elevator of the TEA, Digit Span of the WAIS-III, Corsi Block-Tapping Test.

  • “Perception”: Star Cancellation Test, Line Bisection Test, Judgment of Line Orientation (short form).

  • “Executive functioning”: Rule Shift Cards and Key Search of the BADS.

Statistical Analysis

Raw classifications were recorded per individual for each single test and the corresponding domain or domains. This results in a score per test for each domain ranging from 0 (no participant classified this test in this domain) to 28 (laypeople) or 14 (experts). The number of classifications was compared between the two groups with a Mann–Whitney U-test. To compare the two groups more directly, percentages of the classifications were also computed. Linear regression analyses were performed to examine the potential influence of age and level of education on the percentage of correct classifications (i.e., the key domain was selected, based on the key concepts described by Lezak and colleagues, 2004, or, if not available, on the test manual).

Since classification in multiple domains was permitted (i.e., a person could classify a test in 0–5 cognitive domains), the classifications were subsequently weighted to correct for multiple classifications. This correction was performed by calculating the weighted fraction per individual for each test, dividing each classification by the total number of classifications that that person gave for that particular test. For instance, if a person classifies a test in both the domain “attention” and the domain “memory,” this test contributes 0.5 to both domains. If a test is classified in three domains in total, it contributes 0.33 to each domain. If only one classification is given, that is, a participant fits a certain test in only one domain, this classification receives a weight of 1 for that test for that domain. The fractions were then summed for each test and averaged over the participants within a group. This results in a mean fraction per test, per domain, and per group.
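The weighting scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual analysis script; the data layout and the function name `weighted_fractions` are assumptions made for the example.

```python
from collections import defaultdict

def weighted_fractions(classifications):
    """Mean weighted fraction per (test, domain) for one group.

    `classifications` maps each participant to {test: set of chosen domains},
    e.g. {"p1": {"Token Test": {"language", "attention"}}} (layout assumed
    for this sketch). Each classification is weighted by 1 / (number of
    domains chosen for that test), so a test placed in two domains
    contributes 0.5 to each, as described in the text.
    """
    totals = defaultdict(float)  # (test, domain) -> summed fractions
    n = len(classifications)
    for person in classifications.values():
        for test, domains in person.items():
            if not domains:      # a "no idea" response contributes nothing
                continue
            weight = 1.0 / len(domains)
            for domain in domains:
                totals[(test, domain)] += weight
    # Average the summed fractions over all participants in the group
    return {key: total / n for key, total in totals.items()}
```

For example, if one participant places a test only in “language” and a second places it in both “language” and “attention,” the mean fractions become (1 + 0.5)/2 = 0.75 for “language” and 0.5/2 = 0.25 for “attention.”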

Correlations were calculated between the mean fractions per group per domain (domain correlations, indicated by Spearman's r for each domain). Correlations between the two groups for the mean fractions per test were also calculated (test correlations, indicated by Spearman's r for each test).
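As an illustration of how such rank correlations are computed, a plain-Python Spearman coefficient is sketched below; in practice this is available in standard statistics packages, and the function name `spearman_r` is our own for this example.

```python
def spearman_r(x, y):
    """Spearman's rank correlation: Pearson's r computed on ranks,
    with tied values receiving their average rank."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            # extend j over any run of tied values
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average 1-based rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A domain correlation in the analysis above would pass the 18 mean fractions of the laypeople as one vector and those of the experts as the other; a test correlation would pass the five domain fractions of each group for a single test.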

Results

The laypeople made use of a median number of 24 (range 19–36) classifications for the 18 tasks, whereas in the expert group the median number of classifications (Mdn=49, range 18–69) was significantly higher (U=62, p<.001, r=−.55). The laypeople, giving 700 classifications in total, classified most tests as belonging to the domains “perception” (28%), “attention” (24%), or “memory” (23%). The 14 experts gave 609 classifications in total and classified the tests mostly in the domains “attention” (26%), “memory” (23%), and “perception” (22%). Within the group of laypeople, linear regression analyses per domain revealed no significant relation between age or level of education and the percentage of correct classifications (all ps > .1).

Table 1 shows the (unweighted) percentages of classifications per test for both groups separately. The expert group showed a tendency to classify tests as belonging to multiple domains. The experts also showed a tendency to classify most tests as belonging to (at least) the domain “attention.” This tendency was present to a lesser degree in the laypeople.

Table 1.

Un-weighted classifications of 18 neuropsychological tests for laypeople and experts

Test Language Memory Attention Perception Thinking
BADS: Key Search 57 50 14 100 96 
BADS: Rule Shift Cards 36 11 78 75 50 32 93 32 
Judgment of Line Orientation 57 93 96 28 14 
Line Bisection 50 11 93 89 
Star Cancellation 78 21 93 100 14 
TEA: Visual Elevator 50 78 54 57 36 50 43 
Digit Symbol (WAIS-III) 43 86 46 57 75 43 25 
Corsi Block-Tapping Test 14 93 50 86 68 57 39 28 
Digit Span (WAIS-III) 50 93 64 78 57 28 14 28 14 
Visual Association Test 14 93 54 64 14 64 68 14 
RBMT Story recall 64 18 93 89 64 43 14 28 
WMS Paired Associates 71 54 93 82 64 14 21 21 11 
Location Learning Test 93 68 57 36 71 61 21 
Rey AVLT 43 100 93 78 32 21 28 
Verbal fluency (N, A, animals) 71 43 78 36 50 36 71 39 
Boston naming test (short form) 100 43 50 21 71 57 11 
Token test (short form) 78 21 28 50 64 64 25 43 18 
National Adult Reading Test 93 96 36 11 43 18 36 36 

Notes: Data are presented as the percentages. E, experts; L, laypeople; BADS, Behavioral Assessment of Dysexecutive Syndrome; TEA, Test of Everyday Attention; WAIS-III, Wechsler Adult Intelligence Scale 3rd Edition; RBMT, Rivermead Behavioral Memory Test; WMS, Wechsler Memory Scale; AVLT, Auditory Verbal Learning Test.

To correct for group size and number of classifications, weighted fractions were calculated for each test for each domain for the two groups. These fractions are presented in Fig. 2 (exact fractions are given in the Appendix). Spearman's correlation coefficients were calculated between the weighted fractions of the laypeople and the experts. These correlations were calculated for each individual test and for each domain and indicate the degree of concurrency between the classifications made by laypeople and experts. The correlations per domain (in the vertical direction of Fig. 2) show that for the domains “perception,” “language,” “memory,” and “thinking,” the correlations ranged from rS=.79 to rS=.92, whereas for the domain “attention” no significant relationship was found.

Fig. 2.

Proportions of the classifications per domain per test for the experts (filled circles) and laypeople (open circles). Key, BADS Key Search; RuleCh, BADS Rule Shift Cards; JULO, Judgment of Line Orientation; LineB, Line Bisection; Star, Star Cancellation test; VisEl, TEA Visual Elevator; SymSub, WAIS-III Digit Symbol; Corsi, Corsi Block-Tapping Test; DigitSp, WAIS-III Digit Span; VAT, Visual Association Test; Stories, RBMT Story recall; WordP, WMS Paired Associates; LLT, Location Learning Task; RAVLT, Rey Auditory Verbal Learning Test; Fluency, Verbal Fluency; Boston, Boston Naming Test; Token, Token Test; NART, National Adult Reading Test. The boxes indicate the classification according to Lezak and colleagues (2004) or the test manual.


Correlations between the two groups at the level of the individual tests (in the horizontal direction of Fig. 2) revealed that the Paired Associates Test, the Location Learning Task, and the Line Bisection Test showed the greatest inter-rater concordance (Paired Associates rS=1.00, p<.001; Location Learning rS=.98, p<.01; Line Bisection rS=.92, p<.05). The Token Test showed the lowest concordance (rS=.30). The boxes in Fig. 2 indicate the classifications based on Lezak and colleagues (2004), showing similarities between the experts' classification patterns and the theory-based classifications.

Discussion

In this study, we explored the concurrency between laypeople and experienced clinical neuropsychologists on the domain classifications of neuropsychological tests in the domains “memory,” “thinking,” “attention,” “perception,” and “language.” The demographics of the two groups were chosen to closely reflect the daily clinical practice, where the patient population is on average middle-aged and of average education, whereas the practitioners are younger and highly educated. Hence, the laypeople in this study were older and had a lower level of education than the expert group.

Overall, there was a high correlation (rS=.79 to .92) between laypeople and experts in both the unweighted and the weighted classifications of the 18 neuropsychological tests. These results indicate that there is indeed concurrency between the concepts laypeople use and the theory-based concepts of the experts. This suggests that the divergence found between cognitive complaints and subsequent performance on cognitive tasks, as described in the Introduction, is not due to a difference in concept utilization between the groups. The divergence could instead be caused by the fact that the “normal” self-awareness of performance of laypeople might implicitly differ from what is assumed in clinical practice, where normal awareness is thought to be reflected by test results within the normal range. This issue has also been studied by our group (Schoo et al., in press), reporting that laypeople, even though they overestimate, are capable of estimating their own performance on specific cognitive domains after confrontation with a neuropsychological test battery. Based on these results, we suggest that future studies look into alternative explanations for this discrepancy, such as depression (Austin, Mitchell, & Goodwin, 2001) or psychological distress in relation to pain (Hart, Wade, & Martelli, 2003).

Notably, in the aforementioned awareness study by our group, no self-awareness was found for the domain “attention.” This is in line with the findings of the current study, where the classification patterns of the two groups for the domain “attention” did not relate significantly. The current results indicate that even though both groups rated a great variety of tests as belonging to this domain (laypeople: 24%, or 168 of 700 total classifications; experts: 26%, or 160 of a total of 609 classifications), there was no significant overlap in the pattern of these ratings between the groups. This supports the assumption that the cognitive concept “attention” might be hard to grasp. In the neuropsychological literature, attention is considered not a unitary construct but an umbrella term under which the cognitive constructs working memory, top-down sensitivity control, competitive selection, and automatic bottom-up filtering for salient stimuli are placed (Knudsen, 2007). Based on our results and this multidimensionality of “attention,” care should be taken in clinical practice when interpreting complaints in this domain.

One could argue that all neuropsychological tests undeniably involve multiple cognitive functions, rather than specific tests tapping specific singular functions. Lezak and colleagues (2004) comment on this issue by stating that, to be neuropsychologically meaningful, a test score should in theory represent a certain cognitive function as specifically as possible. In practice, however, tests are inherently multidimensional, since even simple input and output demands require various cognitive processes. Therefore, multidimensionality in neuropsychological assessment is common, even though a test can measure a specific cognitive domain (Lezak et al., 2004). When participants are asked to classify tests into domains, this multidimensionality can therefore lead to multiple classifications. Indeed, the classifications in our study show a tendency toward categorization in multiple cognitive domains, suggesting that the experts, and to a lesser extent the laypeople, are capable of discerning the multiple cognitive aspects measured by specific tasks. This suggests that the domain-specific assessment of cognitive complaints, which is routine in everyday clinical neuropsychology, is understandable for laypeople. The fact that experts classify the tests in a wider variety of domains than the laypeople do could reflect the experts' awareness of the multidimensionality of the tests, as well as their representation of cognitive functions in broader theoretical concepts rather than specific tests.

Despite the high concurrency found for four of the five domains, there were also notable differences at the level of the individual tests. Large variation was found, both between the two groups and among individual participants, in the extent to which tests were classified in one specific domain or in several domains. As a result, concurrency between the rating patterns of the two groups ranged from low for the Token Test to high for the Paired Associates (WMS-III). Tests with high concurrency appear to be more representative of a certain cognitive domain than tests with lower concurrency. Consequently, tests for which a high concurrency of classifications is found between experts and laypeople, such as the Paired Associates (WMS-III), are suitable as examples when conveying the results of a neuropsychological assessment to patients.

The present study also shows that within the domain “attention,” no single test or group of tests was found with adequate concordance between the two groups. Although this may sound like a disadvantage, tests for which low concurrency is found could be particularly useful when decreased effort or limited symptom validity is suspected. Most symptom validity tests are specifically developed to detect domain-specific malingering, for instance the Test of Memory Malingering (Tombaugh, 1996). Using the concurrency information of our study, tests could be selected for symptom validity checks that, in the eyes of laypeople, are low in domain specificity and do not show a clear domain load, such as the Token Test. Feigning low performance on tests with such a low domain load could be less straightforward than on tests whose domain specificity is more obvious, reducing the chance of successful feigning.

Limitations of the present study include the relatively small sample size. However, given the challenges the field of neuropsychology currently faces, the results of this study do indicate that this issue should be studied in more detail. Since this study aimed to stay close to current clinical practice, two groups had to be included that were relatively far apart in age and education. The older laypeople could be thought to interpret concepts such as attention in a manner that was relevant in the era in which they grew up. Since the younger group of experts was raised in a different era, concept interpretation could vary between the two groups on this basis alone. In the current study, however, neither age nor education played a role. The findings of differences in conceptualization between experts and laypeople might also have cross-cultural implications, as concepts of cognition might be influenced by cultural heritage. In line with this assumption, one could ask whether individuals whose understanding is closer to that of the experts perform better on tests than those whose knowledge is not in line with the experts'.

The current study operates under the assumption that the tests measure what they were designed for. As mentioned before, (neuro)psychologists often interpret test results for the assessment of more than one domain, which could be an indication that both sensitivity and specificity might be lacking. Moreover, in clinical practice it frequently occurs that laypeople seek aid for a generalized cognitive deficit while labeling it as a failure of memory or attention. The use of domains themselves is a matter of discussion within the field of experts, even though for many (neuro)psychologists cognitive tests represent operational definitions of cognitive domains.

In sum, our results show a relatively high concurrency between experts' and laypeople's subjective assessment of the domains to which several neuropsychological tests belong, indicating that laypeople are capable of classifying cognition into separate domains. At the same time, at the level of individual tests, a large divergence is found. This offers valuable information for clinical practice when deciding which tests to select for assessment, taking their intended use into account. Based on our results, professionals should exercise caution when discussing results in the domain “attention,” as well as when interpreting patients' complaints in this domain.

Conflict of Interest

None declared.

Appendix

Proportions of classifications per test per domain per group.

Test Group Language Memory Attention Perception Thinking 
Key 0.02 0.23 0.18 0.56 
0.01 0.08 0.90 
RuleCh 0.02 0.10 0.27 0.14 0.46 
0.04 0.56 0.19 0.21 
JULO 0.04 0.27 0.58 0.11 
0.04 0.88 0.09 
LineB 0.29 0.71 
0.09 0.88 0.04 
Star 0.04 0.40 0.51 0.05 
0.11 0.89 
VisEl 0.17 0.36 0.24 0.22 
0.03 0.39 0.23 0.35 
SymSub 0.14 0.41 0.21 0.22 
0.02 0.28 0.56 0.15 
Corsi 0.03 0.38 0.28 0.18 0.08 
0.26 0.44 0.23 0.05 
DigitSp 0.13 0.44 0.25 0.07 0.08 
0.46 0.38 0.06 0.10 
VAT 0.03 0.48 0.21 0.21 0.03 
0.04 0.31 0.08 0.51 0.07 
Stories 0.21 0.44 0.19 0.03 0.09 
0.08 0.65 0.25 0.04 
WordP 0.21 0.43 0.19 0.05 0.06 
0.31 0.56 0.09 0.05 
LLT 0.46 0.18 0.28 0.05 
0.44 0.19 0.38 
RAVLT 0.11 0.46 0.25 0.05 0.08 
0.03 0.79 0.18 
Fluency 0.27 0.26 0.14 0.36 
0.30 0.21 0.23 0.26 
Boston 0.53 0.15 0.06 0.25 0.01 
0.38 0.38 0.52 0.05 
Token 0.40 0.08 0.14 0.23 0.15 
 0.13 0.04 0.55 0.17 0.10 
NART 0.45 0.12 0.13 0.10 0.17 
 0.87 0.09 0.04 0.06 0.07 

Notes: E, experts; L, laypeople; Key, BADS Key Search; RuleCh, BADS Rule Shift Cards; JULO, Judgment of Line Orientation; LineB, Line Bisection; Star, Star Cancellation Test; VisEl, TEA Visual Elevator; SymSub, WAIS-III Digit Symbol; Corsi, Corsi Block-Tapping Test; DigitSp, WAIS-III Digit Span; VAT, Visual Association Test; Stories, RBMT Story recall; WordP, WMS Paired Associates; LLT, Location Learning Task; RAVLT, Rey Auditory Verbal Learning Test; Fluency, Verbal Fluency; Boston, Boston Naming Test; Token, Token Test; NART, National Adult Reading Test.

References

Austin, M. P., Mitchell, P., & Goodwin, G. M. (2001). Cognitive deficits in depression: Possible implications for functional neuropathology. British Journal of Psychiatry, 178, 200–206.

Burgess, P. W., Alderman, N., Evans, J., Emslie, H., & Wilson, B. A. (1998). The ecological validity of tests of executive function. Journal of the International Neuropsychological Society, 4, 547–558.

Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13, 181–197.

Hart, R. P., Wade, J. B., & Martelli, M. F. (2003). Cognitive impairment in patients with chronic pain: The significance of stress. Current Pain and Headache Reports, 7(2), 116–126.

Jungwirth, S., Fischer, P., Weissgram, S., Kirchmeyr, W., Bauer, P., & Tragl, K. H. (2004). Subjective memory complaints and objective memory impairment in the Vienna-Transdanube aging community. Journal of the American Geriatric Society, 52, 263–268.

Kalechstein, A., Newton, T., & van Gorp, W. (2003). Neurocognitive functioning is associated with employment status: A quantitative review. Journal of Clinical and Experimental Neuropsychology, 25, 1186–1191.

Knudsen, E. I. (2007). Fundamental components of attention. Annual Review of Neuroscience, 30, 54–78.

Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.

Mol, M. E. M., van Boxtel, M. P. J., Willems, D., & Jolles, J. (2006). Do subjective memory complaints predict cognitive dysfunction over time? A six-year follow-up of the Maastricht aging study. International Journal of Geriatric Psychiatry, 21(5), 432–441.

Reid, L. M., & Maclullich, A. M. (2006). Subjective memory complaints and cognitive impairment in older people. Dementia and Geriatric Cognitive Disorders, 22, 471–485.

Robertson, I. H., Ward, T., Ridgeway, V., & Nimmo-Smith, I. (1994). The Test of Everyday Attention. Bury St. Edmunds: Thames Valley Test Company.

Schoo, L. A., van Zandvoort, M. J. E., Biessels, G. J., Kappelle, L. J., & Postma, A. (in press). Insight in cognition: Self-awareness of performance across cognitive domains. Applied Neuropsychology.

Tombaugh, T. N. (1996). Test of Memory Malingering. Toronto, Canada: Multi-Health Systems.

Verhage, F. (1964). Intelligentie en leeftijd [Intelligence and age]. Assen, The Netherlands: Van Gorcum.

Wilson, B. A., Alderman, N., Burgess, P. W., Emslie, H., & Evans, J. J. (1996). The Behavioural Assessment of the Dysexecutive Syndrome. Bury St Edmunds: Thames Valley Test Company.

Wilson, B. A., Cockburn, J., & Baddeley, A. D. (1985). Rivermead Behavioural Memory Test. Flempton: Thames Valley Test Company.