Abstract

Conation has been defined as the ability to focus and maintain intellectual energy over time. Prior research has shown that conation contributes to the magnitude of differences in test scores among brain-damaged and non-brain-damaged examinees. The purpose of the current investigation was to determine if conation might similarly account for differences in test scores among performance valid and performance invalid examinees. An archival analysis was therefore carried out on 52 examinees administered the Halstead–Reitan Neuropsychological Battery (HRNB) and several performance validity tests in a medico-legal context. Analyses revealed that conation had no impact on the magnitude of test score differences between groups and that performance invalid examinees scored worse than performance valid examinees on all but one test of the HRNB. These results support the idea that the identification of performance invalidity calls into question the reliability and the validity of all test score interpretations in an evaluation, even those with less conative load.

Introduction

Conation has been defined as the ability to focus and maintain persistent effort throughout task completion (Reitan & Wolfson, 2000, 2004, 2005). According to Reitan and Wolfson, conation is closely related to the concept of intellectual endurance insofar as both conation and intellectual endurance entail the application of intellectual energy over time. Conation, however, is distinct from motivation, which involves the interaction of needs and incentives in the pursuit of goals. Instead, conation is a non-deliberate, underlying ability to apply one's mental energy in a persistent and efficient manner to a task; as such, individuals high in conative ability are able to maintain intellectual energy throughout task completion, whereas those low in conative ability experience a dissipation of intellectual energy over time. Conation may thus be crucial for effective cognitive performance. Specifically, tasks that require longer amounts of time for completion or that involve greater levels of complexity (i.e., tasks with high conative load) may be especially difficult for individuals low in conative abilities (Reitan & Wolfson, 2000, 2004, 2005).

Conation has an extensive history within psychology. According to Boring's seminal review of the early development of psychology, A History of Experimental Psychology (Boring, 1929), conation occupied a central position in several early theoretical conceptualizations of the mind. At the turn of the century, James Ward posited that cognition, conation, and feeling were the three basic components of mental functioning. In subsequent years, Anderson (1934), Warren (1931), Burt (1960), and Kydd and Wright (1986) similarly maintained that the three fundamental aspects of mental functioning were cognition, conation, and affection. Relatively few empirical studies of conation have been conducted. Wild (1928) examined the effect of applying high and low levels of conation to a series of perceptual and cognitive tasks and found that the application of greater conation led to improvement on both simple perceptual and complex cognitive tasks. In addition, Wild (1927) found that the increased application of conation led to improvements on both muscular activity and cognitive performance. Lastly, Richardson (1929) classified children according to temperament, which he believed was a proxy for conation, and found an association between conation and performance on intelligence tests as well as educational attainment.

The study of conation had largely been ignored over the past several decades until the work of Reitan and Wolfson (2000, 2004, 2005). In a series of studies, Reitan and Wolfson examined the influence of conation on the performance of individuals with and without brain damage on several psychological and neuropsychological tests. By examining test performance and ranking the degree to which psychological (Reitan & Wolfson, 2000, 2004) and neuropsychological (Reitan & Wolfson, 2005) tests require conative ability, Reitan and Wolfson found a differential impact of conation on the two groups. Whereas brain-damaged and non-brain-damaged groups scored comparably on tests with low conative load, their performances diverged as the conative load of psychological and neuropsychological tests increased; therefore, brain-damaged groups scored worse than their non-brain-damaged counterparts on tests high in the conative load. Conation thus appears to be a significant factor contributing to the magnitude of differences in test scores among brain-damaged and non-brain-damaged groups.

Given its prominent role, the current study set out to determine whether conation might similarly account for differences in scores between groups exhibiting performance invalidity and those exhibiting a valid performance. Numerous studies have shown that performance invalidity leads to worse scores on neuropsychological tests. For instance, performance invalidity has been found to account for 53% (Green, Rohling, Lees-Haley, & Allen, 2001) and 59% (Meyers, Volbrecht, Axelrod, & Reinsh-Boothby, 2011) of the variance in test scores on fixed batteries. Furthermore, Constantinou, Bauer, Ashendorf, Fisher, and McCaffrey (2005) found that a performance invalid group scored lower on several Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) measures and Halstead–Reitan Neuropsychological Battery (HRNB) scales. In addition, Davis, McHugh, Axelrod, and Hanks (2012) found that a performance invalid group scored significantly worse on 36 of 37 measures in a flexible battery. However, to date, no studies have examined how tests' conative load might influence the discrepancy in performance between performance valid and performance invalid examinees.

One prediction stemming from Reitan and Wolfson's (2000, 2004, 2005) work is that conation and performance validity may be directly related to one another. In this case, differences in neuropsychological test scores between performance valid and performance invalid groups would be expected to increase as the conative load of tests increases, suggesting that tests high in conative load may be more sensitive to performance invalidity than tests low in conative load. Alternatively, a second hypothesis is that conation may be inversely related to performance invalidity. In this case, differences between performance valid and performance invalid groups may be present on low conative load tests and absent on high conative load tests, suggesting that low conative load tests are more sensitive to performance invalidity than high conative load tests.

Several studies exploring differential test scores between performance valid and performance invalid groups support this second prediction. For instance, Heaton, Smith, Lehman, and Vogt (1978) compared the test scores of performance valid and performance invalid groups on the HRNB. Heaton and colleagues (1978) found that the performance valid group scored worse on the Category Test, Trails B, and the Tactual Performance Test, whereas the performance invalid group scored worse on the Finger Tapping Test and Tactile Finger Recognition. As seen in Table 1, the tests on which the performance valid group scored worse tended to require greater conative ability, whereas the tests on which the performance invalid group scored worse required less conative ability. Similarly, Binder and Willis (1991) investigated differential performance on several tests from the HRNB among mild head trauma patients seeking financial compensation. Binder and Willis grouped these patients according to whether they scored low (indicative of performance invalidity) or high (indicative of performance validity) on the Portland Digit Recognition Test (PDRT) and found that the low scoring group performed worse than the high scoring group on Tactile Finger Recognition, Finger Tip Number Writing, and the Finger Tapping Test. As can be seen in Table 1, aside from Finger Tip Number Writing, each of these tests tends to require relatively little conative ability. Importantly though, neither of these studies set out to explicitly address the contribution of conation in explaining differences in scores between performance valid and performance invalid groups. Moreover, Heaton and colleagues (1978) used college students instructed to simulate performance invalidity rather than examinees evaluated in an actual forensic context. Consequently, the generalizability of these studies to the role of conation in explaining differences in test scores between performance valid and performance invalid groups in a medico-legal context remains unclear.

Table 1.

Predicted HRNB rank order of tests by the degree of conative ability required

Variable Conative rank 
Finger Tapping Non-dominant 1 
Finger Tapping Dominant 2 
Bilateral Auditory Stimulation 3 
Bilateral Visual Stimulation 4 
Verbal IQ 5 
Bilateral Tactile Stimulation 6 
Tactile Finger Recognition 7 
Tactile Form Recognition 8 
Seashore Rhythm Test 9 
Speech Sounds Perception 10 
Trail Making Test-A 11 
Performance IQ 12 
Tactual Performance Test-Memory 13 
Tactual Performance Test-Location 14 
Fingertip Number Writing 15 
Trail Making Test-B 16 
Tactual Performance Test-Time 17 
Category Test 18 
Impairment Index 19 

Notes: The number 1 signifies the least conative ability required and 19 the greatest conative ability required. This rank ordering was first described in Reitan and Wolfson (2000, 2004, 2005).

A third possibility is that conation has no impact on the magnitude of the differences in test scores between performance valid and performance invalid groups. In this case, the degree to which a performance invalid group scores worse than a performance valid group should be consistent across tests varying in the conative load. As such, group differences in test scores should be equally likely on low conative load tests and high conative load tests.

Two types of studies support this third prediction. First, several studies have documented inconsistent results regarding the conative load of tests and the likelihood that conation accounts for group differences in test scores. For instance, Mittenberg, Rotholc, Russell, and Heilbronner (1996) assessed test score differences between performance valid and performance invalid groups on the HRNB. Although the performance valid group performed worse on tests of high conative load (Trails B and the Tactual Performance Test), the performance invalid group performed worse on several tests of low conative load (the Finger Tapping Test, number of sensory suppressions, Tactile Finger Recognition), moderate conative load (the Seashore Rhythm Test, the Speech Sounds Perception Test), and high conative load (the Category Test, Finger Tip Number Writing). Similarly, Trueblood and Schmidt (1993) reported that a performance invalid group performed worse than a performance valid group on tests of low (Tactile Finger Recognition), moderate (the Speech Sounds Perception Test, the Seashore Rhythm Test), and high (Finger Tip Number Writing) conative load. Thus, no discernible influence of conation on differences in scores between performance valid and performance invalid groups emerged in either of these studies.

Second, research showing that performance invalid groups score worse than performance valid groups across a wide range of tests also demonstrates that conation does not account for the magnitude of test score differences between groups. Two studies have reported consistently worse scores by performance invalid groups across entire batteries. First, Beetar and Williams (1994) reported that a performance invalid group instructed to fake poor performance on tests scored worse than a performance valid group across all 12 subtests of the Memory Assessment Scales (MAS; Williams, 1991). Second, Constantinou and colleagues (2005) investigated test score differences between performance valid and performance invalid litigants on the WAIS-R and the MAS and found similar results. Specifically, performance invalid litigants performed significantly worse on all composite scales and 6 of 10 subtests of the WAIS-R. Performance invalid litigants also scored worse on the remaining four WAIS-R subtests; however, differences between groups on these four subtests did not achieve statistical significance. In addition, performance invalid litigants scored worse on two summary indices of the HRNB, the General Neuropsychological Deficit Scale and the Halstead Impairment Index. Unfortunately, performance on individual HRNB subtests was not examined in this study. As such, no direct conclusions could be drawn regarding performance across tests varying in conative load.

Given the inconsistency in results regarding the influence of conation, the current study set out to directly evaluate conation's contribution to the magnitude of differences in test scores between performance valid and performance invalid groups. At the broadest level, this analysis enabled the determination of whether performance invalidity led to worse scores across a wide range of tests or whether certain tests were more sensitive to performance invalidity than others. More specifically, we were able to determine whether the conative requirements of tests accounted for the magnitude of test score differences between performance valid and performance invalid groups.

Method

Participants

Following institutional review board (IRB) approval, an archival analysis was carried out on 52 examinees seen for neuropsychological evaluation for medico-legal reasons. All but six of these cases were based on referrals for mild traumatic brain injury (mTBI; n = 46), each of which met the mTBI criteria set by the American Congress of Rehabilitation Medicine (ACRM) Mild Traumatic Brain Injury Committee of the Head Injury Interdisciplinary Special Interest Group (Committee on Mild Traumatic Brain Injury, 1993). The remaining six examinees presented with other medical conditions, diagnosed by their treating neurologist. These diagnoses included fibromyalgia (n = 2), transient ischemic attack (n = 1), sarcoidosis (n = 1), brain stem stroke and major depressive disorder (n = 1), and multiple sclerosis (n = 1).

Materials

Each examinee was administered the HRNB. Four free-standing performance validity tests (PVTs) were also administered during this battery. The PVTs administered included the Test of Memory Malingering (TOMM; Tombaugh, 1996), the Rey 15-Item Test (Rey-15; Rey, 1964), the Victoria Symptom Validity Test (VSVT; Slick, Hopp, Strauss, & Thompson, 1997), and the Word Memory Test (WMT; Green, 2003). For a detailed description of these measures, see Lezak, Howieson, Bigler, and Tranel (2012). The PVT cutoff scores implemented in this study were those commonly used in clinical practice (Table 2).

Table 2.

Cut-off scores for PVT failure

Free-standing SVT Cut-off score 
Rey-15 ≤9 correcta 
TOMM <45 on Trial 2 or Retentionb 
VSVT ≤17 Hard Items correctc 
WMT ≤82.5% correct on Immediate Recall, Delayed Recall or Consistency Indexd 

Notes: SVT = Symptom Validity Test; TOMM = Test of Memory Malingering; VSVT = Victoria Symptom Validity Test; WMT = Word Memory Test.

Not every participant was administered all four PVTs. Most examinees received either all four (n = 37; 71.2%) or three (n = 12; 23.1%) PVTs. Three participants who completed only two PVTs were included as well. Of these three participants, one passed both the WMT and the VSVT and was included, since no other examinee in this data set with a passing performance on both of these PVTs was later identified as performance invalid. The remaining two participants passed both the WMT and the Rey-15; again, these two participants were included because no other participant in this data set who passed both of these PVTs was identified as performance invalid.

Procedure

Each participant was administered the HRNB as part of a medico-legal neuropsychological evaluation. Of particular importance to the purpose of this study were the 19 variables of the HRNB that reflect the level of performance. Reitan and Wolfson (2000, 2004, 2005) originally ranked these 19 variables subjectively according to their conative load, on the basis of their extensive observation of the tests and the kinds of difficulties that individual patients have in performing them. In general, tests high in conative load tend to be most sensitive to brain damage. In the current study, we ranked these 19 variables in the same order as Reitan and Wolfson, as seen in Table 1.

As indicated above, performance valid and performance invalid groups were determined based on performance on a possible total of four PVTs: the TOMM, the WMT, the VSVT, and the Rey-15. Participants were administered anywhere between two and four PVTs. As both the National Academy of Neuropsychology (NAN; Bush et al., 2005) and the American Academy of Clinical Neuropsychology (AACN; Heilbronner et al., 2009) recommend the use of multiple measures of performance invalidity rather than reliance on a single indicator of invalid performance, examinees were identified as performance invalid on the basis of failing two or more PVTs; examinees failing fewer than two PVTs were identified as exhibiting a valid performance. The scores of these two groups were then compared on all 19 ranked HRNB variables to determine the influence of conation on the difference in test scores between the two groups.
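
The grouping rule described above can be summarized in a short sketch. This is only a minimal illustration, not the authors' code; the column names are hypothetical, PVTs that were not administered are treated as missing, and the cutoffs follow Table 2.

```python
# Minimal sketch of the two-or-more-failures grouping rule; column names are hypothetical.
import pandas as pd

def count_pvt_failures(row: pd.Series) -> int:
    """Count failed PVTs for one examinee, skipping PVTs that were not administered (NaN)."""
    failures = 0
    # Rey-15: fail if 9 or fewer items correct
    if pd.notna(row.get("rey15_correct")) and row["rey15_correct"] <= 9:
        failures += 1
    # TOMM: fail if Trial 2 or Retention falls below 45
    if any(pd.notna(s) and s < 45 for s in (row.get("tomm_trial2"), row.get("tomm_retention"))):
        failures += 1
    # VSVT: fail if 17 or fewer hard items are correct
    if pd.notna(row.get("vsvt_hard_correct")) and row["vsvt_hard_correct"] <= 17:
        failures += 1
    # WMT: fail if Immediate Recall, Delayed Recall, or Consistency is at or below 82.5%
    if any(pd.notna(s) and s <= 82.5 for s in (row.get("wmt_ir"), row.get("wmt_dr"), row.get("wmt_cons"))):
        failures += 1
    return failures

def classify_validity(df: pd.DataFrame) -> pd.Series:
    """Two or more PVT failures -> 'invalid'; otherwise 'valid'."""
    return df.apply(lambda r: "invalid" if count_pvt_failures(r) >= 2 else "valid", axis=1)
```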

Results

Demographics

In the overall sample, the average age was 44.63 (SD = 10.51) years, and the average education was 14.21 (SD = 2.93) years. There were 27 men (52%), and 48 examinees (92%) were right-handed. After dividing the sample into performance valid and performance invalid groups, no significant differences were found in age, education, or handedness; however, a significant difference in gender was found between the two groups, χ² = 5.03, p < .05, with more women in the performance valid group than in the performance invalid group (see Table 3 for a complete summary of demographic data across groups). In addition, all six of the non-TBI cases were placed in the performance valid group.

Table 3.

Summary of demographic variables for performance valid and performance invalid groups

 Performance valid (n = 30; mean [SD]) Performance invalid (n = 22; mean [SD]) Test p-value 
Age 43.97 (11.22) 45.55 (9.63) t(50) = 0.53 .60 
Education 14.57 (2.98) 13.73 (2.85) t(50) = 1.02 .31 
Gender 
 Women 18 (60%) 7 (32%) χ² = 4.04 .04* 
 Men 12 (40%) 15 (68%)  
Handedness 
 Right 28 (93%) 20 (91%)  1.00a 
 Left 2 (7%) 2 (9%)  

Notes: Age and Education were run as t-tests; Gender and Handedness were run as χ2 statistics.

aExpected cell means were below 10; therefore, Fisher's exact test was used for these analyses.

*Significant at the p < .05 level.

Comparisons among Groups

Analyses focused on the difference in test scores between performance valid and performance invalid groups across tests ranked according to the degree of conative ability required for successful completion. In order to compare scores across each of the 19 HRNB tests, it was first necessary to place the raw scores of each test on a common scale. Following Reitan and Wolfson (2000, 2004, 2005), raw scores were transformed into McCall's normalized T-scores (McCall, 1939). This was accomplished by transforming raw score distributions for the combined groups on each test into McCall's normalized T-scores and then returning the score from each subject to their original group (i.e., performance valid or performance invalid). All statistical analyses were then carried out on the McCall's normalized T-scores.
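
As a concrete illustration of this transformation, the sketch below converts the combined-group raw scores for a single HRNB variable to normalized T-scores (mean 50, SD 10) by mapping percentile ranks onto the normal curve. This is only a minimal approximation of McCall's (1939) procedure; the handling of variables for which lower raw scores indicate better performance (e.g., time scores) is omitted.

```python
# Minimal sketch of a normalized T-score transformation for one HRNB variable.
import numpy as np
from scipy import stats

def normalized_t(raw_scores: np.ndarray) -> np.ndarray:
    """Map raw scores to T-scores (mean 50, SD 10) via mid-rank percentiles and the normal curve."""
    n = len(raw_scores)
    ranks = stats.rankdata(raw_scores)        # ranks 1..n, with ties receiving average ranks
    percentiles = (ranks - 0.5) / n           # keep percentiles strictly inside (0, 1)
    return 50 + 10 * stats.norm.ppf(percentiles)

# Example: transform the combined groups, then split scores back into their original groups.
# combined_t = normalized_t(np.concatenate([valid_raw, invalid_raw]))
# valid_t, invalid_t = combined_t[:len(valid_raw)], combined_t[len(valid_raw):]
```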

To assess the influence of conation on the discrepancy in test scores between groups, two analytic procedures were implemented. First, t-tests were carried out to determine whether the discrepancy in test scores between groups on any particular test was statistically significant. Table 4 presents the T-score means, standard deviations, t-ratios, and effect sizes comparing the performance valid and performance invalid groups across all 19 HRNB variables (see also Table 5, which presents the mean raw test scores, standard deviations, t-ratios, and effect sizes). To control for multiple comparisons, a Bonferroni corrected α level of 0.0026 was implemented. Inspection of the means from this table reveals that for all but one variable, the performance valid group performed better than the performance invalid group; however, a clear pattern of statistically significant group differences on high versus low conative demand tests did not emerge. Specifically, three tests among the lowest six tests on the conation spectrum achieved significance (Finger Tapping Dominant, Verbal IQ, Bilateral Tactile Stimulation), three tests among the middle six tests on the conation spectrum achieved significance (Seashore Rhythm Test, Trail Making Test-A, Performance IQ), and three tests among the highest seven tests on the conation spectrum achieved significance (Tactual Performance Test-Memory, Tactual Performance Test-Location, Impairment Index). Fig. 1, which presents the means for the two groups graphically, demonstrates more clearly that the discrepancy in test scores remains relatively stable across tests increasing in required conative ability. This analysis provides preliminary evidence that the performance invalid group performed worse than the performance valid group across a broad range of tests and that conation did not influence the extent to which the performance invalid group underperformed its performance valid counterpart.
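
For readers wishing to reproduce this step, the sketch below compares the two groups on a single variable. It is only an illustration; the pooled-SD form of Cohen's d and the Welch-type correction are assumptions, since the paper does not state the exact formulas used.

```python
# Minimal sketch of the per-variable group comparison with a Bonferroni-corrected alpha.
import numpy as np
from scipy import stats

ALPHA_BONFERRONI = 0.05 / 19   # ~.0026 across the 19 HRNB variables

def compare_groups(valid_t: np.ndarray, invalid_t: np.ndarray):
    """Return the t-ratio, p-value, Cohen's d (pooled SD), and a Bonferroni significance flag."""
    # equal_var=False gives a Welch-type corrected t-ratio when homogeneity of variance is violated
    t, p = stats.ttest_ind(valid_t, invalid_t, equal_var=True)
    n1, n2 = len(valid_t), len(invalid_t)
    pooled_sd = np.sqrt(((n1 - 1) * valid_t.var(ddof=1) + (n2 - 1) * invalid_t.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (valid_t.mean() - invalid_t.mean()) / pooled_sd
    return t, p, d, p < ALPHA_BONFERRONI
```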

Table 4.

T-score means, standard deviations, t-ratios, and p-values comparing performance valid and performance invalid groups across all 19 HRNB variables

Test Performance valid (mean [SD]) Performance invalid (mean [SD]) t-ratio (df) p-value Effect size (d) 
1 53.37 (8.71) 45.41 (9.85) 3.08 (50) .003 0.8561 
2 53.87 (8.81) 44.68 (9.13) 3.66 (50) .001* 1.0249 
3 50.38 (6.11) 47.73 (8.15) 1.29 (46) .204 0.3679 
4 48.08 (9.32) 50.34 (8.71) −0.86 (46) .392 −0.25 
5 53.80 (9.42) 44.91 (8.42) 3.51 (50) .001* 0.995 
6 52.97 (5.00) 45.20 (8.83) 3.66 (46) .001*a 1.0819 
7 52.10 (7.83) 45.69 (8.85) 2.71 (48) .009 0.7671 
8 52.02 (8.78) 47.09 (10.73) 1.82 (50) .075 0.5028 
9 53.38 (9.67) 45.34 (8.10) 3.17 (50) .003 0.9013 
10 52.13 (7.71) 46.91 (11.70) 1.94 (50) .058 0.5268 
11 53.60 (9.35) 45.30 (8.03) 3.36 (50) .002* 0.9523 
12 54.53 (9.37) 44.07 (7.45) 4.31 (49) .0001* 1.2357 
13 53.34 (8.09) 45.05 (9.66) 3.26 (47) .002* 0.9304 
14 53.71 (8.34) 45.03 (8.88) 3.49 (47) .001* 1.0076 
15 52.30 (8.92) 46.45 (9.86) 2.21 (49) .032 0.6222 
16 52.18 (10.27) 47.00 (8.95) 1.90 (50) .064 0.5377 
17 51.23 (10.45) 48.31 (9.09) 1.02 (46) .314 0.3029 
18 52.30 (10.70) 46.82 (7.98) 2.02 (50) .048 0.5816 
19 53.77 (9.11) 44.45 (6.62) 4.07 (50) .0002* 1.1704 

Note: The numbers in the Test column refer to the HRNB tests as ranked in Table 1.

aHomogeneity of variance assumption was violated; therefore, corrected t-ratios and p-values are reported.

*Significance at a Bonferroni corrected α level of 0.0026.

Table 5.

Raw score means, standard deviations, t-ratios, and p-values comparing performance valid and performance invalid groups across all 19 HRNB variables

Test Performance valid (mean [SD]) Performance invalid (mean [SD]) t-ratio (df) p-value Effect size (d) 
1 43.47 (8.99) 34.13 (12.03) 3.07 (50) .004a 0.8795 
2 48.24 (10.30) 35.41 (13.63) 3.86 (50) .0003* 1.0612 
3 0.35 (0.75) 0.73 (1.08) −1.40 (46) .170 −0.411 
4 1.85 (2.36) 1.45 (2.32) 0.58 (46) .156 0.1730 
5 102.37 (10.98) 91.23 (11.20) 3.58 (50) .001* 1.0044 
6 0.54 (1.36) 2.86 (3.03) −3.33 (46) .002*a −0.988 
7 1.66 (3.04) 4.19 (3.92) −2.47 (48) .018 −1.167 
8 23.93 (11.80) 31.41 (17.57) −1.84 (50) .072 −0.640 
9 25.67 (3.73) 22.00 (4.29) 3.02 (50) .004 0.8383 
10 5.83 (3.24) 10.14 (7.97) −2.39 (50) .010 −0.708 
11 32.97 (12.28) 57.91 (69.75) −1.93 (50) .060 −0.498 
12 100.52 (12.57) 87.59 (8.23) 4.19 (49) .0001* 1.2170 
13 7.66 (1.32) 6.20 (1.77) 3.31 (47) .002* 0.9351 
14 4.62 (2.14) 2.30 (2.36) 3.57 (47) .001* 1.0298 
15 3.67 (3.13) 5.86 (4.22) −2.13 (49) .038 −0.589 
16 94.43 (66.81) 123.64 (95.93) −1.28 (50) .207 −0.349 
17 19.03 (7.78) 21.27 (8.36) −0.95 (46) .346 −0.277 
18 48.00 (31.09) 65.05 (25.56) −2.10 (50) .041 −0.599 
19 0.35 (0.25) 0.63 (0.21) −4.22 (50) .0001* −1.204 

Note: The numbers in the Test column refer to the HRNB tests as ranked in Table 1.

aHomogeneity of variance assumption was violated; therefore, corrected t-ratios and p-values are reported.

*Significance at a Bonferroni corrected α level of 0.0026.

Fig. 1.

T-scores for performance valid and performance invalid groups on 19 tests from the HRNB arranged in order according to the predicted dependence on conative ability. Please note that the numbers found on the x-axis refer to HRNB tests as outlined in Table 1.


The second analytic procedure implemented was a rank-order coefficient of correlation between the predicted rank ordering of the conative requirement of the 19 tests and the magnitude of the t-ratios found in comparing the two groups across the 19 variables. This analysis provided a more direct test of whether conation influenced the magnitude of test score differences between groups. Analyses revealed a non-significant rank-difference correlation, rs = .01, p = .96, suggesting that conation did not influence the magnitude of differences in test scores between the two groups. Fig. 2 displays the predicted rank ordering of tests plotted against the magnitude of t-ratios.
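
The rank-order analysis can be illustrated with the t-ratios reported in Table 4. The sketch below correlates the predicted conative ranks (Table 1) with the magnitude of those t-ratios and should yield a near-zero coefficient consistent with the value reported above.

```python
# Minimal sketch of the rank-order (Spearman) analysis using the t-ratios from Table 4.
import numpy as np
from scipy import stats

conative_rank = np.arange(1, 20)  # Table 1: 1 = least conative ability required, 19 = greatest
t_ratios = np.array([3.08, 3.66, 1.29, -0.86, 3.51, 3.66, 2.71, 1.82, 3.17, 1.94,
                     3.36, 4.31, 3.26, 3.49, 2.21, 1.90, 1.02, 2.02, 4.07])

rs, p = stats.spearmanr(conative_rank, np.abs(t_ratios))
print(f"rs = {rs:.2f}, p = {p:.2f}")  # a near-zero, non-significant correlation
```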

Fig. 2.

Rank-order coefficient of correlation between the magnitude of t-ratios comparing performance valid and performance invalid groups and the predicted degree to which 19 HRNB tests require conative ability. Please note that the numbers found on the x-axis refer to HRNB tests as outlined in Table 1.


Discussion

The current study examined the contribution of conation to the discrepancy in test scores between examinees exhibiting performance invalidity and those exhibiting a valid performance. Importantly, significant differences between performance invalid and performance valid groups were found on both low and high conative load tests, with performance invalid examinees scoring worse than performance valid examinees on all but one test. In addition, correlations between the degree of conative ability required for successful completion of a test and the magnitude of test score differences between groups revealed that conation had nearly no impact on discrepancies in test scores. Instead, the group of performance invalid examinees consistently performed worse than their performance valid counterparts, and the magnitude of the discrepancy in their scores remained stable across low and high conative load tests. Thus, the current results suggest that conation does not influence test score differences between performance invalid and performance valid groups in a medico-legal context.

These findings are largely consistent with several previous studies examining test score differences between performance valid and performance invalid groups. As stated above, both Mittenberg and colleagues (1996) and Trueblood and Schmidt (1993) found that performance invalid examinees were equally likely to score worse than performance valid examinees on both low and high conative load tests. In addition, Beetar and Williams (1994) and Constantinou and colleagues (2005) demonstrated that performance invalid examinees scored worse than performance valid examinees across whole batteries (i.e., the MAS and the WAIS-R, respectively). The current study extends these findings by showing that performance invalidity results in worse performance on tests tapping multiple cognitive domains across the HRNB. The performance invalid group scored worse than its performance valid counterpart on measures of sensory, motor, memory, executive, and several other higher-level cognitive functions.

In contrast, two previous studies reported results that conflict with the findings of the current study. Specifically, Heaton and colleagues (1978) found that performance invalid examinees tended to perform worse than performance valid examinees on low conative load tests and better on high conative load tests, suggesting an inverse relation between conation and group differences in test scores. Similarly, Binder and Willis (1991) found that patients scoring low on a PVT (i.e., the PDRT) scored worse than patients scoring high on that PVT on low conative load tests.

There are two important differences between the current study and these previous studies that may explain these conflicting findings. First, neither Heaton and colleagues (1978) nor Binder and Willis (1991) specifically set out to investigate the influence of conation on test score differences; therefore, the interpretation of their studies in terms of conation is entirely post hoc and, as such, may represent a spurious finding. Second, Heaton and colleagues used college undergraduates who were instructed to fake bad as their performance invalid group. A number of researchers have questioned the extent to which subjects instructed to fake bad accurately replicate the performance patterns of litigants in a real-life forensic context (Lezak, Howieson, & Loring, 2004; Vickery, Berry, Inman, Harris, & Orey, 2001). Thus, the degree to which these prior studies should generalize to the current study, which used litigants in a medico-legal context, is limited.

The current study is subject to two possible limitations. First, the performance valid and performance invalid groups differed in gender composition, with more women in the performance valid group than in the performance invalid group. As such, test score differences between groups may be a function of either performance validity classification or gender. Given that performance invalidity has been found to account for 53% (Green, Rohling, Lees-Haley, & Allen, 2001) and 59% (Meyers et al., 2011) of the variance in test scores on fixed batteries, it is likely that performance validity classification, and not gender, accounts for the majority of the test score differences in the current study.

Second, the potential to find a significant correlation between conation and differential test scores between groups may have been limited by the small sample size, and thus low power, of this study. However, several features of the current analyses provide reassurance against this concern. For one, correlational analyses were supplemented with t-tests that compared group differences on specific tests. Even with the low power of this study and a Bonferroni corrected α level, these t-tests revealed significant differences on several HRNB variables. Moreover, these significant differences were found on both low and high conative load tests, supporting the conclusion that conation does not influence the magnitude of the discrepancy in test scores between performance invalid and performance valid groups.

For another, the effect size of our correlation was exceedingly small (rs = .01). Thus, even with a larger sample size and greater power, it is unlikely that our correlation would have achieved significance. This contrasts with Reitan and Wolfson's (2000, 2004, 2005) previous studies on conation and the HRNB, in which the effect size of conation in accounting for the magnitude of test score differences between brain-damaged and non-brain-damaged groups was quite large (rs = .84). Ultimately, the effect of conation on the discrepancy in test scores between performance valid and performance invalid groups is likely a trivial one of no clinical significance.

In summary, the current investigation demonstrates that conation does not have an impact on test score differences between performance valid and performance invalid groups in a medico-legal context. Instead, and most importantly, the results suggest that performance invalid examinees score worse, at a relatively consistent magnitude, across nearly all tests of the HRNB. These findings support Heilbronner and colleagues' (2009) and Iverson and Binder's (2000) contention that the reliability and the validity of all test score interpretations are called into question in the presence of performance invalidity. As such, clinicians should exercise caution in making judgments based on data obtained when performance invalidity has been detected, as those data likely underestimate an examinee's true abilities.

Conflict of Interest

None declared.

Acknowledgements

The authors are grateful to Cecil R. Reynolds, former editor of Archives of Clinical Neuropsychology and the current editor of Psychological Assessment, who served as the guest action editor. This manuscript was submitted to the blind peer review process.

References

Anderson, J. (1934). Mind as feeling. Australian Journal of Psychology and Philosophy, 12, 81-94.
Beetar, J. T., & Williams, J. M. (1994). Malingering response styles on the Memory Assessment Scales and symptom validity tests. Archives of Clinical Neuropsychology, 10, 57-72.
Binder, L. M., & Willis, S. C. (1991). Assessment of motivation after financially compensable minor head trauma. Journal of Consulting and Clinical Psychology, 3, 175-181.
Boring, E. G. (1929). A history of experimental psychology. New York: D. Appleton-Century.
Burt, C. (1960). The concept of mind. Journal of Psychological Researches, 4, 54-64.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419-426.
Committee on Mild Traumatic Brain Injury, American Congress of Rehabilitation Medicine (ACRM). (1993). Definition of mild traumatic brain injury. Journal of Head Trauma Rehabilitation, 8(3), 86-87.
Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191-198.
Davis, J. J., McHugh, T. S., Axelrod, B. N., & Hanks, R. A. (2012). Performance validity and neuropsychological outcomes in litigants and disability claimants. The Clinical Neuropsychologist, 26(5), 1-16.
Green, P. (2003). Green's Word Memory Test for Microsoft Windows. Edmonton, Alberta: Green's Publishing.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045-1060.
Grote, C. L., Kooker, E. K., Garron, D. C., Nyenhuis, D. L., Smith, C. L., & Mattingly, M. L. (2000). Performance of compensation seeking and non-compensation seeking samples on the Victoria Symptom Validity Test: Cross-validation and extension of a standardization study. Journal of Clinical and Experimental Neuropsychology, 22(6), 709-719.
Heaton, R. K., Smith, H. H., Lehman, R. A. W., & Vogt, A. T. (1978). Prospects for faking believable deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 46, 892-900.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093-1129.
Iverson, G. I., & Binder, L. M. (2000). Detecting exaggeration and malingering in neuropsychological assessment. Journal of Head Trauma Rehabilitation, 15, 829-858.
Kydd, R. R., & Wright, J. J. (1986). Mental phenomena as changes of state in a finite-state machine. Australian and New Zealand Journal of Psychiatry, 20, 158-165.
Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). Oxford: Oxford University Press.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). Oxford: Oxford University Press.
Loring, D. W., Lee, G. P., & Meador, K. J. (2005). Victoria Symptom Validity Test performance in non-litigating epilepsy surgery candidates. Journal of Clinical and Experimental Neuropsychology, 27, 610-617.
McCall, W. A. (1939). Measurement. New York: Macmillan.
Meyers, J. E., Volbrecht, M., Axelrod, B. N., & Reinsh-Boothby, L. (2011). Embedded symptom validity tests and overall neuropsychological test performance. Archives of Clinical Neuropsychology, 26, 8-15.
Mittenberg, W., Rotholc, A., Russell, E., & Heilbronner, R. (1996). Identification of malingered head injury on the Halstead-Reitan Battery. Archives of Clinical Neuropsychology, 11, 271-281.
Reitan, R. M., & Wolfson, D. (1993). The Halstead-Reitan neuropsychological test battery: Theory and clinical interpretation (2nd ed.). Tucson, AZ: Neuropsychology Press.
Reitan, R. M., & Wolfson, D. (2000). Conation: A neglected aspect of neuropsychological functioning. Archives of Clinical Neuropsychology, 15, 443-453.
Reitan, R. M., & Wolfson, D. (2004). The differential effect of conation on intelligence test scores among brain-damaged and control subjects. Archives of Clinical Neuropsychology, 19, 29-35.
Reitan, R. M., & Wolfson, D. (2005). The effect of conation in determining the differential variance among brain-damaged and nonbrain-damaged persons across a broad range of neuropsychological tests. Archives of Clinical Neuropsychology, 20, 957-966.
Rey, A. (1964). L'examen clinique en psychologie. Paris: Presses Universitaires de France.
Richardson, C. A. (1929). The measurement of conative factors in children and their influence. British Journal of Psychology, 19, 405-412.
Slick, D., Hopp, G., Strauss, E., & Thompson, G. B. (1997). Victoria Symptom Validity Test. Odessa, FL: Psychological Assessment Resources.
Tombaugh, T. N. (1996). The Test of Memory Malingering (TOMM). North Tonawanda, NY: Multi-Health Systems.
Trueblood, W., & Schmidt, M. (1993). Malingering and other validity considerations in the neuropsychological evaluation of mild head injury. Journal of Clinical and Experimental Neuropsychology, 15, 578-590.
Vickery, C. D., Berry, D. T. R., Inman, T. H., Harris, M. J., & Orey, S. A. (2001). Detection of inadequate effort on neuropsychological testing: A meta-analytic review of selected procedures. Archives of Clinical Neuropsychology, 16, 45-73.
Warren, H. C. (1931). In defense of some discarded concepts. Psychological Review, 38, 392-405.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale-Revised. New York: The Psychological Corporation.
Wild, E. H. (1927). Influences of conation on cognition. British Journal of Psychology, 18, 147-167.
Wild, E. H. (1928). Influences of conation on cognition II. British Journal of Psychology, 18, 332-355.
Williams, J. M. (1991). Memory Assessment Scales. Odessa, FL: Psychological Assessment Resources.