Abstract

Picture Completion (PC) indices from the Wechsler Adult Intelligence Scale, Third Edition, were investigated as performance validity indicators (PVIs) in a sample referred for independent neuropsychological examination. Participants from an archival database were included in the study if they were between the ages of 18 and 65 and had been administered at least two PVIs. Performance on the effort measures yielded a group that passed all or failed one measure (Pass; n= 95) and a group that failed two or more PVIs (Fail-2; n= 61). The Pass group performed better on PC than the Fail-2 group. PC cut scores were compared on their accuracy in differentiating the Pass and Fail-2 groups. A PC raw score of ≤12 showed the best classification accuracy in this sample, correctly classifying 91% of Pass and 41% of Fail-2 cases. Overall, PC indices show good specificity but sensitivity too low for exclusive use as PVIs; they demonstrate promise as adjunctive embedded measures.

Introduction

Performance validity assessment has been a substantial area of growth in neuropsychological research (Sweet, King, Malina, Bergman, & Simmons, 2002). The incorporation of stand-alone and embedded measures to assess symptom validity has become a recommended practice among many neuropsychologists (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009; Sharland & Gfeller, 2007). Increasing research attention has focused on the detection of noncredible performance using the standard clinical measures of neuropsychological functioning in addition to the use of stand-alone symptom validity tests. The benefits of utilizing embedded performance validity indicators (PVIs) include efficiency (i.e., no need for administration of additional tests) and, in medicolegal contexts, resiliency to coaching. Additionally, utilization of a number of embedded measures scattered throughout the neuropsychological evaluation allows for sampling a broader range of behavior across a larger time interval, which may be beneficial since effort is not a constant (Boone, 2009). A burgeoning research literature has explored many of the commonly administered neuropsychological measures in an effort to identify embedded measures, indices, and formulas to assist in the detection of noncredible performance. Excellent reviews of this broad area of research are provided in edited volumes by Boone (2007) and Larrabee (2007).

The Wechsler Adult Intelligence Scale, in both the Revised (WAIS-R; Wechsler, 1981) and Third Editions (WAIS-III; Wechsler, 1997), is consistently ranked among the most commonly used measures in neuropsychological evaluations (Camara, Nathan, & Puente, 2000; Rabin, Barr, & Burton, 2005). Not surprisingly, a large body of research has explored the development of embedded measures derived from it. A number of studies have examined the Digit Span (DS) subtest including DS scaled score (Iverson & Franzen, 1996; Trueblood & Schmidt, 1993), Vocabulary minus DS scaled score (Iverson & Tulsky, 2003; Millis, Ross, & Ricker, 1998; Mittenberg et al., 2001; Mittenberg, Theroux-Fichera, Zielinski, & Heilbronner, 1995), and Reliable DS (RDS; Greiffenstein, Baker, & Gola, 1994), which is the sum of the longest spans of digits repeated accurately on both trials of the forward and backward conditions of DS. Cross-validation studies have generally supported the earlier findings (Axelrod, Fichtenberg, Millis, & Wertheimer, 2006; Babikian, Boone, Lu, & Arnold, 2006; Schwarz, Gfeller, & Oliveri, 2006; Whitney, Davis, Shepard, Bertram, & Adams, 2009). Other WAIS-III subtests and indices have also been explored as PVIs including the Processing Speed Index (Etherton, Bianchini, Heinly, & Greve, 2006) and one of its component subtests, Digit-Symbol Coding (Kim et al., 2010), as well as the Working Memory Index (Etherton, Bianchini, Greve, & Heinly, 2005).
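RDS is straightforward to derive from a standard DS administration. A minimal sketch of the computation (the trial data below are illustrative, not patient data from this study):

```python
def reliable_digit_span(forward_trials, backward_trials):
    """Reliable Digit Span (Greiffenstein et al., 1994): the sum of the
    longest span of digits repeated accurately on BOTH trials of a given
    length, computed separately for the forward and backward conditions.

    Each argument maps span length -> (trial1_correct, trial2_correct).
    This data format is an assumption for illustration.
    """
    def longest_both_correct(trials):
        spans = [length for length, (t1, t2) in trials.items() if t1 and t2]
        return max(spans, default=0)

    return longest_both_correct(forward_trials) + longest_both_correct(backward_trials)

# Illustrative protocol: both trials passed up to 6 forward and 4 backward.
forward = {3: (True, True), 4: (True, True), 5: (True, True),
           6: (True, True), 7: (True, False)}
backward = {2: (True, True), 3: (True, True), 4: (True, True), 5: (False, False)}
print(reliable_digit_span(forward, backward))  # 10
```

Under the commonly cited cut score of RDS < 8, this illustrative protocol (RDS = 10) would pass.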

On the WAIS-III, the Picture Completion (PC) subtest involves visual scanning and identification of missing elements in a series of colored pictures of objects and scenes from daily life. PC has been described as a potentially useful measure of premorbid functioning due to its insensitivity to brain damage (Lezak, Howieson, & Loring, 2004). Measures that are relatively insensitive to cerebral insult may be useful as PVIs (e.g., DS). To this end, Solomon and colleagues (2010) investigated PC raw and scaled scores as well as four indices based on item subsets: Most Discrepant Index (six items that showed the greatest group differences in correct responses), Most Discrepant Index-10 (10 items that showed the greatest group differences in correct responses), Rarely Missed Index (nine items that were frequently answered correctly by credible participants), and Rarely Correct Index (nine items that were infrequently answered correctly by noncredible participants). The Most Discrepant Index outperformed the other variables in differentiating noncredible and credible participants, with 65% sensitivity and 93% specificity.

Given the need for additional PVIs (Boone, 2009) and the promising initial results reported by Solomon and colleagues (2010), PC appears to be a viable candidate for inclusion into the performance validity armamentarium. Prior to recommending the clinical use of PC effort indices, however, cross-validation is important to refine the estimates of classification accuracy. The purpose of the current study was to examine the utility of the PC indices proposed by Solomon and colleagues (2010) as embedded effort measures and to cross-validate their findings in a forensic sample composed primarily of individuals reporting a history of mild traumatic brain injury.

Method

Participants

The Wayne State University Human Investigation Committee provided institutional review board approval for this archival project. Review of records from the private practice of one of the authors identified 229 cases involving independent medical examination in the context of civil litigation or disability claims. Participants were included in the study if their ages were between 18 and 65 and if they had completed a neuropsychological evaluation including administration of at least two separate performance validity measures (e.g., RDS, etc.). Exclusion criteria were moderate–severe traumatic brain injury (n= 35), English as a second language (n= 21), lack of WAIS-III PC scaled scores (n= 14), and history of special education (n= 3). Cases involving moderate to severe traumatic brain injury, English as a second language, and history of special education were excluded in an effort to reduce potential confounds related to organic, cultural, and educational issues, respectively. The study sample (N= 156) was 55% women and 90% right-handed. In terms of ethnicity, the sample was 71% Caucasian, 28% African American, and 1% Hispanic/Latino. The average age was 43 (SD= 11) and the average educational level was 13 years (SD= 2).

Participants were grouped according to performance on freestanding and embedded effort measures; group demographic characteristics and reasons for referral are displayed in Table 1. Previously identified cut scores on PVIs yielded two groups: Those who passed all (n= 65) or failed one (n= 30) of the effort measures (Pass), and those who failed two or more effort measures (Fail-2; n= 61). The PVI cut scores and failure rate by group are shown in Table 2. Participants who failed one effort measure were grouped with those who passed all effort measures to lower the false-positive rate and to facilitate comparison of the present results with those of Solomon and colleagues (2010).
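The grouping rule can be stated compactly: a case joins Fail-2 when it fails two or more PVIs, and otherwise joins Pass. A sketch assuming one pass/fail judgment per administered PVI (hypothetical data):

```python
def assign_validity_group(pvi_failures):
    """Assign a case to a validity group by number of failed PVIs.

    pvi_failures: list of booleans, one per administered effort measure
    (True = score beyond the published cut, i.e., a failure). Cases
    failing 0 or 1 measures form the Pass group; 2 or more form Fail-2,
    mirroring the grouping rule described above. Input format is assumed.
    """
    return "Fail-2" if sum(pvi_failures) >= 2 else "Pass"

print(assign_validity_group([False, True, False]))  # Pass (single failure)
print(assign_validity_group([True, True, False]))   # Fail-2
```

Placing single-failure cases in Pass is the conservative choice described in the text: it lowers the false-positive rate of the criterion grouping.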

Table 1.

Demographic characteristics and reasons for referral by group

Variable Pass (n = 95) Fail-2 (n = 61) p-value d 
Age (years; M [SD]) 41.1 (11.1) 46.8 (10.4) .002 0.53 
Education (years; M [SD]) 13.4 (2.3) 12.5 (2.1) .012 0.42 
% Women 56.8 52.5 .591  
% Caucasian 74.7 65.6 .218  
% Right-handed 91.6 88.5 .528  
Referral reason (frequency) 
 Motor vehicle accident 67 42   
 Fall 17 11   
 Struck in head   
 Psychiatric disability   
 Assault   
 Electrical injury   
 Toxic exposure —   

Notes: Pass = passed all or failed one of the performance validity measures; Fail-2 = failed two or more performance validity measures.

Table 2.

Performance validity indicator failure rate by group

Measure  Cut score  Pass (n= 95): Percentage / nadm  Fail-2 (n= 61): Percentage / nadm
RDS  <8 (Greiffenstein et al., 1994)  11 / 74  70 / 53
RFIT-II  <9 (Griffin, Glassmire, Henderson, & McCann, 1997)  — / 79  50 / 50
RMT-Words  <38 (Iverson & Franzen, 1998)  — / 93  67 / 60
RMT-Faces  <26 (Millis, 2002)  — / 94  27 / 60
Tap DH Raw  <29 women; <36 men (Arnold et al., 2005)  12 / 94  69 / 54
TMT-A  >62 (Iverson, Lange, Green, & Franzen, 2002)  — / 95  43 / 60
TMT-B  >199 (Iverson et al., 2002)  — / 95  46 / 59
WCST-FMS  >3 (Larrabee, 2003)  — / 80  16 / 43

Notes: Pass = passed all or failed one of the performance validity measures; Fail-2 = failed two or more performance validity measures; nadm = number of participants administered the measure; — = value not available; RDS = Reliable Digit Span; RFIT-II = Rey Fifteen Item Test-II; RMT = Recognition Memory Test; Tap DH Raw = Finger Tapping dominant hand raw score; TMT = Trail Making Test; WCST-FMS = Wisconsin Card Sorting Test failure to maintain set.

In the final study sample (N= 156), there was a subset of participants for whom PC item-level data were unavailable (n= 27). These participants did not differ from those with item-level data in gender, χ2(1, N= 156) = 0.81, p= .37, ethnicity, χ2(1, N= 156) = 2.25, p= .13, or handedness, χ2(1, N= 156) = 0.08, p= .77; age, t(154) = 0.20, p= .84, and education, t(154) = 0.60, p= .55, were not different. The proportion of participants who failed two or more effort measures was not different in the sample subset, χ2(1, N= 156) = 0.06, p= .81.

Measures

PC from the WAIS-III (Wechsler, 1997) was administered as part of a comprehensive neuropsychological evaluation. The raw score and the age-adjusted scaled score were computed according to the test manual. In addition, four PC effort indices identified in recent research (Solomon et al., 2010) were examined: Most Discrepant Index (items 8, 9, 10, 13, 14, and 21), Most Discrepant Index-10 (items 8, 9, 10, 13, 14, 15, 16, 19, 20, and 21), Rarely Missed Index (items 1–9), and Rarely Correct Index (items 10, 15, 17, 18, 20, 21, 22, 23, and 24).
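Each index is simply the number of items in its subset answered correctly. A sketch assuming item scores are stored as 0/1 values keyed by item number (the score profile below is hypothetical):

```python
# Item subsets defined by Solomon et al. (2010) for the WAIS-III PC subtest.
MOST_DISCREPANT = [8, 9, 10, 13, 14, 21]
MOST_DISCREPANT_10 = [8, 9, 10, 13, 14, 15, 16, 19, 20, 21]
RARELY_MISSED = list(range(1, 10))            # items 1-9
RARELY_CORRECT = [10, 15, 17, 18, 20, 21, 22, 23, 24]

def pc_index(item_scores, items):
    """Sum of 0/1 item scores over the index's item subset.

    item_scores: dict mapping PC item number -> 1 (correct) or 0
    (incorrect). The dict format is an assumption for illustration;
    item numbering follows the test manual.
    """
    return sum(item_scores.get(i, 0) for i in items)

# Hypothetical profile: items 1-14 correct, items 15 and above incorrect.
scores = {i: (1 if i <= 14 else 0) for i in range(1, 26)}
print(pc_index(scores, MOST_DISCREPANT))   # 5
print(pc_index(scores, RARELY_MISSED))     # 9
```

Note that the Rarely Correct Index counts correct responses to items noncredible participants rarely pass, so low scores, not high ones, are typical of credible-range difficulty.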

Procedure

All participants underwent neuropsychological evaluation as part of an independent neuropsychological examination conducted by one of the authors. The present study focused on the demographic information collected at the examination, the PVIs as indicated above, and the PC subtest of the WAIS-III. PC raw scores and item responses were available for a subset of the sample (n= 129). The item-level data were used to calculate the aforementioned PC effort indices according to Solomon and colleagues (2010).

Results

The groups did not differ in gender, ethnicity, or handedness (Table 1). Significant differences were found for age, t(154) = 3.21, p= .002, and education, t(154) = 2.53, p= .012. The Pass group was almost 6 years younger (M= 41.1, SD= 11.1) than the Fail-2 group (M= 46.8, SD= 10.4), and the Pass group completed almost 1 more year of education (M= 13.4, SD= 2.3) than the Fail-2 group (M= 12.5, SD= 2.1). Significant group differences were found on PC raw and scaled scores; groups also differed on all PC effort indices. Group comparisons on PC variables are shown in Table 3.
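The effect sizes reported in Tables 1 and 3 are reproducible from the group means and standard deviations using a pooled-SD formula for Cohen's d (an assumption; the article does not state which variant was used). For the age comparison:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with pooled standard deviation (assumed variant)."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return abs(m1 - m2) / math.sqrt(pooled_var)

# Age by group, from Table 1: Pass M = 41.1 (SD = 11.1, n = 95);
# Fail-2 M = 46.8 (SD = 10.4, n = 61).
print(round(cohens_d(41.1, 11.1, 95, 46.8, 10.4, 61), 2))  # 0.53
```

The result matches the d of 0.53 reported for age in Table 1.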

Table 3.

PC performance by group

Variable  Pass  Fail-2  t  p-value  d
N  95  61
PC scaled score  9.7 (3.3)  6.2 (2.7)  6.9  <.001  1.16
N^a  78  51
PC raw score  19.1 (4.1)  13.8 (4.8)  6.8  <.001  1.19
Most Discrepant Index^b  5.0 (1.3)  3.4 (1.7)  5.7  <.001  1.06
Most Discrepant Index-10^b  7.8 (2.3)  4.9 (2.7)  6.2  <.001  1.16
Rarely Correct Index^b  5.2 (2.6)  2.2 (2.4)  6.5  <.001  1.20
Rarely Missed Index^b  8.8 (0.6)  8.1 (1.5)  3.3  .002  0.61

Notes: PC scores by group reported as M (SD). PC = Picture Completion; Pass = passed all or failed one of the performance validity measures; Fail-2 = failed two or more performance validity measures.

^a PC item-level data available for sample subset (n= 129).

^b Index scores calculated as described in Solomon and colleagues (2010).

Additional analyses were conducted to evaluate the possible influence of the age and education differences on the outcome measures. Age was not associated with PC raw (r= −.102, p= .25) or scaled (r= −.018, p= .82) scores. Education was not associated with the PC raw score (r= .112, p= .21). Education showed a significant but small relationship with the PC scaled score (r= .182, p= .02); it accounted for little variance (3%). These correlations were also analyzed separately in the Pass and Fail-2 groups, and the results were similar except that the relationship between education and the PC scaled score was no longer significant. Analysis of covariance was conducted using group as the independent variable and PC raw and scaled scores as dependent variables with age and education entered as covariates. Age and education were nonsignificant covariates (p> .05), and the group differences were maintained on both PC raw score, F(1, 125) = 41.1, p< .001, and PC scaled score, F(1, 152) = 43.9, p< .001.

Receiver operating characteristic (ROC) analysis was used to examine the accuracy of PC variables in classifying participants in Pass and Fail-2 groups. The area under the curve (AUC) of the PC raw score was 0.81 (95% confidence interval: 0.74–0.88); the AUC of the PC scaled score was 0.79 (0.72–0.86); the AUC of the Most Discrepant Index was 0.78 (0.69–0.86); the AUC of the Most Discrepant Index-10 was 0.80 (0.72–0.88); the AUC of the Rarely Correct Index was 0.79 (0.72–0.87); and the AUC of the Rarely Missed Index was 0.65 (0.55–0.75). The areas under the ROC curves of the last four variables were compared with the AUC of the PC raw score with Stata 9 (StataCorp, 2005) using a Sidak adjustment for multiple comparisons. The AUC of the PC raw score was significantly greater than the AUC of the Rarely Missed Index, χ2(1, N= 129) = 16.6, p< .001. In contrast, no significant differences were observed in the AUC of the PC raw score, Most Discrepant Index, Most Discrepant Index-10, and Rarely Correct Index (p= .57 to p= .96). Based on the ROC analyses, five of the six PC variables showed adequate differentiation of the Pass and Fail-2 groups. The use of the Rarely Missed Index was not supported due to poor discrimination. The obtained values for sensitivity, specificity, and likelihood ratio using a range of cut scores to classify the Pass and Fail-2 groups are shown in Table 4.
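The classification statistics in Table 4 follow directly from the group score distributions, with the positive likelihood ratio defined as sensitivity/(1 − specificity). A sketch with hypothetical scores, plus a check of the +LR arithmetic against one tabled entry:

```python
def classification_stats(fail_scores, pass_scores, cut):
    """Sensitivity, specificity, and +LR for a 'score <= cut' rule.

    fail_scores: scores of the criterion-positive (Fail-2) group;
    pass_scores: scores of the criterion-negative (Pass) group.
    The data passed in here are hypothetical, not the study sample.
    """
    sens = sum(s <= cut for s in fail_scores) / len(fail_scores)
    spec = sum(s > cut for s in pass_scores) / len(pass_scores)
    plr = sens / (1 - spec) if spec < 1 else float("inf")
    return sens, spec, plr

# +LR formula applied to the Table 4 entry for raw score <= 12
# (sensitivity .412, specificity .910):
print(round(0.412 / (1 - 0.910), 2))  # 4.58
```

The computed value of 4.58 matches the tabled 4.59 to within rounding of the published sensitivity and specificity.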

Table 4.

Sensitivity, specificity, and positive likelihood ratios for PC cut scores

Cut score Sensitivity Specificity +LR 
Raw score 
 ≤8 0.137 1.000  
 ≤10 0.255 0.936 3.98 
 ≤12 0.412 0.910 4.59 
 ≤13 0.510 0.897 4.97 
 ≤14 0.549 0.833 3.29 
Scaled score 
 ≤3 0.115 1.000  
 ≤4 0.295 0.947 5.61 
 ≤5 0.443 0.842 2.80 
Most Discrepant Index 
 ≤1 0.137 0.974 5.35 
 ≤2 0.294 0.923 3.82 
 ≤3 0.510 0.859 3.61 
Most Discrepant Index-10 
 ≤2 0.196 0.962 5.10 
 ≤3 0.373 0.910 4.15 
 ≤4 0.451 0.885 3.91 
 ≤5 0.549 0.821 3.06 
Rarely Correct Index 
 <1 0.392 0.910 4.37 
 <2 0.510 0.859 3.61 
Rarely Missed Index 
 ≤7 0.196 0.974 7.65 
 ≤8 0.412 0.872 3.21 

Notes: Measures of classification accuracy provided for PC raw and scaled scores and effort indices proposed by Solomon and colleagues (2010). PC = Picture Completion; +LR = positive likelihood ratio.

Discussion

This study cross-validated the findings of Solomon and colleagues (2010) regarding the use of the WAIS-III PC subtest as an embedded PVI. The current sample was grouped according to performance on freestanding and embedded effort measures. Consistent with Solomon and colleagues (2010), individuals who performed poorly on two or more measures of effort also performed poorly on the PC subtest. In the current sample, the Pass group outperformed the Fail-2 group on PC raw and scaled scores and on PC effort indices (Solomon et al., 2010). In general, the findings reported by Solomon and colleagues were maintained on cross-validation, suggesting that PC is useful as an adjunctive measure of effort. In particular, of the six variables considered, five showed adequate specificity, with sensitivity comparable to that of other embedded effort measures.

In contrast to Solomon and colleagues' (2010) finding that the Most Discrepant Index was the most diagnostically efficient, the present results showed cutoffs based on the PC raw score to have the greatest sensitivity to suboptimal effort while maintaining adequate specificity (i.e., >90%). Using the raw score as an indicator of effort may have greater clinical utility than employing an index based on a subset of items, in that interpretation of performance validity can occur in real time during test administration. Embedded measures that require additional calculation increase scoring time, which may deter some clinicians from using them.

The finding of differences in age and education between Pass and Fail-2 groups raises a larger question with regard to the use of embedded PVIs in cases in which educational attainment is limited. In the present sample, the observed demographic differences were interpreted as not having a clinically significant impact: Although statistically significant, there was only a 1-year difference in education and a 6-year difference in age. Furthermore, analyses with age and education entered as covariates did not find them to be significant contributors to performance on PC. These data suggest that PC may be useful in cases involving low education.

A limitation of the present study is that the majority of participants were referred for evaluation following reported mild traumatic brain injury in the context of personal injury claims or litigation. These findings might not generalize to settings or clinical populations outside of the forensic arena. In particular, the current study excluded older adults, individuals with a history of special education, and individuals for whom English was a second language. Further research investigating the utility of PC subtest performance in a broader array of clinical cases, or paired with particular patterns of performance on one or more separate symptom validity tests, would be helpful. In addition, with the release and growing use of the next edition of the WAIS, replication using the new stimuli is recommended.

Conflict of Interest

None declared.

References

Arnold, G., Boone, K. B., Lu, P., Dean, A., Wen, J., Nitch, S., et al. (2005). Sensitivity and specificity of Finger Tapping Test scores for the detection of suspect effort. Journal of Clinical and Experimental Neuropsychology, 19, 105–120.

Axelrod, B. N., Fichtenberg, N. L., Millis, S. R., & Wertheimer, J. C. (2006). Detecting incomplete effort with digit span from the Wechsler Adult Intelligence Scale-Third Edition. The Clinical Neuropsychologist, 20, 513–523.

Babikian, T., Boone, K., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various digit span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20, 145–159.

Boone, K. B. (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press.

Boone, K. B. (2009). The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examinations. The Clinical Neuropsychologist, 23, 729–741.

Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity (NAN Policy & Planning Committee). Archives of Clinical Neuropsychology, 20, 419–426.

Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141–154.

Etherton, J. L., Bianchini, K. J., Greve, K. W., & Heinly, M. T. (2005). Sensitivity and specificity of reliable digit span in malingered pain-related disability. Assessment, 12, 130–136.

Etherton, J. L., Bianchini, K. J., Heinly, M. T., & Greve, K. W. (2006). Pain, malingering, and performance on the WAIS-III Processing Speed Index. Journal of Clinical and Experimental Neuropsychology, 28, 1218–1237.

Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218–224.

Griffin, G. A. E., Glassmire, D. A., Henderson, E. A., & McCann, C. (1997). Rey II: Redesigning the Rey screening test of malingering. Journal of Clinical Psychology, 53, 757–766.

Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.

Iverson, G. L., & Franzen, M. D. (1996). Using multiple objective memory procedures to detect simulated malingering. Journal of Clinical and Experimental Neuropsychology, 18, 38–51.

Iverson, G. L., & Franzen, M. D. (1998). Detecting malingered memory deficits with the Recognition Memory Test. Brain Injury, 12, 275–282.

Iverson, G. L., Lange, R. T., Green, P., & Franzen, M. (2002). Detecting exaggeration and malingering with the Trail Making Test. The Clinical Neuropsychologist, 16, 398–406.

Iverson, G. L., & Tulsky, D. S. (2003). Detecting malingering on the WAIS-III: Unusual Digit Span performance patterns in the normal population and in clinical groups. Archives of Clinical Neuropsychology, 18, 1–9.

Kim, N., Boone, K. B., Victor, T., Lu, P., Keatinge, C., & Mitchell, C. (2010). Sensitivity and specificity of a digit symbol recognition trial in the identification of response bias. Archives of Clinical Neuropsychology, 25, 420–428.

Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.

Larrabee, G. J. (2007). Assessment of malingered neuropsychological deficits. New York: Oxford University Press.

Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.

Millis, S. R. (2002). Warrington's Recognition Memory Test in the detection of response bias. Journal of Forensic Neuropsychology, 2, 147–166.

Millis, S. R., Ross, S. R., & Ricker, J. H. (1998). Detection of incomplete effort on the Wechsler Adult Intelligence Scale-Revised: A cross-validation. Journal of Clinical and Experimental Neuropsychology, 20, 167–173.

Mittenberg, W., Theroux, S., Aguila-Puentes, G., Bianchini, K., Greve, K., & Rayls, K. (2001). Identification of malingered head injury on the Wechsler Adult Intelligence Scale-3rd Edition. The Clinical Neuropsychologist, 15, 440–445.

Mittenberg, W., Theroux-Fichera, S., Zielinski, R. E., & Heilbronner, R. Z. (1995). Identification of malingered head injury on the Wechsler Adult Intelligence Scale-Revised. Professional Psychology: Research and Practice, 26, 491–498.

Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20, 33–65.

Schwarz, L. R., Gfeller, J. D., & Oliveri, M. V. (2006). Detecting feigned impairment with the digit span and vocabulary subtests of the Wechsler Adult Intelligence Scale-Third Edition. The Clinical Neuropsychologist, 20, 741–753.

Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223.

Solomon, R. E., Boone, K. B., Miora, D., Skidmore, S., Cottingham, M., Victor, T., et al. (2010). Use of the WAIS-III Picture Completion subtest as an embedded measure of response bias. The Clinical Neuropsychologist, 24, 1243–1256.

StataCorp. (2005). Stata Statistical Software: Release 9. College Station, TX: Author.

Sweet, J. J., King, J. H., Malina, A. C., Bergman, M. A., & Simmons, A. (2002). Documenting the prominence of forensic neuropsychology at national meetings and in relevant professional journals from 1990 to 2000. The Clinical Neuropsychologist, 16, 481–494.

Trueblood, W., & Schmidt, M. (1993). Malingering and other validity considerations in the neuropsychological evaluation of mild head injury. Journal of Clinical and Experimental Neuropsychology, 15, 578–590.

Wechsler, D. (1981). WAIS-R manual. San Antonio, TX: The Psychological Corporation.

Wechsler, D. (1997). Wechsler Adult Intelligence Scale-Third Edition: Administration and scoring manual. San Antonio, TX: The Psychological Corporation.

Whitney, K. A., Davis, J. J., Shepard, P. H., Bertram, D. M., & Adams, K. M. (2009). Digit Span age scaled score in middle-aged military veterans: Is it more closely associated with TOMM failure than Reliable Digit Span? Archives of Clinical Neuropsychology, 24, 263–272.