Objective

The current study addressed two aims: (i) determine how Word Memory Test (WMT) performance relates to test performance across numerous cognitive domains and (ii) evaluate how current psychiatric disorders or mild traumatic brain injury (mTBI) history affects performance on the WMT after excluding participants with poor symptom validity.

Method

Participants were 235 Iraq- and Afghanistan-era veterans (mean age = 35.5 years) who completed a comprehensive neuropsychological battery. Participants were divided into two groups based on WMT performance (Pass, n = 193; Fail, n = 42). Tests were grouped into cognitive domains, and an average z-score was calculated for each domain.

Results

Significant differences were found between those who passed and those who failed the WMT on the memory, attention, executive function, and motor output domain z-scores. WMT failure was associated with a larger performance decrement in the memory domain than the sensation or visuospatial-construction domains. Participants with a current psychiatric diagnosis or mTBI history were significantly more likely to fail the WMT, even after removing participants with poor symptom validity.

Conclusions

Results suggest that the WMT is most appropriate for assessing validity in the domains of attention, executive function, motor output and memory, with little relationship to performance in domains of sensation or visuospatial-construction. Comprehensive cognitive batteries would benefit from inclusion of additional performance validity tests in these domains. Additionally, symptom validity did not explain higher rates of WMT failure in individuals with a current psychiatric diagnosis or mTBI history. Further research is needed to better understand how these conditions may affect WMT performance.

Introduction

Performance validity tests (PVTs) are often used as part of cognitive evaluations to provide evidence for or against the validity of the data. Use of PVTs is recognized as a recommended, routine, and medically necessary practice in neuropsychological assessment (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Current recommendations for thorough assessment of performance validity include the need to assess across multiple domains of cognition, as respondents may show a tendency to invalidate tests of only a particular domain (Boone, 2009). Green's Word Memory Test (WMT; Green, 2005) is a widely used measure of performance validity that uses a forced-choice recognition format and is presented as a test of memory. The WMT has consistently been shown to identify non-credible performance on memory tasks (Green, 2005; Green, Montijo, & Brockhaus, 2011) and has been shown to be related to other PVTs, such as the Test of Memory Malingering (TOMM; Bauer, O'Bryant, Lynch, McCaffrey, & Fisher, 2007; Greiffenstein, Greve, Bianchini, & Baker, 2008; Heyanka et al., 2015; Sollman & Berry, 2011).

Other studies have examined the performance of the WMT as a PVT outside of the domain of memory, most commonly in the context of attention-deficit/hyperactivity disorder (ADHD) evaluations. In patients undergoing ADHD evaluations, Sullivan et al. (2007) demonstrated a positive relationship between WMT scores and Index Scores on the Wechsler Adult Intelligence Scale-III (WAIS-III). Suhr, Hammers, Dobbins-Buckland, Zimak, and Hughes (2008) found that those who failed the WMT performed worse on tests of memory and executive function, but not processing speed, when compared to those who passed the WMT. Similarly, Suhr, Sullivan, and Rodriguez (2011) demonstrated that those who failed the WMT performed worse on many outcome variables of the Conners' Continuous Performance Test-II (CPT-II) than those who passed the WMT.

Outside of the context of ADHD evaluations, Lange, Pancholi, Bhagwat, Anderson-Barnes, and French (2012) examined clinical data from military personnel evaluated an average of 4 months post mild traumatic brain injury (mTBI), reporting that participants with mTBI history who failed the WMT and at least one additional embedded PVT performed worse across several domains of cognitive functioning than those who passed. Effect sizes were largest for tests of attention as well as learning and memory, with smaller effect sizes and non-significant results on tests of working memory, processing speed, visuospatial function, and executive function. Using similar data, Lange and colleagues (2013) demonstrated poorer performance on several CPT-II variables in patients with mTBI history who failed the WMT compared to patients with mild or severe TBI history who passed the WMT.

Overall, available evidence suggests that outside of the domain of memory, individuals who fail the WMT may also perform worse on tests of attention; however, evidence is sparse or inconsistent for other domains of cognitive functioning. Two studies reported a relationship between WMT validity subtests and overall test battery mean scores, demonstrating lower scores in participants failing the WMT and a significant correlation between WMT performance and the overall test battery mean (Green, Rohling, Lees-Haley, & Allen, 2001; Rohling & Demakis, 2010). However, considering the results of previous studies, it is not possible to rule out that these findings are the result of large differences in performance on tests of memory and attention, rather than poor performance across all testing. Thus, the first aim of this study was to determine the ability of the WMT to identify non-credible responding outside of the domain of memory.

Several studies of Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn (OEF/OIF/OND) veterans have reported increased rates of PVT failure by individuals with post-traumatic stress disorder (PTSD) or TBI history (Armistead-Jehle, 2010; Axelrod & Schutte, 2010; Clark, Amick, Fortier, Milberg, & McGlinchey, 2014; Lange et al., 2012; Nelson et al., 2010; Russo, 2012; Sherer et al., 2015; Whitney, Shepard, Williams, Davis, & Adams, 2009; Wisdom et al., 2014; Young, Sawyer, Roper, & Baughman, 2012) with failure rates varying from 9% to 58%. A common explanation suggested for elevated rates of failure in clinical groups is that individuals failing PVTs exaggerate self-reported symptoms, increasing the likelihood that these individuals will be diagnosed inaccurately with a psychiatric disorder or TBI. Many studies that reported higher rates of PVT failure associated with PTSD and TBI in the OEF/OIF/OND cohort included data gathered in a clinical or forensic context, and few examined the validity of symptom report independent from performance validity. Larrabee (2012) proposed a distinction between performance validity and symptom validity. PVTs are presumed to evaluate the validity of cognitive performance data, whereas symptom validity tests (SVTs) are presumed to evaluate the validity of symptom presentation data. Several studies have assessed this dichotomy, suggesting that, although the constructs may be distinct, a positive relationship exists such that poorer performance validity is associated with poorer symptom validity in some cases (Egeland, Andersson, Sundseth, & Schanke, 2015; Van Dyke, Millis, Axelrod, & Hanks, 2013; Zakzanis, Gammada, & Jeffay, 2012). These results indicate a need for investigations that address symptom validity concerns when examining differences in diagnostic rates between individuals passing and failing PVTs. The second aim of this study was to evaluate how current psychiatric disorders or mTBI history affects performance on the WMT after excluding participants with poor symptom validity performance.

Based on the study aims, it was hypothesized that: (i) an interaction would be observed between WMT failure and cognitive domain such that participants failing the WMT would perform worse than those passing the WMT, and the difference would be greater in the domain of memory than other domains, (ii) individuals with a current psychiatric diagnosis would be more likely to fail the WMT than those without a current diagnosis, and (iii) individuals with a history of mTBI would be more likely to fail the WMT than those without mTBI history.

Materials and Methods

These data were drawn from prospective studies reviewed and approved by the affiliated Institutional Review Board. The welfare and privacy of human participants were protected and maintained. Voluntary verbal and written informed consent was obtained prior to initiation of any study activities.

Participants

Participants included in the current analysis were part of the VA Mid-Atlantic Mental Illness Research, Education, and Clinical Center (MA-MIRECC) Post-Deployment Mental Health (PDMH) Repository, a repository created to study post-deployment mental health with the only inclusion criterion being that individuals have served in the U.S. Armed Forces post-September 11, 2001. The MA-MIRECC PDMH Repository includes data from the Structured Clinical Interview for DSM-IV Diagnosis (SCID; First, Spitzer, Gibbon, & Williams, 1996), a clinician-administered TBI interview, and over 20 self-report inventories. Following completion of these measures, participants were invited to complete a comprehensive neurocognitive battery on a separate day. Exclusion criteria for completion of the neurocognitive battery included combat exposure prior to 1985, neuropsychological evaluation during the prior 6 months, psychosis at the time of testing, presence of current substance abuse or dependence per Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) criteria, history of pre-deployment non-military related PTSD (e.g., childhood sexual trauma), and history of moderate to severe TBI prior to or since deployment. Tests were administered in a fixed order and in a standardized manner in accordance with respective test manuals. Participants were reimbursed for their time and travel.

Data for the current analyses were gathered at a single site. For the current analysis, participants who did not deploy to a combat zone, reported a TBI of severity greater than mild (based on interview or self-report measure), were missing WMT data, or requested a report summarizing their testing results be added to their medical record for clinical use were also excluded. The remaining participants completed the protocol in a purely research context without the possibility of data being used for clinical or forensic purposes and were directly informed of this situation during the consent process. The final sample (n = 235) had no immediate or apparent incentives to perform poorly and had no obvious biases in the form of potential secondary gain (i.e., could not meet Slick, Sherman, and Iverson, 1999, criteria for malingered neurocognitive disorder). Participants scoring 82.5% or less on the Immediate Recall (IR), Delayed Recall (DR), or Consistency (CNS) indices of the WMT were assigned to the WMT-Fail (WMT-F) group (n = 42, 17.9%). All other participants were assigned to the WMT Pass (WMT-P) group (n = 193, 82.1%). For analyses evaluating hypothesis 2, participants were required to have completed the Personality Assessment Inventory (PAI; Morey, 1991) and the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001) with scores below cutoffs on scales that might indicate invalid symptom report, defined as PAI Negative Impression Management < 93, Positive Impression Management < 69, Infrequency < 71, and Inconsistency < 71 (Morey, 1991), and M-FAST total < 6 (Miller, 2001). This resulted in n = 178 participants (WMT-F: n = 26, 14.6%; WMT-P: n = 152, 85.4%) who performed adequately on SVTs. Demographic and descriptive data are presented in Table 1. Overall, the sample was mostly men, well educated, and of average intelligence as measured by the Wechsler Test of Adult Reading (WTAR; Psychological Corporation, 2001).
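The group-assignment rules described above can be summarized in a short sketch. The Python fragment below is illustrative only; the column names (e.g., wmt_ir, pai_nim) are hypothetical placeholders for the repository variables, not the study's actual scoring code.

```python
import pandas as pd

def assign_validity_groups(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative sketch of the grouping rules described above.

    Assumes hypothetical columns holding WMT percent-correct scores
    (wmt_ir, wmt_dr, wmt_cns) and symptom validity scores (pai_nim,
    pai_pim, pai_inf, pai_icn, mfast_total).
    """
    # WMT-Fail if any validity index (IR, DR, CNS) is at or below 82.5%.
    fail = (df[["wmt_ir", "wmt_dr", "wmt_cns"]] <= 82.5).any(axis=1)

    # Adequate symptom validity: all PAI validity scales and the M-FAST
    # total fall below the cutoffs cited in the text.
    adequate_svt = (
        (df["pai_nim"] < 93)
        & (df["pai_pim"] < 69)
        & (df["pai_inf"] < 71)
        & (df["pai_icn"] < 71)
        & (df["mfast_total"] < 6)
    )
    return df.assign(
        wmt_group=fail.map({True: "Fail", False: "Pass"}),
        adequate_svt=adequate_svt,
    )
```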

Table 1.

Descriptive characteristics of the sample

Variable  Total sample^a  WMT Pass^b  WMT-Fail^c  F/χ2  η2/φ
Age^d  35.3 (9.7)  35.6 (9.4)  33.8 (10.9)  1.20  .01
Education (years)^d  13.6 (1.9)  13.7 (2.0)  12.9 (1.5)  6.40*  .03
Deployments^d  1.6 (1.0)  1.7 (1.0)  1.5 (0.7)  1.55  .01
WTAR (StS)^d,e  102.4 (12.3)  103.2 (12.3)  98.5 (11.4)  5.20*  .02
Number of lifetime mTBI^d  1.0 (1.2)  1.0 (1.3)  1.2 (1.0)  1.05  .00
Number of deployment mTBI^d  0.5 (0.8)  0.5 (0.8)  0.8 (0.7)  7.37**  .03
Sex (% women)  11.9  14.5  0.0  6.92**  .17
Ethnicity (% minority)  28.5  25.9  40.5  3.59  .12
Lifetime mTBI (% positive)  57.9  53.9  76.2  7.04**  .17
Deployment mTBI (% positive)  38.3  33.2  61.9  12.06**  .23
Current PTSD (% positive)  38.3  33.2  61.9  12.06**  .23
Current MDD (% positive)  20.0  14.5  45.2  20.36**  .29
Any current Dx (% positive)  51.1  46.6  71.4  8.49*  .19

Notes: WMT = Word Memory Test; WTAR = Wechsler Test of Adult Reading; StS = age-corrected standard score; mTBI = mild traumatic brain injury; PTSD = post-traumatic stress disorder; MDD = major depressive disorder; Dx = DSM-IV diagnosis. η2 = partial eta squared. φ = phi. *p < .05. **p < .01.

^a n = 235.
^b n = 193.
^c n = 42.
^d Presented as M (SD).
^e n = 234.

Measures

The SCID is a structured clinical interview for the evaluation of psychiatric diagnoses. Outcome variables include the presence or absence of current and lifetime psychiatric diagnoses according to DSM-IV criteria. For the current study, an outcome variable was created indicating the presence or absence of any current psychiatric diagnosis. All participants completed a self-report questionnaire, and a subset (n = 168, 71.5%) also completed a semi-structured interview to evaluate lifetime TBI history of any severity based on the American Congress of Rehabilitation Medicine criteria (Menon, Schwab, Wright, & Maas, 2010). Data from the interview were used when available; otherwise, data from the self-report measure were used. Outcome variables included presence or absence of mTBI during deployment and across the lifetime.

The Green WMT

The WMT (Green, 2005) is a stand-alone PVT using a forced choice format with a verbal memory presentation. Outcome measures included the three validity subtests: IR, DR, and CNS. The Paired Associates, Free Recall, and Long Delayed Free Recall subtests were not administered as part of this standardized battery; thus the genuine memory impairment profile (GMIP, or possible dementia profile; Green, Flaro, & Courtney, 2009) could not be utilized in group assignment.

Additional test selection

The aim of test selection from the available standardized battery was to establish a group of outcome variables that allowed the evaluation of as many cognitive domains as possible. The potential outcome variables were sorted into cognitive domains based on descriptions in Lezak, Howieson, Bigler, and Tranel (2012): sensation, visuospatial-construction, motor output, attention, executive function, and learning and memory. Although many of the tests include verbal components, none specifically measured language (i.e., aphasic processes) as identified by Lezak and colleagues (2012); therefore, the language domain was not sampled. The complete list of outcome variables is presented in Table 2. Raw scores were used in analyses, with means and standard deviations presented in Table 2. Some measures were not completed by all participants, and available sample sizes are noted where applicable.

Table 2.

Test scores and group differences by domain and sub-domain

Domain Measure WMT Pass (n = 193) WMT-Fail (n = 42) η2
Sensation UPSIT 34.1 (4.2)^a 32.9 (4.0) .01
Motor output FTT 49.8 (7.8)^b 48.0 (10.5) .01
GP 72.6 (13.9)^c 86.1 (24.5) .10*
Visuospatial-construction WAIS-III Block 42.6 (12.2) 39.1 (10.6)^d .01
 RCFT Copy 33.6 (2.8) 33.1 (3.0) .01
Attention CPT-II OM 2.1 (3.5) 10.5 (16.7) .15*
CPT-II HRT 375.2 (61.6) 417.4 (135.0) .05*
CPT-II HRT-SE 5.2 (2.0) 8.3 (6.1) .13*
TMT Part A 29.0 (9.7)^c 36.5 (14.7) .07*
TMT Part B 62.5 (24.7)^c 84.4 (44.0) .07*
TMT Ratio 2.3 (0.8)^c 2.4 (0.8) .01
Stroop Word 96.9 (17.3)^a 84.6 (20.1) .07*
Stroop Color 71.8 (11.8)^a 66.0 (13.8) .03*
WAIS-III DSC 71.5 (14.9) 62.8 (13.8) .04*
WAIS-III SS 33.9 (7.9) 30.4 (8.1) .03
WAIS-III LNS 11.2 (2.8) 9.7 (2.5) .03*
PASAT 73.8 (22.8)^e 63.7 (21.7) .02*
Executive WAIS-III SIM 24.5 (5.6) 21.0 (3.7) .06*
Stroop C-W 43.1 (9.4)^a 36.9 (10.5) .07*
Stroop Ratio 0.6 (0.1)^a 0.6 (0.1) .03*
CPT-II COM 12.7 (7.1) 17.2 (7.5) .05*
COWAT 41.2 (10.5) 34.3 (9.2) .06*
WCST Correct 49.0 (8.6)^f 45.0 (8.8)^d .03*
WCST Categories 3.6 (1.4)^f 2.9 (1.2)^d .04*
WCST PE 7.5 (4.8)^f 10.2 (6.5)^d .04*
WCST FMS 0.4 (0.6)^f 0.5 (0.7)^d .01
Memory CVLT-II 1-5 Total 56.2 (8.4)^c 46.9 (10.0) .15*
CVLT-II 1-5 Slope 1.6 (0.6)^c 1.4 (0.6) .03*
CVLT-II LDFR 12.7 (2.7)^c 9.1 (3.8) .19*
CVLT-II d′ 3.4 (0.6)^c 2.7 (0.8) .16*
BVMT-R Total 26.4 (6.0)^c 22.7 (5.8) .06*
BVMT-R % Retained 96.5 (9.7)^a 85.9 (22.1) .09*
BVMT-R DR 10.3 (1.9)^a 8.4 (2.8) .12*
BVMT-R Recognition 5.8 (0.5)^c 5.3 (1.2) .07*
RCFT IR 21.5 (6.1) 18.3 (6.0) .04*
RCFT DR 21.5 (6.3) 18.3 (5.8) .04*
RCFT Recognition 20.9 (2.1)^a 20.2 (2.0) .02

Notes: Data are presented as mean (standard deviation). WMT = Word Memory Test; UPSIT = University of Pennsylvania Smell Identification Test; FTT = Finger Tapping Test; GP = dominant hand Grooved Pegboard; CPT-II = Conners' Continuous Performance Test-II; OM = Omissions; HRT = Hit Reaction Time; SE = Standard Error; TMT = Trail Making Test; WAIS-III = Wechsler Adult Intelligence Scale-III; DSC = Digit-Symbol Coding; SS = Symbol Search; LNS = Letter Number Sequencing; Block = Block Design; PASAT = Paced Auditory Serial Addition Test; SIM = Similarities; C-W = Color-Word; COM = Commissions; COWAT = Controlled Oral Word Association Test (CFL); WCST = Wisconsin Card Sorting Test-64; PE = Perseverative Errors; FMS = Failure to Maintain Set; CVLT-II = California Verbal Learning Test-II; LDFR = Long Delay Free Recall; d′ = Total Recognition Discriminability; BVMT-R = Brief Visuospatial Memory Test-Revised; RCFT = Rey Complex Figure Test; IR = Immediate Recall; DR = Delayed Recall. η2 = partial eta squared. *p < .05, corrected.

^a n = 191.
^b n = 185.
^c n = 192.
^d n = 41.
^e n = 186.
^f n = 189.

Test descriptions

The University of Pennsylvania Smell Identification Test (UPSIT; Doty, 1995) measures olfactory abilities. The total score was used as the outcome variable, with higher scores reflecting better performance. The Finger Tapping Test (FTT) is a measure of speeded motor output (Reitan & Wolfson, 1985). Dominant hand performance (Tap) was used as the outcome variable, with higher scores representing better performance. The Grooved Pegboard test (GP; Reitan & Davison, 1974) measures fine motor dexterity. Dominant hand performance in seconds was used as the outcome variable, with higher scores representing poorer performance.

The Conners' Continuous Performance Test-II Version 5 (CPT-II; Conners & Staff, 2004) is a computerized test evaluating sustained attention, reaction time, and impulsivity. Outcome variables included raw scores for Omissions (OM, lack of response to targets) as a measure of inattention, Commissions (COM, responses to non-targets) as a measure of impulsivity, Hit Reaction Time (HRT) as a measure of processing speed, and Hit Reaction Time Standard Error (HRT-SE) as a measure of the stability of the HRT over time. Higher scores reflect poorer performance. The Trail Making Test (TMT) consists of two parts: Part A evaluates motor processing speed and Part B evaluates set shifting or divided attention. Higher scores reflect worse performance (Reitan & Wolfson, 1985). The ratio of Part B to Part A raw scores in seconds (TMT Ratio) was used as a measure of divided attention controlling for processing speed (Oosterman et al., 2010). Higher scores reflect greater discrepancy between the two subtests and poorer performance on Part B when accounting for processing speed.

The Stroop Color and Word Test, Adult Version (Golden & Freshwater, 2002) consists of three parts: the Word Reading (Word) and Color Naming (Color) trials evaluate verbal processing speed, and the Color/Word Inhibition trial (C-W) evaluates inhibitory abilities and resistance to interference. The ratio of Color/Word Inhibition to Color Naming was also used as an outcome variable, providing a measure of inhibition controlling for variance due to processing speed (Stroop Ratio; Lansbergen, Kenemans, & van Engeland, 2007). Higher scores on Word, Color, and C-W reflect better performance. Higher scores on Stroop Ratio reflect less interference and better performance. The WAIS-III (Wechsler, 1997) is a battery for the assessment of intellectual ability. The WAIS-III was the most current version of the WAIS available when data acquisition began in 2005. Five subtests were used as outcome variables: Digit-Symbol Coding (DSC) and Symbol Search (SS) are timed tests of processing speed, visual scanning, and motor control; Letter Number Sequencing (LNS) is a verbal measure of complex span (working memory); Block Design is a measure of visuospatial organization; and Similarities (SIM) is a verbal test of abstract reasoning. Higher scores on WAIS-III subtests reflect better performance. The Paced Auditory Serial Addition Test (PASAT; Gronwall, 1977) is a measure of auditory processing speed and complex tracking (working memory). A total score was calculated by summing total correct across the first two trials and used as the outcome variable. The first two trials were used as, per standardized instructions, the test is discontinued if these trials are not tolerated (Strauss, Sherman, & Spreen, 2006); thus, a portion of the sample was missing data for the final two trials. Higher scores reflect better performance.
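As an illustration of the derived scores just described (TMT Ratio, Stroop Ratio, and the PASAT total), a minimal sketch follows; the column names are hypothetical placeholders and the fragment is not the study's scoring code.

```python
import pandas as pd

def add_derived_scores(raw: pd.DataFrame) -> pd.DataFrame:
    """Compute the ratio and summed scores described in the text.

    Hypothetical columns: tmt_a_sec and tmt_b_sec (completion times in
    seconds), stroop_color and stroop_color_word (items completed), and
    pasat_trial1 and pasat_trial2 (correct responses).
    """
    return raw.assign(
        tmt_ratio=raw["tmt_b_sec"] / raw["tmt_a_sec"],                 # Part B / Part A
        stroop_ratio=raw["stroop_color_word"] / raw["stroop_color"],   # C-W / Color
        pasat_total=raw["pasat_trial1"] + raw["pasat_trial2"],         # first two trials
    )
```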

The California Verbal Learning Test-II (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) is a test of verbal learning, recall, and recognition. Selected outcome variables included Trials 1–5 Correct (1–5) and Learning Slope Trials 1–5 (Slope 1–5) as measures of verbal learning, Long Delay Free Recall (LDFR) as a measure of verbal recall, and Total Recognition Discriminability (d′) as a measure of recognition. The Brief Visuospatial Memory Test-Revised (BVMT-R; Benedict, 1997) is a test of visual learning, recall, and recognition. Selected outcome variables included Total Recall (Total) as a measure of visual learning, DR as a measure of visual recall, Percent Retained as a measure of recall controlling for learning, and Recognition Discrimination Index (Rec) as a measure of recognition. The Rey Complex Figure Test (RCFT; Meyers & Meyers, 1995) is a test of visual-spatial construction ability, visual memory, and planning. The Copy trial was selected as a measure of visuospatial organization, the IR score was selected as a measure of learning, the DR score was selected as a measure of visual recall, and the Recognition Total Correct score (Rec) was used as a measure of visual recognition. Higher scores on all learning and memory variables reflect better performance.

The Controlled Oral Word Association Test (COWAT; Ruff, Light, Parker, & Levin, 1996) is a measure of phonemic verbal fluency. The total of three trials using the letters C, F, and L was used, with higher scores reflecting better performance. The Wisconsin Card Sorting Test-64 (WCST-64; Heaton, 1981) is a measure of abstract reasoning, concept formation, and set shifting. This study used a computerized short form with 64 card presentations (Greve, 2001). Total Correct (Correct) and Categories Complete (Categories) were selected as measures of concept formation, with higher scores indicating better performance. Perseverative Errors (PE) was selected as a measure of perseveration, and Failure to Maintain Set (FMS) was selected as a measure of persistence. Higher scores on these measures reflect poorer performance.

Analysis

Statistical analyses were conducted using SPSS 21. Unless otherwise stated, all analyses used raw scores due to the lack of a consistent normative database across tests. By using raw scores and covarying for relevant demographic variables, it was possible to adjust analyses in a consistent manner for the relationships present in the data. Between-group differences in continuous demographic and diagnostic variables were examined using one-way ANOVAs. Differences in categorical demographic and diagnostic variables were evaluated using χ2 analyses. Between-group differences in cognitive outcome variables were evaluated using univariate ANCOVAs including age and years of education as covariates. Associations between continuous variables were evaluated using bivariate correlations.
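A minimal sketch of one such covariate-adjusted group comparison is shown below, using Python's statsmodels formula interface rather than SPSS; the data frame and column names are hypothetical stand-ins for the repository variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def wmt_group_ancova(data: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Compare WMT Pass/Fail groups on one raw-score outcome while
    adjusting for age and years of education (Type II sums of squares)."""
    model = smf.ols(f"{outcome} ~ C(wmt_group) + age + education_years",
                    data=data).fit()
    return sm.stats.anova_lm(model, typ=2)

# Example call (assuming `scores` holds the test data and hypothetical columns):
# print(wmt_group_ancova(scores, "cvlt_trials_1_5_total"))
```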

Univariate ANCOVAs including age and years of education as covariates were conducted to evaluate differences in cognitive performance between individuals passing and failing the WMT on each cognitive outcome variable. False discovery rate correction (step-up method; Benjamini & Hochberg, 1995) was applied across these analyses (35 tests) to control the false discovery rate at 0.05. Next, performance on each test was converted to a z-score using the sample mean and standard deviation of that test across all participants. Z-scores were then averaged within each cognitive domain (sensation, motor, visuospatial-construction, memory, attention, and executive function) as indicated in Table 2 (Domain Scores). Univariate ANCOVAs including age and years of education as covariates were then conducted to determine between-group differences in Domain Scores.
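The domain-composite and multiple-comparison steps can be sketched as follows. The Benjamini-Hochberg call uses statsmodels; the domain-to-test mapping and column names are hypothetical.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

def domain_composites(raw: pd.DataFrame, domains: dict) -> pd.DataFrame:
    """z-score each test against the full-sample mean and SD, then average
    the z-scores within each domain (tests a participant did not complete
    are simply skipped by pandas' NaN-aware mean)."""
    out = {}
    for name, cols in domains.items():
        z = (raw[cols] - raw[cols].mean()) / raw[cols].std(ddof=1)
        out[name] = z.mean(axis=1)
    return pd.DataFrame(out)

def bh_fdr(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up correction across the test-level ANCOVAs."""
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha, method="fdr_bh")
    return reject, p_adjusted

# Example mapping (abbreviated, hypothetical column names):
# domains = {"memory": ["cvlt_trials_1_5_total", "bvmt_total"],
#            "attention": ["tmt_a_sec", "cpt_omissions"]}
```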

Hypothesis 1

Hypothesis 1 was evaluated using a series of mixed ANCOVAs contrasting the Memory Domain Score with each other Domain Score, using WMT Pass/Fail (dummy coded) as the between-groups variable and age and education as covariates. This approach contrasts Domain Score distributions and can determine whether differences in those distributions vary by group membership (WMT Pass/Fail). A significant interaction between WMT status and cognitive domain would indicate a greater difference between the Memory Domain Score and the contrasting Domain Score as a result of group membership (WMT failure). Presuming that the WMT-Fail group performed worse in the Memory Domain than in the contrasting domain while the WMT Pass group maintained consistent performance across domains, such an interaction would demonstrate greater sensitivity of WMT group membership to poor performance in the Memory Domain.
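The mixed ANCOVAs were run in SPSS. As one illustrative alternative (not the procedure reported here), a contrast with only two within-subject levels can be probed by testing the between-group effect on the within-person difference score, adjusted for the same covariates; the sketch below assumes hypothetical Domain Score columns.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def memory_contrast_test(data: pd.DataFrame, contrast_domain: str) -> pd.DataFrame:
    """Test whether the Memory-minus-contrast Domain Score gap differs by
    WMT group, adjusting for age and education. With only two within-subject
    levels this parallels the group-by-domain interaction of interest."""
    data = data.assign(domain_gap=data["memory_z"] - data[contrast_domain])
    model = smf.ols("domain_gap ~ C(wmt_group) + age + education_years",
                    data=data).fit()
    return sm.stats.anova_lm(model, typ=2)

# Example: memory versus visuospatial-construction (hypothetical column name)
# print(memory_contrast_test(domain_scores_df, "visuospatial_z"))
```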

Hypotheses 2 and 3

To directly test Hypotheses 2 and 3, χ2 analyses were performed to compare the proportion of individuals with and without psychiatric diagnoses or with and without a history of lifetime mTBI and deployment mTBI passing and failing the WMT.
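A short sketch of one of these contingency-table tests, using SciPy, is given below; the condition flag and group column are hypothetical, and whether a continuity correction was applied in the original analyses is not stated, so none is applied here.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def wmt_failure_by_condition(data: pd.DataFrame, condition: str):
    """Chi-square test of WMT pass/fail rates by a binary condition column
    (e.g., a hypothetical any_current_dx flag), plus the phi effect size."""
    table = pd.crosstab(data[condition], data["wmt_group"])
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    phi = np.sqrt(chi2 / table.to_numpy().sum())
    return chi2, p, phi
```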

Results

Table 1 presents demographic and diagnostic data for the total sample and each WMT group. Participants in the WMT-F group had significantly fewer years of education, were all men, scored significantly lower on the WTAR, and reported a significantly higher number of deployment-related mTBI. Table 2 presents means, standard deviations, and effect sizes for between-group comparisons of individual test variables. Significant between-group differences were observed on measures in all major cognitive domains except Sensation and Visuospatial-Construction. Table 3 presents means, standard deviations, and effect sizes for between-group comparisons of Domain Scores. Groups were significantly different in the domains of Attention, Executive Function, Memory, and Motor Output.

Table 3.

Test scores and group differences by domain Z-score

Domain  WMT Pass (n = 193)  WMT-Fail (n = 42)  η2
Sensation  0.05 (1.01)^a  −0.23 (0.95)  .01
Motor output  0.10 (0.66)^b  −0.42 (1.12)  .07*
Visuospatial-construction  0.04 (0.83)  −0.18 (0.81)  .01
Attention  0.14 (0.48)^c  −0.31 (0.48)^f  .11*
Executive  0.09 (0.49)^d  −0.37 (0.41)  .13*
Memory  0.13 (0.55)^e  −0.59 (0.81)^g  .18*

Notes: WMT = Word Memory Test. η2 = partial eta squared. *p < .05.

^a n = 191.
^b n = 185.
^c n = 184.
^d n = 187.
^e n = 188.
^f n = 38.
^g n = 41.

Hypothesis 1

Results of mixed ANCOVA analyses demonstrated a significant interaction between WMT status and Domain Score contrast for the Memory/Sensation analysis, F(1, 224) = 5.48, p = .02, η2 = .02; and the Memory/Visuospatial-Construction analysis, F(1, 224) = 17.01, p = .001, η2 = .07. In each case, participants in the WMT-F group performed significantly worse in the domain of Memory than the contrasting Domain Score. There was no significant interaction for the Memory/Attention, Memory/Executive Function, or Memory/Motor Output analyses. The interactions between WMT status and Domain Score contrast provide support for Hypothesis 1 and demonstrate that the relationship between cognitive performance and WMT failure differs significantly across cognitive domains.

Hypotheses 2 and 3

Chi-square analyses demonstrated that a significantly higher proportion of participants with a current psychiatric diagnosis failed the WMT than those without psychiatric diagnosis, χ2 = 6.56, p = .018, φ = .192. The same was true for participants with lifetime mTBI history, χ2 = 5.05, p = .031, φ = .168, and deployment mTBI history, χ2 = 6.62, p = .014, φ = .193. The results of the χ2 analyses provide support for Hypotheses 2 and 3, demonstrating that participants with a current psychiatric diagnosis, lifetime mTBI history, or deployment mTBI history were more likely to fail the WMT, even in the absence of symptom validity concerns.

Discussion

The current study addressed two specific aims: (i) determine the ability of the WMT to identify non-credible performance on testing across cognitive domains and (ii) evaluate differences in WMT performance between those with and without psychiatric diagnosis or mTBI history (lifetime and deployment) in a sample screened to ensure adequate validity of symptom report. Results demonstrated that a higher proportion of participants with a psychiatric diagnosis and a higher proportion of participants with mTBI history failed the WMT. Although the WMT has been previously described as insensitive to the presence of psychopathology or mTBI history (Carone, 2008), the current findings suggest that this may not be accurate for OEF/OIF/OND post-deployment veterans. The sample utilized in this study was well suited to evaluating this question because well-established SVTs were used to exclude individuals with invalid psychiatric presentation for reasons including over-reporting and inattention. Concerns regarding secondary gain were minimized by excluding data gathered in a clinical context, and participants were informed that the data and findings from this project would not be available for clinical or forensic purposes. Although individuals with current psychiatric diagnosis or mTBI history were more likely to fail the WMT, there was a wide range of WMT performance among those in the WMT-Fail group (Table 4). Visual inspection of the raw data revealed that at least one participant received a score of 82.5% on IR (the highest non-passing score possible) but passed DR and CNS. This wide range in performance strongly suggests the potential for a variety of contributing factors, including false-positive results on the WMT, stereotype threat, or misattribution, among others (Silver, 2012, 2015). Future research is needed to better understand the factors that may affect the relationship of clinical conditions such as PTSD and mTBI history with PVT performance.

Table 4.

WMT performance of participants in the WMT-Fail group with various clinical conditions

 Deployment mTBI (n = 15) Lifetime mTBI (n = 20) Psychiatric diagnosis (n = 18) 
Min Max M SD Min Max M SD Min Max M SD 
IR 60.0 97.5 82.0 9.3 60.0 97.5 82.3 8.6 60.0 97.5 81.0 9.3 
DR 50.0 92.5 78.8 12.6 50.0 95.0 80.1 11.7 50.0 92.5 78.1 11.9 
CNS 47.5 87.5 73.5 10.9 47.5 87.5 74.4 9.6 47.5 85.0 72.6 10.1 

Note: WMT = Word Memory Test; IR = Immediate Recall; DR = Delayed Recall; CNS = Consistency; mTBI = mild traumatic brain injury.

The current findings also demonstrate that test performance varies across cognitive domains as a function of WMT failure. When contrasting performance across domains, participants failing the WMT performed significantly worse in the memory domain than in the sensation and visuospatial-construction domains. There were no differences in performance between the memory/attention, memory/executive function, or memory/motor output domains related to WMT failure. Collectively, these results suggest that the WMT is most sensitive to performance on tests in the domains of memory, attention, executive function, and motor output. The between-group effect sizes were largest for memory, followed by executive function, then attention, and lastly motor output as can be seen in Table 3. Previous studies have demonstrated that WMT failure is associated with poorer performance on tests of attention, memory, and executive function (Green, 2005; Green, Montijo, & Brockhaus, 2011; Lange et al., 2013; Suhr et al., 2011); however, the current study was able to demonstrate how these between-group differences vary by cognitive domain. The current results suggest that performance on the WMT is unrelated to performance on tests of sensation or visuospatial-construction.

Limitations of the present study include a small WMT-Fail group, lack of true experimental design (common for validity test studies), and an absence of stand-alone PVTs in domains other than memory. Future studies with larger sample sizes and more comprehensive evaluations of performance validity could examine complex, multivariate relationships through the use of modeling techniques. Future studies should also investigate the effects of potential confounding variables on PVT and cognition including fatigue, sleepiness, and pain. Additionally, the small number of tests available within some cognitive domains (visuospatial-construction, motor output, and sensation) could have affected the reliability of the Domain Score outcome variables representing those domains. Further examination using a greater number of tests within each domain is certainly warranted. Although the current study employed several measures of performance and symptom validity, it is unlikely that these measures are perfectly sensitive measures of validity in this cohort. As a result, the potential exists for some individuals with poor symptom validity to have been included in the sample evaluating hypothesis 2, and some individuals with poor performance validity to have been included in the WMT Pass group. Future studies will benefit from including additional measures of symptom and performance validity covering a wide range of validity types. Although efforts were made to remove the potential of these data to be used for purposes of secondary gain, the “perception” of potential secondary gain was not evaluated. This perception of potential secondary gain could possibly be more important than the actual potential for secondary gain. Future studies would benefit from assessing this in the context of studying performance or symptom validity. Finally, because WMT Paired Associates, Free Recall, and Long Delayed Free Recall subtests were not administered it was not possible to conduct a GMIP analysis. However, given the relatively intact neurological status of the participants, this was likely not an issue because the GMIP was intended for use with individuals demonstrating significant impairments, such as advanced dementia or severe TBI.

Conclusion

The present findings support the applicability of the WMT as an indicator of performance validity, particularly in the domains of memory, executive function, and attention. Additional domain specific validity tests may be required to adequately evaluate performance validity in the domains of sensation and visuospatial-construction. The current findings demonstrate higher PVT failure rates by individuals with a psychiatric disorder or mTBI history, even when accounting for symptom validity. Further research is necessary to better understand the relationship between psychiatric conditions, TBI, and performance on the WMT.

Funding

This work was supported by resources of the W.G. “Bill” Hefner Veterans Affairs Medical Center; the Mid-Atlantic Mental Illness Research Education and Clinical Center; and the Department of Veterans Affairs Office of Academic Affiliations Advanced Fellowship Program in Mental Illness Research and Treatment.

Conflict of Interest

There are no conflicts of interest to disclose.

Acknowledgments

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, the Department of Defense, or the U.S. Government.

References

American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59.
Axelrod, B. N., & Schutte, C. (2010). Analysis of the dementia profile on the Medical Symptom Validity Test. The Clinical Neuropsychologist, 24, 873–881.
Bauer, L., O'Bryant, S. E., Lynch, J. K., McCaffrey, R. J., & Fisher, J. M. (2007). Examining the Test of Memory Malingering Trial 1 and Word Memory Test Immediate Recognition as screening tools for insufficient effort. Assessment, 14, 215–222.
Benedict, R. H. (1997). Brief Visuospatial Memory Test-Revised (BVMT-R). Lutz, FL: Psychological Assessment Resources, Inc.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B, 57, 289–300.
Boone, K. B. (2009). The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examinations. The Clinical Neuropsychologist, 23, 729–741.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Carone, D. A. (2008). Children with moderate/severe brain damage/dysfunction outperform adults with mild-to-no brain damage on the Medical Symptom Validity Test. Brain Injury, 22, 960–971.
Clark, A. L., Amick, M. M., Fortier, C., Milberg, W. P., & McGlinchey, R. E. (2014). Poor performance validity predicts clinical characteristics and cognitive test performance of OEF/OIF/OND veterans in a research setting. The Clinical Neuropsychologist, 28, 802–825.
Conners, C. K., & Staff, M. (2004). Conners' Continuous Performance Test II (CPT-II) for Windows: Technical and software manual. North Tonawanda, NY: MHS.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test (2nd ed., Adult Version): Manual. Bloomington, MN: NCS Pearson, Inc.
Doty, R. L. (1995). The Smell Identification Test administration manual (3rd ed.). Haddon Heights, NJ: Sensonics, Inc.
Egeland, J., Andersson, S., Sundseth, O. O., & Schanke, A. K. (2015). Types or modes of malingering? A confirmatory factor analysis of performance and symptom validity tests. Applied Neuropsychology: Adult, 22, 215–226.
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1996). Structured Clinical Interview for DSM-IV Axis I Disorders, Clinical Version (SCID-CV). Washington, DC: American Psychiatric Press, Inc.
Golden, C. J., & Freshwater, S. M. (2002). The Stroop Color and Word Test: A manual for clinical and experimental uses. Wood Dale, IL: Stoelting.
Green, P. (2005). Green's Word Memory Test for Windows user's manual. Edmonton, Canada: Green's Publishing Inc.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23, 741–750.
Green, P., Montijo, J., & Brockhaus, R. (2011). High specificity of the Word Memory Test and Medical Symptom Validity Test in groups with severe verbal memory impairment. Applied Neuropsychology, 18, 86–94.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M., III (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060.
Greiffenstein, M. F., Greve, K. W., Bianchini, K. J., & Baker, W. J. (2008). Test of Memory Malingering and Word Memory Test: A new comparison of failure concordance rates. Archives of Clinical Neuropsychology, 23, 801–807.
Greve, K. W. (2001). The WCST-64: A standardized short-form of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 15, 228–234.
Gronwall, D. M. (1977). Paced auditory serial-addition task: A measure of recovery from concussion. Perceptual and Motor Skills, 44, 367–373.
Heaton, R. (1981). Wisconsin Card Sorting Test. Odessa, FL: Psychological Assessment Resources.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Heyanka, D. J., Thaler, N. S., Linck, J. F., Pastorek, N. J., Miller, B., Romesser, J., et al. (2015). A factor analytic approach to the validation of the Word Memory Test and Test of Memory Malingering as measures of effort and not memory. Archives of Clinical Neuropsychology, 30, 369–376.
Lange, R. T., Iverson, G. L., Brickell, T. A., Staver, T., Pancholi, S., Bhagwat, A., et al. (2013). Clinical utility of the Conners' Continuous Performance Test-II to detect poor effort in U.S. military personnel following traumatic brain injury. Psychological Assessment, 25, 339–352.
Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. M. (2012). Influence of poor effort on neuropsychological test performance in U.S. military personnel following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466.
Lansbergen, M. M., Kenemans, J. L., & van Engeland, H. (2007). Stroop interference and attention-deficit/hyperactivity disorder: A review and meta-analysis. Neuropsychology, 21, 251–262.
Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 625–630.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). New York, NY: Oxford University Press.
Menon, D. K., Schwab, K., Wright, D. W., & Maas, A. I. (2010). Position statement: Definition of traumatic brain injury. Archives of Physical Medicine and Rehabilitation, 91, 1637–1640.
Meyers, J. E., & Meyers, K. R. (1995). Rey Complex Figure Test and Recognition Trial: Professional manual. Lutz, FL: Psychological Assessment Resources, Inc.
Miller, H. A. (2001). Miller Forensic Assessment of Symptoms Test: Professional manual. Lutz, FL: Psychological Assessment Resources, Inc.
Morey, L. C. (1991). Personality Assessment Inventory: Professional manual. Tampa, FL: Psychological Assessment Resources, Inc.
Nelson, N. W., Hoelzle, J. B., McGuire, K. A., Ferrier-Auerbach, A. G., Charlesworth, M. J., & Sponheim, S. R. (2010). Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Archives of Clinical Neuropsychology, 25, 713–723.
Oosterman, J. M., Vogels, R. L., van Harten, B., Gouw, A. A., Poggesi, A., Scheltens, P., et al. (2010). Assessing mental flexibility: Neuroanatomical and neuropsychological correlates of the Trail Making Test in elderly people. The Clinical Neuropsychologist, 24, 203–219.
Psychological Corporation. (2001). Wechsler Test of Adult Reading: Manual. San Antonio, TX: Harcourt Assessments, Inc.
Reitan, R. M., & Davison, L. A. (1974). Clinical neuropsychology: Current status and applications. Washington, DC: V. H. Winston & Sons.
Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan Neuropsychological Test Battery. Tucson, AZ: Neuropsychology Press.
Rohling, M. L., & Demakis, G. J. (2010). Bowden, Shores, & Mathias (2006): Failure to replicate or just failure to notice. Does effort still account for more variance in neuropsychological test scores than TBI severity? The Clinical Neuropsychologist, 24, 119–136.
Ruff, R. M., Light, R. H., Parker, S. B., & Levin, H. S. (1996). Benton Controlled Oral Word Association Test: Reliability and updated norms. Archives of Clinical Neuropsychology, 11, 329–338.
Russo, A. C. (2012). Symptom validity test performance and consistency of self-reported memory functioning of Operation Enduring Freedom/Operation Iraqi Freedom veterans with positive Veteran Health Administration comprehensive traumatic brain injury evaluations. Archives of Clinical Neuropsychology, 27, 840–848.
Sherer, M., Davis, L. C., Sander, A. M., Nick, T. G., Luo, C., Pastorek, N., et al. (2015). Factors associated with Word Memory Test performance in persons with medically documented traumatic brain injury. The Clinical Neuropsychologist, 29, 522–541.
Silver, J. M. (2012). Effort, exaggeration and malingering after concussion. Journal of Neurology, Neurosurgery, and Psychiatry, 83, 836–841.
Silver, J. M. (2015). Invalid symptom reporting and performance: What are we missing? NeuroRehabilitation, 36, 463–469.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.
Sollman, M. J., & Berry, D. T. (2011). Detection of inadequate effort on neuropsychological testing: A meta-analytic update and extension. Archives of Clinical Neuropsychology, 26, 774–789.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests (3rd ed.). New York, NY: Oxford University Press.
Suhr, J., Hammers, D., Dobbins-Buckland, K., Zimak, E., & Hughes, C. (2008). The relationship of malingering test failure to self-reported symptoms and neuropsychological findings in adults referred for ADHD evaluation. Archives of Clinical Neuropsychology, 23, 521–530.
Suhr, J. A., Sullivan, B. K., & Rodriguez, J. L. (2011). The relationship of noncredible performance to continuous performance test scores in adults referred for attention-deficit/hyperactivity disorder evaluation. Archives of Clinical Neuropsychology, 26(1), 1–7.
Sullivan, B. K., May, K. M., & Galbally, L. (2007). Symptom exaggeration by college adults in attention-deficit hyperactivity disorder and learning disorder assessments. Applied Neuropsychology, 14, 189–207.
Van Dyke, S. A., Millis, S. R., Axelrod, B. N., & Hanks, R. A. (2013). Assessing effort: Differentiating performance and symptom validity. The Clinical Neuropsychologist, 27, 1234–1246.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale-III: Administration and scoring manual. San Antonio, TX: The Psychological Corporation.
Whitney, K. A., Shepard, P. H., Williams, A. L., Davis, J. J., & Adams, K. M. (2009). The Medical Symptom Validity Test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: A preliminary study. Archives of Clinical Neuropsychology, 24, 145–152.
Wisdom, N. M., Pastorek, N. J., Miller, B. I., Booth, J. E., Romesser, J. M., Linck, J. F., et al. (2014). PTSD and cognitive functioning: Importance of including performance validity testing. The Clinical Neuropsychologist, 28, 128–145.
Young, J. C., Sawyer, R. J., Roper, B. L., & Baughman, B. C. (2012). Expansion and re-examination of Digit Span effort indices on the WAIS-IV. The Clinical Neuropsychologist, 26, 147–159.
Zakzanis, K. K., Gammada, E., & Jeffay, E. (2012). The predictive utility of neuropsychological symptom validity testing as it relates to psychological presentation. Applied Neuropsychology: Adult, 19, 98–107.