Abstract

Research has demonstrated the utility of performance validity tests (PVTs) as a method of determining adequate effort during a neuropsychological evaluation. Although some studies affirm that forced-choice PVTs measure effort rather than memory, doubts remain in the literature. The purpose of the current study was to evaluate the relationship between effort and memory variables in a mild traumatic brain injury (TBI) sample (n = 160) by separating memory and effort into distinct factors while statistically controlling for the shared covariance between the variables. A two-factor solution was extracted such that the five PVT variables loaded on Factor 1 and the four memory variables loaded on Factor 2. The pattern matrix, which controls for the covariance between variables, provided clear support for two highly distinct factors with minimal cross-loadings. Our findings support assertions that PVTs measure effort independent of memory in veterans with mild TBI.

Introduction

Performance validity tests (PVTs) have gained increased popularity as integral components of neuropsychological evaluations. Consensus groups (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009) have recommended PVTs in clinical, research, and forensic evaluations as a method of determining if impaired neuropsychological test scores are a result of brain injury, inadequate effort, exaggeration of cognitive complaints, or some combination of these factors (Flaro, Green, & Robertson, 2007). Of the available PVTs, the Test of Memory Malingering (TOMM; Tombaugh, 1997) and the Word Memory Test (WMT; Green, 2003) are two of the more well validated, researched, and clinically implemented measures.

The TOMM and the WMT were designed to be relatively impervious to central nervous system dysfunction (Green, Flaro, & Courtney, 2009), and the extant literature has reported that both are effective in identifying suboptimal effort with varying degrees of sensitivity and specificity (Flaro et al., 2007; Gervais, Rohling, Green, & Ford, 2004; Green, 2007; Green et al., 2009; Greve, Bianchini, & Doane, 2006; Rees, Tombaugh, Gansler, & Moczynski, 1998; Tombaugh, 1997). The clinical efficacy of PVTs in identifying poor effort not influenced by genuine cognitive impairment has been reported in a multitude of patient populations, including psychogenic non-epileptic seizures (Drane et al., 2006; Williamson, Drane, Stroup, Miller, & Holmes, 2003; Williamson et al., 2004), fibromyalgia (Gervais, Green, Russell, Pieschl, & Allen, 2000; Gervais, Russell, Green, Ferrari, & Peischl, 2001), orthopedic disability claims (Gervais et al., 2004), traumatic brain injury (TBI; Greve et al., 2006), and Attention-Deficit Hyperactivity Disorder (Sullivan, May, & Galbally, 2007). Given the frequency of litigation in mild TBI (mTBI) cases, a substantial portion of research has focused on PVT efficacy in this population.

The pathophysiology and injury characteristics of moderate and/or severe TBI (e.g., Glasgow Coma Scale [GCS]: 3–12; posttraumatic amnesia [PTA]: >24 h; loss of consciousness [LOC]: >30 min) suggest the presence of more substantial residual neuropsychological deficits than the injury characteristics of mTBI (e.g., GCS: 13–15; PTA: <24 h; LOC: ≤30 min). This expected dose–response relation between TBI severity and neuropsychological outcome was clearly demonstrated in the seminal work of Dikmen, Machamer, Winn, and Temkin (1995), who showed that cognitive recovery became progressively less complete as injury severity increased. One meta-analytic study (Schretlen & Shapiro, 2003) cited the effect of moderate–severe TBI on cognitive functioning (weighted mean Cohen's d = −0.74) to be more than three times the effect of mTBI (weighted mean Cohen's d = −0.24). This same study also noted that cognitive functioning typically returns to baseline within 1–3 months after mTBI, whereas marked deficits often remain in moderate–severe TBI patients more than 2 years post-injury.

Despite the clear difference in neurologic and cognitive severity of these TBI subsets, much of the PVT literature has found that patients who purport to have persisting impairments and/or seek compensation related to a history of claimed mTBI tend to perform more poorly on PVTs than those with moderate-to-severe TBI. Although patients with mTBI do not invariably score lower on PVTs than those with moderate–severe TBI, the finding that mTBI patients tend to perform more poorly has been replicated multiple times with varying failure rates. For example, Green and colleagues (2009) reported that 48.2% of 309 adults with mTBI (none with brain scan abnormality) failed the WMT, whereas only 22.7% of 163 patients with moderate–severe TBI (all with brain scan abnormality) failed the WMT. This same study noted that 42% of 90 mTBI patients failed the Medical Symptom Validity Test (MSVT), whereas only 16% of 51 moderate–severe TBI patients failed the MSVT. Flaro and colleagues (2007) reported that 40% of 577 mTBI patients in litigation failed the WMT, whereas 21% of 197 moderate–severe TBI patients in litigation failed the WMT. Additionally, Armistead-Jehle (2010) provided base rate data on the PVT failure rate of veterans with mTBI who screened positive on the Veterans Health Administration TBI screen. The study found that 57.8% of the 45 included veterans failed the MSVT with at least one score below the established cutoff on the Immediate Recognition (IR), Delayed Recognition (DR), or Consistency (CNS) trials. Whitney, Shepard, Williams, Davis, and Adams (2009) found that 17% of Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans with mTBI failed the MSVT, with significantly worse performances on the “easy” subtests when compared with those who passed the MSVT. Overall, these studies support the assertion that the IR, DR, and CNS trials of the WMT and MSVT (i.e., “easy” subtests) “are measures of effort rather than ability” (Armistead-Jehle, 2010, p. 56).

Despite what appears to be a solid foundation of literature validating the use of the WMT/MSVT in mTBI without concern that a mild head injury will cause genuine performance decrements, some doubt remains in the literature. Bowden, Shores, and Mathias (2006) noted that “the inference that the WMT is measuring something other than, say, memory or cognition, remains to be demonstrated” (p. 860). These authors focused on interaction effects between TBI severity characteristics (i.e., GCS, PTA) and WMT IR performance for 100 consecutive patients who presented for a neuropsychological evaluation secondary to compensation litigation for TBI. The study failed to produce significant interaction effects between these variables, leading the authors to conclude that effort does not appear to interact with injury severity to suppress cognition. Bowden and colleagues (2006) concluded that the WMT cannot be viewed as a unique test of effort and stated that the WMT-IR score, “like any number-correct score derived from a forced-choice recognition memory test, is probably a measure of memory” (p. 869).

Willis, Farrer, and Bigler (2011) analyzed the performance of three veterans on various PVTs and interpreted the resulting data as representing false positives on the WMT IR, DR, and CNS. All three veterans passed the TOMM and all reported embedded validity measures while failing the WMT. Willis and colleagues (2011) surmised that these performance patterns “challenge the assumption that only patients with severe and global cognitive deterioration will fail the IR and DR subtests of the WMT” (p. 1429) and that “some patients with what would be considered a relatively mild brain injury in global terms will demonstrate deficits that can lead to WMT failure” (p. 1430). Similarly, a neuroimaging study (Allen, Bigler, Larsen, Goodrich-Hunsaker, & Hopkins, 2007) suggested that the WMT activates a substantial set of cortical regions and neuronal networks traditionally associated with cognitive effort. The authors posited that WMT performance requires a large degree of cognitive effort and that performance on this task is therefore vulnerable to any disruption along this circuitry. The authors specifically noted “that the greatest likelihood of damage in TBI, including mild TBI, occurs within the frontotemporal and limbic regions (particularly the cingulate and hippocampal-amygdala complex); these are the very regions involved in cognitive effort shown in the fMRI study” (pp. 1427–1428).

In a 2010 study, Rohling and Demakis challenged the conclusion of Bowden and colleagues (2006) that the WMT is a memory test. The authors re-analyzed the data used by Bowden and colleagues (2006) and by Green, Rohling, Lees-Haley, and Allen (2001) in an attempt to elucidate the basis for the contradictory results reported by these studies. Rohling and Demakis (2010) reported non-significant correlations between the WMT IR and injury characteristics (i.e., PTA, GCS, LOC, CT/MRI abnormalities) as well as non-significant effect sizes and variance accounted for between the WMT IR and those same injury characteristics. Rohling and Demakis (2010) concluded that while TBI injury characteristics did not influence the WMT IR score, these same characteristics did influence traditional cognitive measures. The authors argued that this pattern of findings suggests that the WMT is a measure of the separate construct of effort rather than of cognitive ability.

The current study aimed to add further support to the notion that PVTs measure a unique construct of effort independent of memory. This study utilized exploratory factor analysis (EFA) to evaluate the factor structure and factor loadings of PVTs and traditional memory measures. To the authors' knowledge, this methodology has not been implemented in the study of PVTs. As such, this study is predominantly exploratory in nature. Specifically, we expected the memory and effort variables to share variance with each other and with both factors, variance that can be attributed to a confounding variable (i.e., poor effort will lower both effort and memory scores). However, we hypothesized that when this shared variance is controlled for, the variables will load onto distinct factors of interest, providing evidence that PVTs and memory tests measure separate constructs.

Method

Participants

Participants were selected from a consecutive series of 169 cases presenting with mTBI from the Functional Outcomes Research Team (FORT) study. Participants from all sites were referred as part of their ongoing care within the VA and not in the context of a disability exam or rating process. Participating sites included VA hospitals in the Midwestern, Southern, and Western United States. Exclusionary criteria for this study included missing data from the WMT (IR, DR, or CNS subtests), TOMM (Trials 1 and 2), or CVLT-II (Total, Long Delay Free Recall, Long Delay Recognition, or Long Delay Forced Choice Recognition). Likewise, patients with neurologic conditions other than a history of mTBI were not included in the study. Nine participants were excluded based on these criteria.

Participants in this study included 160 veterans (93.8% male; 63% Caucasian, 20% Hispanic, 13% African American). The average age and education of the sample were 31.7 years (SD = 7.4, range: 21–56) and 13.0 years (SD = 1.6, range: 9–18), respectively. Veterans were referred for neuropsychological evaluation after completion of a comprehensive TBI evaluation following at least one mTBI (mean injuries = 2.7, SD = 2.9, range: 1–21) sustained while on active duty during an OEF/OIF/OND deployment. An mTBI was defined based on criteria from the Centers for Disease Control and Prevention (2003): an occurrence of injury to the head with at least one of the following: (1) any period of confusion, disorientation, or impaired consciousness; (2) any period of memory dysfunction around the time of injury; (3) LOC lasting <30 min; or (4) neurological or neuropsychological dysfunction. There were 115 injuries from blasts, 43 from falls, 34 from vehicular accidents, 22 from assaults, 10 from sports, and 9 from bullets. Of note, the total injuries outnumber the total participants because some participants experienced multiple head injuries. Table 1 contains descriptive statistics from the MMPI-2 for depression, anxiety, and PTSD. This study was approved by the respective Institutional Review Boards and VA Research and Design Committees at each VA site. The evaluations were not conducted for Compensation and Pension purposes.

Table 1.

Descriptive statistics for MMPI-2 scales

Variables Minimum Maximum Mean SD 
MMPI-2 Scale 2 34 102 76.41 13.47 
MMPI-2 Scale 7 39 115 76.26 14.27 
MMPI-2 PK Scale 38 112 81.33 17.65 
MMPI-2 PS Scale 27 111 82.70 17.42 

Notes: MMPI-2 = Minnesota Multiphasic Personality Inventory-Second Edition; Scale 2 = Depression scale; Scale 7 = Psychasthenia-anxiety scale; PS, PK = PTSD scale. Variables are represented in T-scores.

Measures

The WMT (Green, 2003) is a computer-administered word list task designed to measure effort. In this study, the IR, DR, and CNS trials were utilized. The TOMM (Tombaugh, 1996) is an effort measure involving the presentation of pictures (line drawings) that was administered via booklets according to the standardized procedure. In this study, both Trials 1 and 2 were included. The TOMM Retention Trial was not included in the analysis because recent literature suggests that TOMM Trial 1 has adequate diagnostic accuracy in identifying poor effort and that Trial 2 substantially augments the diagnostic accuracy of Trial 1 (Hilsabeck, Gordon, Hietpas-Wilson, & Zartman, 2011). Immediate and delayed verbal recall memory and delayed verbal recognition memory were examined using the California Verbal Learning Test-Second Edition (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) Trials 1–5, Long Delay Free Recall, and Long Delay Recognition raw scores. Additionally, the CVLT-II Forced Choice trial, an embedded validity measure derived from the CVLT-II, was utilized in this study. Table 2 contains means, standard deviations, and ranges of the WMT, TOMM, and CVLT-II variables. Table 2 also contains the percentage of the sample that scored below the cutoff score on the WMT, TOMM, and CVLT-II Forced Choice.

Table 2.

Descriptive statistics for WMT, TOMM, and CVLT-II variables

Variables Minimum Mean SD % below cutoff 
WMT IR 30 81.04 17.72 45.0 
WMT DR 20 79.55 19.03 45.0 
WMT CNS 35 78.52 16.39 55.0 
TOMM Trial 1 16 43.54 7.30 — 
TOMM Trial 2 18 47.26 5.94 16.3 
CVLT-II Total 24 48.55 10.83 — 
CVLT-II LDFR 9.89 4.10 — 
CVLT-II Recognition 13.39 2.87 — 
CVLT-II FC 15.26 1.61 16.9 

Notes: WMT = Word Memory Test; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency; TOMM = Test of Memory Malingering; CVLT-II = California Verbal Learning Test-Second Edition; Total = Trials 1–5; LDFR = Long Delay Free Recall; FC = Forced Choice.

WMT variables are percentage correct; TOMM and CVLT variables are raw scores.

Data Analysis

The current study used EFA. EFA is advantageous as it allows for a systematic investigation of shared variance among variables of interest while extracting empirically defined factors that ideally converge on theoretical constructs of interest (Henson & Roberts, 2006). Another advantage of EFA is that it provides opportunities to simultaneously examine unique and shared covariance between the measured and latent variables, which can be useful in determining the degree to which a measured variable actually contributes to the factor and how much of the variance can be attributed to other measures included in the model (Thompson, 2004). This is a method of statistical control relevant to the current data. A principal factor axis (PFA) extraction was selected as it focuses on the covariance of theoretically underlying constructs, whereas the more widely used principal components analysis provides an empirical solution designed to extract maximum variance with the minimum number of factors (Thompson, 2004). The final factor solution was determined by inspection of the scree plot and by ensuring that loadings approached simple structure (i.e., variables had high loadings on only one factor). The extracted factors underwent an oblique Promax rotation to allow factors to correlate. This was necessary to obtain the shared and unique communalities between measured variables and factors and is also theoretically preferred when factors are expected to covary (Tabachnick & Fidell, 2007).
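To make the analytic pipeline concrete, the sketch below runs the same sequence of steps described above (principal-axis extraction of two factors followed by an oblique Promax rotation) using Python's open-source factor_analyzer package. This is only an illustration under assumed inputs: the package choice, the file name, and the column labels are hypothetical and were not necessarily those used for the original analysis.

```python
# Minimal sketch of the described EFA: principal-axis extraction with a
# Promax (oblique) rotation so that the effort and memory factors may correlate.
# Assumes the open-source factor_analyzer package; the data file and column
# names below are hypothetical placeholders, not the study's actual files.
import pandas as pd
from factor_analyzer import FactorAnalyzer

cols = ["WMT_IR", "WMT_DR", "WMT_CNS", "TOMM_T1", "TOMM_T2",
        "CVLT_Total", "CVLT_LDFR", "CVLT_Recog", "CVLT_FC"]
data = pd.read_csv("fort_scores.csv")[cols]  # hypothetical data file

fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(data)

# Eigenvalues of the observed correlation matrix (for scree/Kaiser inspection)
print(fa.get_eigenvalues()[0])
# Pattern loadings: each variable's unique contribution to each factor
print(fa.loadings_)
# Structure loadings (available for the Promax rotation): simple
# variable-factor correlations that do not control for the other variables
print(fa.structure_)
```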

Although orthogonal rotations provide a single pattern/structure matrix, oblique rotations extract two types of loadings: pattern and structure coefficients. Pattern coefficients are akin to regression beta weights in that they are standardized coefficients that assess the unique contribution of each independent variable (IV) to the dependent variable (DV) while controlling for the covariance of all other IVs with the DV. In EFA, the DV is the latent factor that is predicted by the values of all the measured variables weighted by their pattern coefficients (Tabachnick & Fidell, 2007). As with multiple regression, the inclusion and exclusion of other IVs directly affects pattern loadings, such that loading values change according to the contribution that other variables bring to the model. In other words, each loading is considered independently while statistically “controlling” for the other loadings. This is distinct from structure coefficients, which reflect the shared communalities between each variable and latent factor, akin to the simple correlation coefficient between two variables.

To frame these concepts in more familiar terms, it may help the reader to compare the individual correlation coefficients between each IV and the DV with the beta coefficients obtained when all IVs are entered simultaneously into a multiple regression. Researchers who desire to control for a covariate in multiple regression enter the covariate as a predictor (often earlier in a hierarchical sequence), which then removes the variance that the covariate and the IV of interest share in predicting the criterion. In our study, the pattern loadings represent the covariance that each measured variable (i.e., IV) shares with the underlying factor (i.e., DV) while “controlling” for the other variables. In contrast, the structure loadings represent the individual pairwise correlations between each subtest and factor (Courville & Thompson, 2001). By examining both pattern and structure matrices, we are able to determine the extent to which the expected correlations between PVT and memory measures are due to shared construct variance and how much of this covariance is an artifact of the data (such that low effort = low performance across measures).
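The toy simulation below, built on assumed data rather than the study data, illustrates why these two views can diverge: both predictors correlate with the criterion because they share a common “low effort” influence, yet only one of them contributes uniquely once the other is held constant, paralleling the contrast between structure and pattern coefficients.

```python
# Sketch: pairwise correlations (structure-like) versus regression weights
# (pattern-like) when two predictors share a confounding influence.
# All data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
effort = rng.normal(size=n)        # shared confounding influence
x1 = effort + rng.normal(size=n)   # e.g., a PVT score
x2 = effort + rng.normal(size=n)   # e.g., a memory score
y = x1 + rng.normal(size=n)        # criterion driven only by x1

# Structure-like view: simple correlations with the criterion (both nonzero)
print(np.corrcoef(x1, y)[0, 1], np.corrcoef(x2, y)[0, 1])

# Pattern-like view: standardized weights with both predictors entered at once
X = np.column_stack([x1, x2])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print(betas)  # weight for x1 stays large; weight for x2 drops toward zero
```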

Results

A correlation matrix (Table 3) demonstrated significant correlations (all at p < .001) between the CVLT-II and TOMM (ranging from .40 to .68) and the CVLT-II and the WMT (ranging from .43 to .61). Notably stronger correlations, ranging from .51 to .75, were found between the WMT and the TOMM. A two-factor solution accounting for 78.16% of the variance was extracted through a PFA with an oblique rotation. The five PVT variables loaded on Factor 1 (Effort Factor; Eigenvalue = 5.89; 65.44% of the variance) and the four CVLT-II variables loaded on Factor 2 (Memory Factor; Eigenvalue = 1.1; 12.71% of the variance). A third factor had an eigenvalue below the Kaiser cutoff (Eigenvalue = 0.80). In addition, the two identified factors demonstrated simple structure and fit our hypothesis that memory and effort factors would emerge. Therefore, we opted to retain these two factors for further analysis. On the structure matrix, which did not control for any putative confounding variables (e.g., poor effort), variables from the Effort Factor had moderate cross-loadings onto the Memory Factor (.57 to .68). Conversely, variables from the Memory Factor had moderate cross-loadings onto the Effort Factor (.47 to .64). The Effort and Memory Factors correlated moderately (r = .67). See Table 4 for a summary of the EFA structure matrix following Promax rotation.

Table 3.

Correlation matrix of WMT, TOMM, and CVLT-II variables

 
Variables 1 2 3 4 5 6 7 8 9 
1. WMT IR – .92 .89 .74 .60 .43 .52 .57 .52 
2. WMT DR  – .88 .75 .64 .44 .55 .61 .55 
3. WMT CNS   – .66 .51 .44 .55 .56 .47 
4. TOMM Trial 1    – .83 .47 .55 .61 .64 
5. TOMM Trial 2     – .40 .47 .53 .68 
6. CVLT-II Total      – .78 .66 .52 
7. CVLT-II LDFR       – .78 .53 
8. CVLT-II Recg        – .70 
9. CVLT-II FC         – 
 

Notes: WMT = Word Memory Test; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency; TOMM = Test of Memory Malingering; CVLT-II = California Verbal Learning Test-Second Edition; Total = Trials 1–5; LDFR = Long Delay Free Recall; Recg = Recognition; FC = Forced Choice.

All correlation coefficients are significant at the p < .001 level.
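As a rough check on the factor-retention decision, the sketch below rebuilds the correlation matrix from the coefficients reported in Table 3 and inspects its eigenvalues against the Kaiser criterion. Note that the eigenvalues reported in the Results come from the factor extraction itself, so this unrotated check is only expected to approximate them.

```python
# Sketch: eigenvalues of the Table 3 correlation matrix and the proportion of
# total variance each accounts for (eigenvalue / number of variables).
import numpy as np

upper = np.array([
    [1.00, .92, .89, .74, .60, .43, .52, .57, .52],
    [0.00, 1.00, .88, .75, .64, .44, .55, .61, .55],
    [0.00, 0.00, 1.00, .66, .51, .44, .55, .56, .47],
    [0.00, 0.00, 0.00, 1.00, .83, .47, .55, .61, .64],
    [0.00, 0.00, 0.00, 0.00, 1.00, .40, .47, .53, .68],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00, .78, .66, .52],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00, .78, .53],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00, .70],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])
R = np.triu(upper) + np.triu(upper, 1).T   # fill in the symmetric lower triangle

eigvals = np.linalg.eigvalsh(R)[::-1]      # largest first
print(eigvals.round(2))                    # compare against the Kaiser cutoff of 1
print((eigvals / R.shape[0]).round(3))     # proportion of total variance
print(int((eigvals > 1).sum()), "eigenvalue(s) exceed 1")
```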

Table 4.

Exploratory factor analysis structure matrix

 Effort factor Memory factor 
WMT 
 IR .95 .58 
 DR .96 .62 
 CNS .86 .57 
TOMM 
 Trial 1 .83 .68 
 Trial 2 .71 .61 
CVLT-II 
 Total .47 .81 
 LDFR .58 .87 
 Recognition .64 .86 
 Forced Choice .62 .71 

Notes: WMT = Word Memory Test; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency; TOMM = Test of Memory Malingering; CVLT-II = California Verbal Learning Test-Second Edition; Total = Trials 1–5; LDFR = Long Delay Free Recall. Bold font indicates the variable included in the Factor.

The pattern matrix provided clearer support for two highly distinct factors with minimal cross-loadings. The pattern matrix (Table 5) demonstrated that the five PVT variables and the four memory variables loaded distinctly onto their respective factors without the substantial cross-loadings noted in the structure matrix. Specifically, the Effort Factor loadings ranged from .54 to 1.00 with minimal cross-loadings from the Memory Factor (−.01 to .25). Conversely, the Memory Factor loadings ranged from .54 to .89 with minimal cross-loadings from the Effort Factor (−.01 to .27).

Table 5.

Exploratory factor analysis pattern matrix

 Effort factor Memory factor 
WMT 
 IR 1.00 −.10 
 DR .99 −.05 
 CNS .87 −.01 
TOMM 
 Trial 1 .68 .23 
 Trial 2 .54 .25 
CVLT-II 
 Total −.12 .89 
 LDFR −.01 .87 
 Recognition .13 .77 
 Forced Choice .27 .54 

Notes: WMT = Word Memory Test; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency; TOMM = Test of Memory Malingering; CVLT-II = California Verbal Learning Test-Second Edition; Total = Trials 1–5; LDFR = Long Delay Free Recall. Bold font indicates the variable included in the Factor.

Discussion

The purpose of this study was to evaluate the relationship between effort and memory variables in an mTBI sample. Although there have been attempts to separate memory and effort as distinct constructs, the inherently confounding nature of effort creates an unavoidable relationship between performance on effort tests and scores on standardized memory tests. To address this limitation, this paper utilized a factor analytic approach to systematically separate the effort variables (i.e., WMT, TOMM), the memory variables (i.e., CVLT-II Total, LDFR, Long Delay Recognition), and a validity measure embedded within a memory measure (i.e., CVLT-II Forced Choice) into distinct factors once the shared covariance between effort and memory scores was statistically controlled. This was accomplished by examining both pattern and structure matrices of the EFA, which allows for a side-by-side comparison of the unique and shared communalities between test scores and factors. If measures actually converge on a theoretical construct, they should uniquely load onto a factor after the covariance between the other measures and that factor is controlled for, regardless of the actual covariance that each measure has with each factor.

The distinction between pattern and structure coefficients is not widely made in the neuropsychological literature but warrants consideration in situations where shared covariance may be present among constructs, as was the case here, where other variables (e.g., actual poor effort) might affect both effort and memory test scores. The pattern matrix in the present study represents the unique contribution that each variable makes to each extracted factor, and in this case it reveals a striking differentiation in the subtest loadings. When all pairwise correlations between test scores (i.e., predictors) were controlled for, the effort variables of the TOMM and WMT loaded together while the CVLT-II formed its own factor. Conversely, when the variables were allowed to simply correlate with the factors, as seen in the structure matrix, test scores had moderate cross-loadings onto both factors. This distribution of loadings supports the hypothesis that effort and memory represent two distinct constructs in the neuropsychological evaluation, but this distinction is often obfuscated by the individual pairwise correlations that are typically reported in lieu of pattern weights.

Overall, results from the EFA supported this study's hypothesis that effort and memory are indeed unique constructs in mTBI. The EFA also found that the CVLT-II Forced Choice loaded primarily on the memory factor, although it is notable that the FC measure also had some cross-loading between factors. These findings were not unexpected: the CVLT-II FC subtest is an embedded validity measure derived from an actual memory measure and shares method variance with the rest of the CVLT-II, so some degree of cross-loading is to be expected.

Broadly speaking, our findings supported assertions by multiple authors (Armistead-Jehle, 2010; Flaro et al., 2007; Green et al., 2001, 2009; Rohling & Demakis, 2010) that PVTs measure effort independent of memory in an mTBI sample. The current study's findings contrast with suppositions by Bowden and colleagues (2006) that (1) “the inference that the WMT is measuring something other than, say, memory or cognition, remains to be demonstrated” (p. 860) and (2) “At this time we cannot say what the WMT test measures although a theoretically conservative approach would interpret the evidence regarding common variance as indicating that the WMT-IR score, like any number-correct score derived from a forced-choice recognition memory test, is probably a measure of memory” (p. 869). By interpreting both the structure and pattern matrices, this paper accounts for the shared variance that Bowden and colleagues identified as a substantial concern in the previous PVT literature and provides sound empirical evidence against their position.

Neuropsychologists continue to be referred cases of memory loss secondary to reported mTBI, some of which involve substantial secondary/monetary gain. The need for PVT administration in these scenarios is clear, and it is important that clinicians have empirical backing for the actual neuropsychological construct measured by PVTs. This is especially necessary given the high rate of PVT failure in patients with mTBI (Armistead-Jehle, 2010; Flaro et al., 2007; Green et al., 2001, 2009). The results from this study of a sample of OEF/OIF/OND veterans with mTBI support the conclusion that PVTs are highly distinct from memory tests and suggest that integrating such measures as a standard procedure in the Polytrauma System's TBI evaluation may be warranted. Demonstrating that memory and effort are indeed unique constructs allows providers to have more confidence in their findings when evaluating those reporting memory impairment.

Limitations of this study should be discussed. Though the data were collected as part of a prospective study of veterans consecutively referred for neuropsychological assessment, the participants represented a convenience sample in that all had been referred for clinical evaluations. The WMT and the TOMM are appealing effort variables to include as they cover both verbal and non-verbal modalities; unfortunately, a non-verbal complement to the CVLT-II was not available. The CVLT-II was also the only measure contributing to the memory factor, so strictly speaking it can be argued that an effort factor and a CVLT-II factor, rather than a memory factor, were extracted in this study. It is also unsurprising that the CVLT-II variables all loaded together given their shared method variance. However, the CVLT-II's longstanding use as a measure of verbal memory and learning is well established, and its scores would be expected to generalize to the memory construct itself. Future studies should replicate this model using other effort variables and other neuropsychological constructs. As pattern coefficients can change when the overall model is altered, such replication would provide compelling support that effort scores capture a unique effort construct that is distinct from, if interrelated with, other neuropsychological constructs.

Conflict of Interest

None declared.

References

Allen, M. D., Bigler, E. D., Larsen, J., Goodrich-Hunsaker, N. J., & Hopkins, R. O. (2007). Functional neuroimaging evidence for high cognitive effort on the Word Memory Test in the absence of external incentives. Brain Injury, 21, 1425–1428.

Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. Veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59.

Bowden, S. C., Shores, E. A., & Mathias, J. L. (2006). Does effort suppress cognition after traumatic brain injury? A re-examination of the evidence for the Word Memory Test. The Clinical Neuropsychologist, 20, 858–872.

Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy and Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.

Courville, T., & Thompson, B. (2001). Use of structure coefficients in published multiple regression articles: β is not enough. Educational and Psychological Measurement, 61(2), 229–248.

Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test–Second Edition. San Antonio, TX: Psychological Corporation.

Dikmen, S. S., Machamer, J. E., Winn, H. R., & Temkin, N. R. (1995). Neuropsychological outcome at 1-year post head injury. Neuropsychology, 9, 80–90.

Drane, D. L., Williamson, D. J., Stroup, E. S., Holmes, M. D., Jung, M., Koerner, E., et al. (2006). Cognitive impairment is not equal in patient with epileptic and psychogenic nonepileptic seizures. Epilepsia, 47, 1879–1886.

Flaro, L., Green, P., & Robertson, E. (2007). Word Memory Test failure 23 times higher in mild brain injury than in parents seeking custody: The power of external incentives. Brain Injury, 21, 373–383.

Gervais, R., Green, P., Russell, A. S., Pieschl, S., & Allen, L. M., III. (2000). Failure of symptom validity tests associated with disability incentives in fibromyalgia patients. Archives of Clinical Neuropsychology, 15, 841–842.

Gervais, R. O., Rohling, M. L., Green, P., & Ford, W. (2004). A comparison of WMT, CARB, and TOMM failure rates in non-head injury disability claimants. Archives of Clinical Neuropsychology, 19, 475–487.

Gervais, R. O., Russell, A. S., Green, P., Ferrari, R., & Peischl, S. D. (2001). Effort testing in patients with fibromyalgia and disability incentives. Journal of Rheumatology, 28, 1892–1899.

Green, P. (2003). Manual for the Word Memory Test for Windows. Edmonton: Green's Publishing.

Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18, 43–68.

Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23, 741–750.

Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claims. Brain Injury, 15, 1045–1060.

Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the Test of Memory Malingering in traumatic brain injury: Results of a known-group analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176–1190.

Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.

Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393–416.

Hilsabeck, R. C., Gordon, S. N., Hietpas-Wilson, T., & Zartman, A. L. (2011). Use of Trial 1 of the Test of Memory Malingering (TOMM) as a screening measure of effort: Suggested discontinuation rules. The Clinical Neuropsychologist, 25, 1228–1238.

Rees, L. M., Tombaugh, T. N., Gansler, D. A., & Moczynski, N. P. (1998). Five validation experiments of the Test of Memory Malingering (TOMM). Psychological Assessment, 10, 10–20.

Rohling, M. L., & Demakis, G. J. (2010). Bowden, Shores, & Mathias (2006): Failure to replicate or just failure to notice. Does effort still account for more variance in neuropsychological test scores than TBI severity? The Clinical Neuropsychologist, 24, 119–136.

Schretlen, D. J., & Shapiro, A. M. (2003). A quantitative review of the effects of traumatic brain injury on cognitive functioning. International Review of Psychiatry, 15, 341–349.

Sullivan, B. K., May, K., & Galbally, L. (2007). Symptom exaggeration by college adults in attention-deficit hyperactivity disorder and learning disorder assessment. Applied Neuropsychology, 14, 189–207.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Boston, MA: Pearson Education.

Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association.

Tombaugh, T. N. (1996). Test of Memory Malingering. Toronto, ON: Multi-Health Systems.

Tombaugh, T. N. (1997). The Test of Memory Malingering (TOMM): Normative data from cognitively impaired individuals. Psychological Assessment, 9, 260–268.

Whitney, K. A., Shepard, P. H., Williams, A. L., Davis, J. J., & Adams, K. M. (2009). The Medical Symptom Validity Test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: A preliminary study. Archives of Clinical Neuropsychology, 24, 145–152.

Williamson, D. J., Drane, D. L., Stroup, E. S., Miller, J. W., & Holmes, M. D. (2003). Most patients with psychogenic nonepileptic seizures do not exert valid effort on neuropsychological testing. Epilepsia, 44(Suppl. 9), 13.

Williamson, D. J., Drane, D. L., Stroup, E. S., Miller, J. W., Holmes, M. D., & Wilensky, A. J. (2004). Detecting cognitive differences between patients with epilepsy and patients with psychogenic seizures: Effort matters. Epilepsia, 45(Suppl. 7), 179.

Willis, P. F., Farrer, T. J., & Bigler, E. D. (2011). Are effort measures sensitive to cognitive impairment? Military Medicine, 176, 1426–1431.