Abstract

Performance validity tests (PVTs) have been shown to relate to neuropsychological performance, but no studies have looked at the ecological validity of these measures. Data from 131 veterans with a history of mild traumatic brain injury from a multicenter Veterans Administration consortium were examined to determine the relation between scores on a self-report version of the Mayo-Portland Adaptability Inventory Participation Index, a measure of community participation, and the Word Memory Test, a PVT. A restricted regression model, including education, age, history of loss of consciousness, cognitive measures, and a measure of symptom validity test performance, was not significantly associated with self-reported community reintegration. Adding PVT results to the restricted model, however, did significantly improve the prediction of community reintegration as PVT failure was associated with lower self-reported community participation. The results of this study indicate that PVTs may also serve as an indicator of patients' functioning in the community.

Introduction

Ecological validity has been described as a transformative construct in the field of neuropsychology, ushering in an era of heightened concern for the functional implications of neuropsychological test results as the field evolves from its historical focus on detecting and localizing neuropathology (Spooner & Pachana, 2006). Despite the apparent importance of this construct, Chaytor and Schmitter-Edgecombe (2003) found the relation between neuropsychological tests and everyday functioning to be modest at best, with the strength of the relation being somewhat dependent on the characteristics of the neuropsychological and outcome measures and the population being studied. One conclusion drawn by the authors of this review was that future research should explore the possibility that other clinical factors related to neuropsychological test performance (i.e., injury severity, IQ, population) influence the relation between neuropsychological tests and measures of everyday functioning. Performance validity is one such factor: it has a well-documented relation to neuropsychological test performance and could also influence measures of everyday functioning.

In recent years, there has been a growing consensus regarding the importance of routinely employing performance validity tests (PVTs; Larrabee, 2012) to detect response bias during neuropsychological evaluations. Specifically, a position paper by the National Academy of Neuropsychology (NAN) states that response bias must be assessed in all cases where financial incentives are involved or when the provider suspects the patient of invalid responding (Bush et al., 2005). A similar paper from the American Academy of Clinical Neuropsychology takes an even stronger position, calling for response bias to be assessed in all neuropsychological evaluations (Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). These recommendations are certainly appropriate, given that the PVT failure rate may be as high as 40% in medicolegal settings where secondary gain is present (Larrabee, 2003; Mittenberg, Patton, Canyock, & Condit, 2002), which is consistent with findings in veterans and military service members (Armistead-Jehle, 2010; Armistead-Jehle & Buican, 2012). Further, the effect of PVT failure on cognitive performance is well established and has been shown to be equal to or greater than the effect of many psychiatric and medical conditions on cognition (Iverson, 2005). This association between effort and poor performance on testing has also been noted in military samples (Lange, Pancholi, Bhagwat, Anderson-Barnes, & French, 2012). Despite the empirical evidence and practice guidelines supporting their use, PVTs are inconsistently used by neuropsychologists. In their survey of practicing psychologists, Rabin, Barr, and Burton (2005) found that no PVTs were ranked among the top 40 most commonly used neuropsychological measures. A more recent survey of NAN members found that only half of respondents reported using effort measures often or always (Sharland & Gfeller, 2007).

The use of PVTs increases the potential for conflict between patient and provider: failed PVTs may render test results uninterpretable, and providers must then inform the patient that they did not appear to fully engage in the testing session (Carone, Iverson, & Bush, 2010), which may partially explain the reticence of some practicing neuropsychologists to routinely use these measures (Sharland & Gfeller, 2007). Indeed, some Veterans Administration (VA) providers report being reprimanded for including measures of symptom validity (Poyner, 2010). Clinicians may also struggle with understanding the clinical significance of PVT failure, as they cannot know the specific reason for a given patient's failure. PVT failure should not be interpreted as a moral failing, however. Indeed, Boone (2007) illustrated that deception is commonly used by many animals and concluded that it was counterproductive to equate deception with moral shortcomings. Nevertheless, PVT performance does have very important implications when determining the interpretability of neuropsychological data.

Although it has been well established that failure of PVTs is related to neuropsychological performance, there is a distinct possibility that performance on PVTs may reveal other clinically relevant information about a patient. The impressive effect sizes associated with poor effort (Vickery, Berry, Inman, Harris, & Orey, 2001) lend further credence to the possibility that these measures may speak to issues beyond symptom validity, such as how well patients function in the community. The authors of this manuscript, however, are unaware of any attempt to determine whether PVT failure is, in turn, related to real-world functioning, or more precisely, whether PVTs have ecological validity. Because these measures have been used simply to determine whether other neuropsychological test results can be reliably interpreted, the ecological validity of PVTs, that is, whether these tests can provide useful information beyond the testing environment, has never been examined. Although purely speculative given the apparent lack of research on this topic, one reasonable hypothesis regarding the possible relation between PVT performance and real-world functioning is that attempts to secure primary or secondary gain by intentionally underperforming on objective tests may represent a learned behavior that has already been reinforced outside the testing lab (e.g., securing attention or assistance from others by feigning or exaggerating impairment; Delis & Wetter, 2007). Alternatively, it is possible that PVT failure could represent a goal-directed behavior with context-specific demonstrations of impairment and disability.

Knowledge of how PVTs relate to reported community participation could serve as a catalyst for developing interventions for different subsets of patients who perform poorly on these measures. Indeed, one of the stated goals of Slick, Sherman, and Iverson's (1999) proposed criteria for diagnosing malingered neurocognitive dysfunction was to encourage clinicians to feel confident in assessing response bias and clearly documenting evidence of suboptimal effort and symptom over-endorsement. Evidence of broader clinical application of PVTs could help practitioners to feel more comfortable using and reporting on these instruments, while also informing both prevention and treatment of response bias issues, as recommended by Slick and colleagues (1999). To this end, we designed a study that aims to evaluate the relation between PVT performance and reported community participation in a group of returning veterans with remote histories of mild traumatic brain injury (mTBI). It was specifically hypothesized a priori that PVT failure would be related to lower levels of reported community participation. Although it seems unlikely that PVT performance supersedes the relevance of measures specifically designed to measure community reintegration, knowledge of how PVTs relate to reported community functioning might speak to how individuals conduct themselves in everyday life, or perhaps, to how individuals would like to be perceived as functioning in the community.

Methods

Participants

Data were collected from 169 Operations Enduring Freedom, Iraqi Freedom, and New Dawn veterans consecutively presenting to four Traumatic Brain Injury Clinics in the northern, western, and southern United States. Although study participants were not followed longitudinally, the current investigation was prospectively designed and conducted in the same manner at all sites, including informed consent and study enrollment, semi-structured clinical interview, and selection and administration of tests and questionnaires. All data were collected in compliance with the regulations of each site's Committee for the Protection of Human Subjects and Institutional Review Board. All participants gave informed consent before participating in the study.

All participants included in this study were referred for evaluation through a nationwide VA TBI screening process. A referral for evaluation was automatically generated if the veteran endorsed having: (a) any potential head injury(ies) during deployment; (b) altered mental status at the time of the injury; (c) post-concussive symptoms at the time of the injury; and (d) current post-concussive symptoms. The screen has been found to have high internal consistency and positive predictive power, variable test–retest reliability, sensitivity, and specificity, and generally poor negative predictive power (Donnelly et al., 2011; Terrio, Nelson, Betthauser, Harwood, & Brenner, 2011; Van Dyke, Axelrod, & Schutte, 2010). Due to the design of the screener, it should be noted that only patients with both a possible history of TBI and current symptoms were referred for evaluation; patients with histories of TBI who were not currently reporting symptoms were not referred for evaluation, and therefore, were not included in this study.

Measures

Outcome measure

The measure of real world functioning that we selected for the current study was the Mayo-Portland Adaptability Inventory Participation Index (M2PI; Malec, 2005), a measure of community participation, which consists of eight items on a 5-point Likert scale ranging from 0 to 4, with higher scores indicating worse community participation. The M2PI is currently being used as an outcome measurement tool for rehabilitation services provided throughout Veterans Affairs polytrauma clinics. It assesses initiation of activities, social interactions, recreational activities, basic and instrumental activities of daily living, transportation assistance needed, and employment. It has been shown to have satisfactory internal consistency, inter-rater reliability, and concurrent validity, as well as minimal floor and ceiling effects (Malec, 2005), although relatively less is known about the utility of this measure in homogenous, mTBI populations. Participants were given instructions on completing the M2PI as a self-report form. If one item was unanswered on the M2PI, the total score was prorated based on the seven answered questions. If more than one item was unanswered, the participant was excluded from the study.
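For illustration, the proration rule described above amounts to rescaling the sum of the seven answered items to the full eight-item range. The following is a minimal sketch in Python; the function name and data layout are assumptions of this illustration, not part of the study protocol.

# Illustrative M2PI scoring sketch: eight items rated 0-4, higher = worse
# participation; one missing item is prorated, more than one excludes the case.
def score_m2pi(items):
    answered = [x for x in items if x is not None]
    missing = len(items) - len(answered)
    if missing == 0:
        return sum(answered)
    if missing == 1:
        return sum(answered) * 8 / 7  # prorate from the seven answered items
    return None  # participant would be excluded from the analysis

print(score_m2pi([2, 3, 1, 0, 4, 2, 3, 1]))     # 16
print(score_m2pi([2, 3, 1, None, 4, 2, 3, 1]))  # ~18.3 after proration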

Predictor measures

Injury history

A structured clinical interview template was developed to ensure all sites collected the same information in a similar manner. It assessed three separate time epochs: pre-deployment, during deployment, and post-deployment. The number of injuries, mechanism of each injury, and the presence and length of alteration of mental status [i.e., disorientation, post-traumatic amnesia, and loss of consciousness (LOC)] associated with the most serious and the most recent injury from each time epoch were assessed. For this study, a history of mTBI was operationalized as a period of self-reported LOC no longer than 30 min or disorientation no longer than 24 h following a credible injury mechanism (Centers for Disease Control and Prevention, 2003). Imaging was not available for review and was, therefore, not used for inclusion/exclusion decisions.
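As a concrete illustration, this operationalization can be written as a simple rule over the self-reported injury characteristics. The sketch below assumes hypothetical variable names and thresholds taken from the CDC (2003) definition cited above; in the study itself, the determination was made by clinicians during the structured interview.

# Sketch of the mTBI inclusion rule described above (CDC, 2003 thresholds);
# variable names are assumed, not taken from the study's data dictionary.
def meets_mtbi_criteria(loc_minutes, disorientation_hours, credible_mechanism):
    if not credible_mechanism:
        return False
    # Some alteration of mental status must have occurred...
    altered = (loc_minutes > 0) or (disorientation_hours > 0)
    # ...but within the mild range: LOC <= 30 min, disorientation <= 24 h.
    mild = loc_minutes <= 30 and disorientation_hours <= 24
    return altered and mild

print(meets_mtbi_criteria(5, 0.5, True))   # True  -> consistent with mTBI history
print(meets_mtbi_criteria(45, 2, True))    # False -> exceeds the mild range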

Cognition

The California Verbal Learning Test-II (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000), Paced Auditory Serial Addition Test (PASAT; Diehr et al., 2003), and Trail Making Test (TMT; Reitan, 1958) were used to assess delayed recall, sustained attention, and executive functioning, respectively. These measures are all commonly administered neuropsychological tests found to be sensitive to brain injury. Although the preponderance of evidence in the civilian literature suggests that cognitive effects dissipate by 3 months post-injury (Belanger, Curtiss, Demery, Lebowitz, & Vanderploeg, 2005; Carroll et al., 2004; Schretlen & Shapiro, 2003; Vanderploeg, Curtiss, & Belanger, 2005), neuropsychological tests of attention and working memory, processing speed, memory, and executive functions appear to be the most sensitive to mTBI (Frencham, Fox, & Maybery, 2005). These measures were administered to all study participants as part of a larger test battery.

Response bias

Performance validity, our main predictor variable of interest, was assessed using the immediate recall, delayed recall, and consistency scales from the computer version of the Word Memory Test (WMT; Green, 2005). Performance on the WMT was dichotomized based on the interpretive method recommended in the WMT manual. Scores falling at or below the recommended cutoff on the WMT are related to overall worse performance on neuropsychological tests, especially those of learning and memory (Gervais, Rohling, Green, & Ford, 2004; Gervais et al., 2001; Green & Flaro, 2003).
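For the analyses that follow, WMT performance was reduced to a pass/fail variable. A minimal sketch of one such dichotomization is given below, assuming the cutoff reflected in Table 3 (scores above 82.5% on all three primary scales counted as a pass); this is a simplification of the interpretive method described in the test manual, offered only as an illustration.

# Illustrative WMT pass/fail sketch: fail if any primary scale (IR, DR, CONS)
# falls at or below the cutoff reflected in Table 3. This simplifies the
# interpretive method described in the WMT manual.
WMT_CUTOFF = 82.5  # percent correct

def wmt_failed(ir, dr, cons, cutoff=WMT_CUTOFF):
    return min(ir, dr, cons) <= cutoff

print(wmt_failed(95.0, 92.5, 90.0))  # False -> pass
print(wmt_failed(95.0, 80.0, 90.0))  # True  -> fail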

In order to account for any effect that symptom exaggeration may have had on our self-report measure of community functioning (M2PI), we assessed symptom validity with the Fake Bad Scale (FBS), which was derived from the administration of the full Minnesota Multiphasic Personality Inventory-Second Edition (MMPI-2). The FBS is a 43-item scale designed to detect exaggeration in litigants and shown to be the MMPI-2 validity scale most sensitive to the exaggeration of symptoms in forensic neuropsychological evaluations (Greiffenstein, Baker, Axelrod, Peck, & Gervais, 2004; Larrabee, 2003; Ross, Millis, Krukowski, Putnam, & Adams, 2004). Although various cutoffs have been examined, the current study employed a relatively conservative cutoff score of 26, as this cutoff was found to result in a specificity of 0.95 across multiple studies (Greiffenstein, Fox, & Lees-Haley, 2007). The FBS was included in the restricted and full regression models predicting self-reported community reintegration on the M2PI.
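Like the WMT, the FBS entered the models as a dichotomy. A minimal sketch under the cutoff stated above follows; the function name is illustrative only.

# Illustrative SVT dichotomization: FBS raw scores at or above 26 were treated
# as probable symptom exaggeration (cutoff discussed in Greiffenstein et al., 2007).
FBS_CUTOFF = 26

def fbs_elevated(fbs_raw):
    return fbs_raw >= FBS_CUTOFF

print(fbs_elevated(23))  # False
print(fbs_elevated(27))  # True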

Results

Participants

Of the initial 169 veterans considered for inclusion in this study, 7 participants did not report any altered mental status, PTA, or LOC (i.e., indicating no TBI was sustained) and 16 participants reported altered mental status for over 24 h post-injury or LOC for over 30 min and therefore did not meet inclusion criteria. Of the remaining 146 eligible participants, 4 were excluded due to incomplete M2PIs, 4 were not administered the PASAT, 3 were not administered the WMT, 2 were missing education information, 1 was not administered the MMPI-2, and 1 was not administered the TMT, leaving 131 participants for the analysis of the effect of WMT performance on M2PI. The 38 participants excluded from the study did not differ from the participants included in the study in terms of age, education, race, SVT performance, PVT performance, executive functioning, sustained attention, delayed recall, or community participation (ps > .05).
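The exclusion counts above reconcile with the final analytic sample; a short arithmetic check, with all values taken from the text, is shown below.

# Participant flow: 169 consecutive referrals -> 131 analyzed.
screened = 169
no_tbi, too_severe = 7, 16                 # no altered mental status; beyond mild range
eligible = screened - no_tbi - too_severe  # 146
missing = 4 + 4 + 3 + 2 + 1 + 1            # M2PI, PASAT, WMT, education, MMPI-2, TMT
analyzed = eligible - missing              # 131
print(eligible, analyzed, screened - analyzed)  # 146 131 38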

In the final study sample, 58% (n = 76) of participants failed the PVT and 42% (n = 55) of participants passed the PVT. One-way analyses of variance (ANOVAs) were conducted to examine possible differences in age and education between participants falling above and below performance validity testing cutoffs. No differences were found (ps > .05). A chi-square test did not reveal a significant difference in the presence of LOC between participants falling above and below the PVT cutoff (p > .05). Demographics, military service, injury severity characteristics, and clinical characteristics of the final sample are presented in Table 1.
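A sketch of these group comparisons using common Python statistics libraries is shown below; the DataFrame column names (wmt_pass, age, education, loc_present) are hypothetical.

# Sketch of the comparisons above (hypothetical column names): one-way ANOVAs
# for age and education, and a chi-square test for presence of LOC, by WMT group.
import pandas as pd
from scipy import stats

def compare_groups(df):
    passed = df[df["wmt_pass"] == 1]
    failed = df[df["wmt_pass"] == 0]
    for var in ("age", "education"):
        f, p = stats.f_oneway(passed[var], failed[var])
        print(f"{var}: F = {f:.2f}, p = {p:.3f}")
    chi2, p, dof, _ = stats.chi2_contingency(
        pd.crosstab(df["wmt_pass"], df["loc_present"]), correction=False)
    print(f"LOC x WMT: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")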

Table 1.

Demographics, injury characteristics, and military characteristics of the study population

Variable | Total sample (N = 131) | Passed the WMT (n = 55) | Failed the WMT (n = 76) | F or χ² | p | η²
Age, M (SD) | 31.81 (7.2) | 30.76 (7.2) | 32.57 (7.1) | F = 2.01 | .158 | 0.015
Years of education, M (SD) | 13.05 (1.6) | 13.15 (1.5) | 12.99 (1.7) | F = 0.303 | .583 | 0.002
Males, n (%) | 124 (94.7) | 52 (94.5) | 72 (94.7) | | |
Ethnicity, n (%) | | | | χ² = 4.16 | .384 |
 African American | 15 (11.5) | 5 (9.1) | 10 (13.2) | | |
 Caucasian | 85 (64.9) | 37 (67.3) | 48 (63.2) | | |
 Hispanic | 24 (18.3) | 8 (14.5) | 16 (21.1) | | |
 Multiracial | 1 (0.8) | 1 (1.8) | 0 (0.0) | | |
 Other | 6 (4.6) | 4 (7.3) | 2 (2.6) | | |
Branch of service, n (%) | | | | χ² = 11.50 | .022 |
 Army | 87 (66.4) | 39 (70.9) | 48 (63.2) | | |
 Navy | 10 (7.6) | 1 (1.8) | 9 (11.8) | | |
 Air Force | 9 (6.9) | 4 (7.3) | 5 (6.6) | | |
 Marines | 18 (13.7) | 5 (9.1) | 13 (17.1) | | |
 National Guard | 7 (5.3) | 6 (10.9) | 1 (1.3) | | |
Marital status, n (%) | | | | χ² = 2.53 | .283 |
 Married or partnered | 60 (45.8) | 23 (41.8) | 37 (48.7) | | |
 Single, never married | 38 (29.0) | 20 (36.4) | 18 (23.7) | | |
 Divorced or separated | 33 (25.2) | 12 (21.8) | 21 (27.6) | | |
Positive LOC, n (%) | 78 (59.5) | 31 (56.4) | 47 (61.8) | χ² = 0.398 | .528 |

Notes. Age is in years; LOC = loss of consciousness; WMT = Word Memory Test.

Performance Validity, Symptom Validity, and Cognition

SVT performance differed significantly between patients passing and failing the PVT, χ²(1, N = 131) = 10.01, p = .001, with patients who failed the PVT more likely to score at or above the cutoff on the SVT, indicating probable symptom exaggeration. One-way ANOVAs, with age and education entered as covariates, were conducted to examine the relation between PVT results and cognitive performance. PVT failure was significantly related to worse delayed recall, F(1,127) = 41.89, p < .001, but not to executive functioning, F(1,127) = 3.14, p = .079, or to sustained attention, F(1,127) = 0.448, p = .505 (Table 2). A breakdown of scores on the MMPI-2 FBS and the WMT is presented in Table 3.
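A sketch of these analyses with pandas, scipy, and statsmodels is shown below; the column names (wmt_fail, fbs_elevated, and the three cognitive scores) are hypothetical, and the covariate-adjusted group test is fit as an OLS model with a Type II ANOVA table.

# Sketch of the analyses above (hypothetical column names): chi-square for SVT
# status by PVT status, then covariate-adjusted group tests for each cognitive
# score with age and education in the model.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def pvt_analyses(df):
    chi2, p, dof, _ = stats.chi2_contingency(
        pd.crosstab(df["wmt_fail"], df["fbs_elevated"]), correction=False)
    print(f"SVT x PVT: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
    for score in ("trails_b_time", "pasat_total", "cvlt_delayed_recall"):
        model = smf.ols(f"{score} ~ C(wmt_fail) + age + education", data=df).fit()
        anova = sm.stats.anova_lm(model, typ=2)
        print(score, anova.loc["C(wmt_fail)", ["F", "PR(>F)"]].to_dict())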

Table 2.

Means, standard deviations, and one-way ANOVA for PVT performance on neuropsychological test performance (after controlling for age and education), symptom validity, and community participation

Measure | Total sample, M (SD) | Passed the WMT, M (SD) | Failed the WMT, M (SD) | F | p | η²
Trails B time, s | 73.45 (42.91) | 66.74 (42.62) | 80.16 (42.54) | 3.14 | .079 | 0.024
PASAT Trials 1 and 2 total score | 64.66 (17.37) | 65.69 (17.28) | 63.63 (17.26) | 0.448 | .505 | 0.004
CVLT 20-min Free Recall | 10.59 (3.30) | 12.48 (3.27) | 8.71 (3.27) | 41.89 | <.001 | 0.248
MMPI-2 FBS raw score | 23.16 (5.87) | 21.18 (5.51) | 24.60 (5.74) | 10.03 | .002 | 0.073
M2PI | 11.59 (6.43) | 9.48 (6.40) | 13.11 (6.05) | 9.49 | .003 | 0.069

Notes. PASAT = Paced Auditory Serial Addition Test; CVLT = California Verbal Learning Test-II; M2PI = Mayo-Portland Adaptability Inventory Participation Index; MMPI-2 = Minnesota Multiphasic Personality Inventory-Second Edition; WMT = Word Memory Test.

Table 3.

Breakdown of scores on the SVT and PVT

Score range | Total sample (n = 131) | Passed the WMT (n = 55) | Failed the WMT (n = 76)
MMPI-2 FBS raw score, n (%)
 <10 | 2 (1.5) | 1 (1.8) | 1 (1.3)
 11–15 | 9 (6.9) | 5 (9.1) | 4 (5.3)
 16–20 | 32 (24.4) | 22 (40.0) | 10 (13.2)
 21–25 | 44 (33.6) | 17 (30.9) | 27 (35.5)
 26–30 | 27 (20.6) | 6 (10.9) | 21 (27.6)
 31–35 | 16 (12.2) | 4 (7.3) | 12 (15.8)
 36–40 | 1 (0.8) | 0 (0.0) | 1 (1.3)
WMT IR, n (%)
 >82.5 | 68 (51.9) | 55 (100.0) | 13 (17.1)
 70–82.5 | 34 (26.0) | 0 (0.0) | 34 (44.7)
 50–69 | 24 (18.3) | 0 (0.0) | 24 (31.6)
 <50 | 5 (3.8) | 0 (0.0) | 5 (6.6)
WMT DR, n (%)
 >82.5 | 66 (50.4) | 55 (100.0) | 11 (14.5)
 70–82.5 | 38 (29.0) | 0 (0.0) | 38 (50.0)
 50–69 | 17 (13.0) | 0 (0.0) | 17 (22.4)
 <50 | 10 (7.6) | 0 (0.0) | 10 (13.2)
WMT CONS, n (%)
 >82.5 | 57 (43.5) | 55 (100.0) | 2 (2.6)
 70–82.5 | 40 (30.5) | 0 (0.0) | 40 (52.6)
 50–69 | 31 (23.7) | 0 (0.0) | 31 (40.8)
 <50 | 3 (2.3) | 0 (0.0) | 3 (3.9)

Notes. SVT = symptom validity test; PVT = performance validity test; MMPI-2 FBS = Minnesota Multiphasic Personality Inventory-2 Fake Bad Scale; WMT = Word Memory Test; IR = Immediate Recognition Trial; DR = Delayed Recognition Trial; CONS = Consistency.

Community Participation by PVT Performance

A hierarchical multiple linear regression was performed to examine the impact of PVT performance on self-reported general community participation. The restricted model included years of education, age, history of loss of consciousness, sustained attention, executive functioning, delayed recall, and SVT results as predictors of general community participation. The full model included all of the above predictors and added PVT performance as a predictor.

The restricted model indicated that the variance accounted for by education, age, history of loss of consciousness, sustained attention, executive functioning, delayed recall, and SVT performance was not significant, R² = .105, F(7,123) = 2.07, p = .052. The full model, which added PVT results, was significant, R² = .149, F(8,122) = 2.66, p = .010, and represented a significant improvement in the prediction of community reintegration relative to the restricted model (R² increased from .105 to .149). In this model, only SVT results (β = 0.202, p = .031) and PVT results (β = 0.248, p = .014) were significantly related to general community participation, with failure of either measure associated with worse (i.e., more restricted) self-reported community participation. For complete results, see Table 4.
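A sketch of this hierarchical (nested) model comparison with statsmodels is shown below; the formula terms are illustrative column names, and the added PVT term is evaluated with the standard incremental F test on the change in R².

# Sketch of the hierarchical regression above (illustrative column names):
# the restricted model omits PVT status, the full model adds it, and the two
# nested models are compared with an incremental F test.
import statsmodels.formula.api as smf

BASE = ("m2pi ~ education + age + loc_present + pasat_total"
        " + trails_b_time + cvlt_delayed_recall + fbs_elevated")

def hierarchical_regression(df):
    restricted = smf.ols(BASE, data=df).fit()
    full = smf.ols(BASE + " + wmt_fail", data=df).fit()
    f_change, p_change, df_diff = full.compare_f_test(restricted)
    print(f"Restricted R2 = {restricted.rsquared:.3f}, full R2 = {full.rsquared:.3f}")
    print(f"R2 change = {full.rsquared - restricted.rsquared:.3f}, "
          f"F({int(df_diff)}, {int(full.df_resid)}) = {f_change:.2f}, p = {p_change:.3f}")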

Table 4.

Hierarchical regression analysis assessing the relationship between self-reported community participation and age, education, loss of consciousness, executive functioning, sustained attention, delayed recall, SVT results, and PVT results

Step and predictor variable | B | SE B | β | t | p
Step 1
 Age | 0.07 | 0.08 | 0.08 | 0.79 | .429
 Education | −0.20 | 0.37 | −0.05 | −0.54 | .590
 Presence/absence of LOC | −0.20 | 1.12 | −0.02 | −0.18 | .857
 Delayed Recall | −0.02 | 0.15 | −0.01 | −0.14 | .887
 Executive Functioning | 0.02 | 0.01 | 0.15 | 1.63 | .106
 Sustained Attention | 0.03 | 0.04 | 0.07 | 0.72 | .475
 Pass/Fail SVT | 3.31 | 1.26 | 0.24 | 2.63 | .010
Step 2
 Age | 0.07 | 0.08 | 0.08 | 0.86 | .390
 Education | −0.25 | 0.36 | −0.06 | −0.68 | .498
 Presence/absence of LOC | −0.33 | 1.10 | −0.03 | −0.30 | .766
 Delayed Recall | 0.17 | 0.17 | 0.11 | 1.03 | .304
 Executive Functioning | 0.02 | 0.01 | 0.14 | 1.53 | .129
 Sustained Attention | 0.03 | 0.03 | 0.07 | 0.71 | .478
 Pass/Fail SVT | 2.74 | 1.25 | 0.20 | 2.19 | .031
 Pass/Fail PVT | 3.22 | 1.29 | 0.25 | 2.49 | .014

Notes. Age and education are in years; LOC = loss of consciousness; SVT = symptom validity test; PVT = performance validity test.

Discussion

This study investigated differences in self-reported community participation in veterans with histories of mTBI with respect to their performance on a measure of performance validity. Veterans who passed the PVT did not differ from veterans who failed it in terms of age, education, or presence of loss of consciousness at the time of injury. Veterans who failed the PVT were significantly more likely to score at or above the cutoff on the SVT, indicating symptom exaggeration. PVT and SVT failure were the only significant correlates of reported community participation in this study, as measured by the M2PI, whereas age, education, presence of loss of consciousness at the time of injury, and current neuropsychological functioning all failed to relate to reported community participation.

Although a fair number of studies have investigated how psychological and cognitive factors affect impairment and participation following mTBI, to date, the authors of this study are not aware of any prior studies that have investigated the relation between PVT performance and community participation. The field of neuropsychology has not yet asked whether PVTs can also be useful above and beyond determining the validity of neuropsychological test results. The current study found a clear relation between PVT performance and reported community functioning, as veterans who performed below expectation on a performance validity measure reported a higher degree of participation restriction. The possibility that PVTs may serve another important role as part of the larger neuropsychological evaluation is appealing. Rather than simply calling into question the validity of the neuropsychological test results, failure of PVTs may lend some insight into the patient's ability to cope with the stressors of everyday life and may even contribute to decisions regarding treatment planning. Alternatively, PVT failure may be associated with exaggeration of disability and subsequent motivation to avoid detection of this exaggeration, which may lead to a volitional reduction in community participation, or perhaps, simply an increased likelihood of exaggerating difficulties on self-report measures of community participation that is not solely accounted for by SVT performance.

A myriad of personal and environmental factors may account for the relation between PVT performance and reported participation in the community. The availability of financial compensation has been shown to play a significant role in probable symptom exaggeration (Binder & Rohling, 1996; Cook, 1972; Miller, 1961; Paniak et al., 2002; Reynolds, Paniak, Toller-Lobe, & Nagy, 2003) and could reasonably create a disincentive to participate in the community, especially regarding the pursuit of competitive employment. Of course, along with financial incentive, there are likely many other factors that affect both PVT results and community participation. Although the authors of this study are not aware of empirical studies that have examined the relationship between comorbidities and environmental factors and PVT failure, it is reasonable to postulate that in the veteran population, physical issues (e.g., chronic pain), psychological comorbidities (e.g., PTSD, substance misuse, depression), and environmental factors (e.g., marital distress, high rates of unemployment, limited social support) may play complex roles both in performance validity test results and in community participation in the years following an mTBI. It is important that future research consider all contributing factors when predicting outcome (Shames, Treger, Ring, & Giaquinto, 2007), as multiple stressors may combine with each other and with premorbid psychological and medical factors to result in increased disability (Evered, Ruff, Baldo, & Isomura, 2003). It is our hope that further research will lead to useful information about how those who fail PVTs differ from those who pass with regard to psychological, behavioral, and environmental characteristics. Following, or perhaps in conjunction with, an accurate description of these populations, interventions specific to those who demonstrate poor performance on PVTs can be developed. These interventions will likely involve an extensive assessment of each patient's perceived barriers to increased community participation that may otherwise be difficult to objectively quantify.

It is important to consider that the observational design of the study precludes any statements indicating causality between variables. The current study relied in part on self-reported injury characteristics to diagnose a history of mTBI, as well as self-report of current community participation. Problems with self-report include differences in willingness to admit problems, symptom exaggeration, the fallibility of memory (Loftus, Levidow, & Duensing, 2002), and overestimation of preinjury functioning (Gunstad & Suhr, 2001). Van Dyke and colleagues (2010) have specifically called into question the reliability of self-reported injury characteristics and symptom report in the population of returning veterans. Although it is possible that participants who failed the PVT were more likely to exaggerate their community participation difficulties, we attempted to address this issue by including a measure of symptom exaggeration as a covariate in the model. The FBS was used as a proxy for potential symptom exaggeration on the M2PI to better control for the possibility of exaggerated community participation difficulties. It is possible, however, that not all variance attributable to symptom exaggeration was accounted for in the M2PI self-reported ratings. Indeed, some may argue that MMPI-2 questions have less face validity than the M2PI, and many patients who do not exaggerate on the MMPI-2 may nonetheless exaggerate on relatively transparent measures, such as the M2PI. To address this limitation, future studies should employ alternative indicators of participation, such as clinician or collateral ratings. Finally, before the results of this study can be generalized to non-mTBI populations, it will be important to replicate these findings in other clinical populations.

The results of this study indicate that PVT performance is related to self-reported community participation in returning veterans, whereas injury severity, age, education, and cognitive abilities were unrelated to the level of participation. This study highlights the importance of assessing performance validity in clinical research involving patients with histories of mTBI, as simply considering symptom exaggeration does not appear to be sufficient. This study also highlights the possibility of clinical interventions for those with questionable performance validity.

Funding

No pharmaceutical or corporate funding was used in the preparation of this manuscript or collection of data. This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development; B6812C (PI- Levin HS) VA RR&D Traumatic Brain Injury Center of Excellence, Neurorehabilitation: Neurons to Networks, as well as the Minnesota Veterans Medical Research & Education Foundation.

References

Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59.
Armistead-Jehle, P., & Buican, B. (2012). Evaluation context and Symptom Validity Test performances in a U.S. military sample. Archives of Clinical Neuropsychology, 27, 828–839.
Belanger, H. G., Curtiss, G., Demery, J. A., Lebowitz, B. K., & Vanderploeg, R. D. (2005). Factors moderating neuropsychological outcomes following mild traumatic brain injury: A meta-analysis. Journal of the International Neuropsychological Society, 11, 215–227.
Binder, L. M., & Rohling, M. L. (1996). Money matters: A meta-analytic review of the effects of financial incentives on recovery after closed-head injury. American Journal of Psychiatry, 153, 7–10.
Boone, K. B. (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Carone, D. A., Iverson, G. L., & Bush, S. S. (2010). A model for approaching and providing feedback to patients regarding invalid test performance in clinical neuropsychological evaluations. The Clinical Neuropsychologist, 24, 759–778.
Carroll, L. J., Cassidy, J. D., Peloso, P. M., Borg, J., von Holst, H., Holm, L., et al. (2004). Prognosis for mild traumatic brain injury: Results of the WHO Collaborating Centre Task Force on Mild Traumatic Brain Injury. Journal of Rehabilitation Medicine, (43 Suppl.), 84–105.
Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. (2003). Report to Congress: Mild traumatic brain injury in the United States: Steps to prevent a serious public health problem. Atlanta, GA: Author.
Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13, 181–197.
Cook, J. B. (1972). The post-concussional syndrome and factors influencing recovery after minor head injury admitted to hospital. Scandinavian Journal of Rehabilitation Medicine, 4, 27–30.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test (2nd ed.). San Antonio: Psychological Corporation.
Delis, D. C., & Wetter, S. R. (2007). Cogniform disorder and cogniform condition: Proposed diagnoses for excessive cognitive symptoms. Archives of Clinical Neuropsychology, 22, 589–604.
Diehr, M. C., Cherner, M., Wolfson, T. J., Miller, S. W., Grant, I., Heaton, R. K., & the HIV Neurobehavioral Research Center Group. (2003). The 50 and 100-item short forms of the Paced Auditory Serial Addition Task (PASAT): Demographically corrected norms and comparisons with the full PASAT in normal and clinical samples. Journal of Clinical and Experimental Neuropsychology, 25, 571–585.
Donnelly, K. T., Donnelly, J. P., Dunnam, M., Warner, G. C., Kittleson, C. J., Constance, J. E., et al. (2011). Reliability, sensitivity, and specificity of the VA traumatic brain injury screening tool. Journal of Head Trauma Rehabilitation, 26, 439–453.
Evered, L., Ruff, R., Baldo, J., & Isomura, A. (2003). Emotional risk factors and postconcussional disorder. Assessment, 10, 420–427.
Frencham, K. A., Fox, A. M., & Maybery, M. T. (2005). Neuropsychological studies of mild traumatic brain injury: A meta-analytic review of research since 1995. Journal of Clinical and Experimental Neuropsychology, 27, 334–351.
Gervais, R. O., Rohling, M. L., Green, P., & Ford, W. (2004). A comparison of WMT, CARB, and TOMM failure rates in non-head injury disability claimants. Archives of Clinical Neuropsychology, 19, 475–487.
Gervais, R. O., Russell, A. S., Green, P., Allen, L. M., III, Ferrari, R., & Pieschl, S. D. (2001). Effort testing in patients with fibromyalgia and disability incentives. Journal of Rheumatology, 28, 1892–1899.
Green, P. (2005). Green's Word Memory Test for Microsoft Windows: User's manual. Seattle: Green's Publishing.
Green, P., & Flaro, L. (2003). Word Memory Test performance in children. Child Neuropsychology, 9, 189–207.
Greiffenstein, M. F., Baker, W. J., Axelrod, B., Peck, E. A., & Gervais, R. (2004). The Fake Bad Scale and MMPI-2 F-family in detection of implausible psychological trauma claims. The Clinical Neuropsychologist, 18, 573–590.
Greiffenstein, M. F., Fox, D., & Lees-Haley, P. R. (2007). The MMPI-2 Fake Bad Scale in detection of noncredible brain injury claims. In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective (pp. 210–238). New York: Guilford Press.
Gunstad, J., & Suhr, J. A. (2001). "Expectation as etiology" versus "the good old days": Postconcussion syndrome symptom reporting in athletes, headache sufferers, and depressed individuals. Journal of the International Neuropsychological Society, 7, 323–333.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Iverson, G. L. (2005). Outcome from mild traumatic brain injury. Current Opinion in Psychiatry, 18, 301–317.
Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. M. (2012). Influence of poor effort on neuropsychological test performance in U.S. military personnel following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466.
Larrabee, G. J. (2003). Detection of symptom exaggeration with the MMPI-2 in litigants with malingered neurocognitive dysfunction. The Clinical Neuropsychologist, 17, 54–68.
Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 625–630.
Loftus, E. F., Levidow, B., & Duensing, S. (2002). Who remembers best? Individual differences in memory for events that occurred in a science museum. Applied Cognitive Psychology, 6, 93–97.
Malec, J. (2005). The Mayo-Portland Adaptability Inventory. The Center for Outcome Measurement in Brain Injury.
Miller, H. (1961). Accident neurosis. British Medical Journal, 1, 992–998.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Paniak, C., Reynolds, S., Toller-Lobe, G., Melnyk, A., Nagy, J., & Schmidt, D. (2002). A longitudinal study of the relationship between financial compensation and symptoms after treated mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 24, 187–193.
Poyner, G. (2010). Psychological evaluations of veterans claiming PTSD disability with the Department of Veterans Affairs: A clinician's viewpoint. Psychological Injury and Law, 3, 130–132.
Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20, 33–65.
Reitan, R. M. (1958). Validity of the Trail Making Test as an indicator of organic brain damage. Perceptual and Motor Skills, 8, 271–276.
Reynolds, S., Paniak, C., Toller-Lobe, G., & Nagy, J. (2003). A longitudinal study of compensation-seeking and return to work in a treated mild traumatic brain injury sample. Journal of Head Trauma Rehabilitation, 18, 139–147.
Ross, S. R., Millis, S. R., Krukowski, R. A., Putnam, S. H., & Adams, K. M. (2004). Detecting incomplete effort on the MMPI-2: An examination of the Fake-Bad Scale in mild head injury. Journal of Clinical and Experimental Neuropsychology, 26, 115–124.
Schretlen, D. J., & Shapiro, A. M. (2003). A quantitative review of the effects of traumatic brain injury on cognitive functioning. International Review of Psychiatry, 15, 341–349.
Shames, J., Treger, I., Ring, H., & Giaquinto, S. (2007). Return to work following traumatic brain injury: Trends and challenges. Disability and Rehabilitation, 29, 1387–1395.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.
Spooner, D. M., & Pachana, N. A. (2006). Ecological validity in neuropsychological assessment: A case for greater consideration in research with neurologically intact populations. Archives of Clinical Neuropsychology, 21, 327–337.
Terrio, H. P., Nelson, L. A., Betthauser, L. M., Harwood, J. E., & Brenner, L. A. (2011). Postdeployment traumatic brain injury screening questions: Sensitivity, specificity, and predictive values in returning soldiers. Rehabilitation Psychology, 56, 26–31.
Vanderploeg, R. D., Curtiss, G., & Belanger, H. G. (2005). Long-term neuropsychological outcomes following mild traumatic brain injury. Journal of the International Neuropsychological Society, 11, 228–236.
Van Dyke, S. A., Axelrod, B. N., & Schutte, C. (2010). Test-retest reliability of the Traumatic Brain Injury Screening Instrument. Military Medicine, 175, 947–949.
Vickery, C. D., Berry, D. T., Inman, T. H., Harris, M. J., & Orey, S. A. (2001). Detection of inadequate effort on neuropsychological testing: A meta-analytic review of selected procedures. Archives of Clinical Neuropsychology, 16, 45–73.