Abstract

Objective

This study tested whether patients who were given a handout based on deterrence theory, immediately prior to evaluation, would provide invalid data less frequently than patients who were simply given an informational handout.

Method

All outpatients seen for clinical evaluation in a VA Neuropsychology Clinic were randomly given one of two handouts immediately prior to evaluation. The “Intervention” handout emphasized the importance of trying one's hardest, explicitly listed the consequences of valid and invalid responding, and asked patients to sign and initial it. The “Control” handout provided general information about neuropsychological evaluation. Examiners were blinded to condition. Patients were excluded from analyses if they were diagnosed with major neurocognitive disorder or could not read the handout. The Medical Symptom Validity Test (MSVT) was used to determine performance validity.

Results

Groups did not differ on age, education, or litigation status. For the entire sample (N = 251), there was no effect of handout on passing versus failing the MSVT. However, among patients who were seeking disability benefits at the time of evaluation (n = 70), the Intervention handout was associated with a lower frequency of failing the MSVT than the Control handout.

Conclusions

This brief, theory-based, cost-free intervention was associated with a lower frequency of invalid data among patients seeking disability benefits at the time of clinical evaluation. We suggest methodological modifications that might produce a more potent intervention that could be effective with additional subsets of patients.

Introduction

The importance of assessing validity in neuropsychological examination is now well recognized (Bush et al., 2005). As methods of assessing validity have been developed, it has become apparent that many patients do not provide credible data on clinical examination. Across settings, between 8% and 39% of patients might not perform to the best of their ability (Mittenberg, Patton, Canyock, & Condit, 2002; Young, Roper, & Arentsen, 2016). This is observed even in clinical, non-forensic settings in which clear reasons for noncredible performance, such as “secondary gain,” might not be obvious. Invalid findings can occur when patients are not fully engaged in the evaluation process, for reasons that might include intentional exaggeration or production of deficit, fears that the examination will be insufficiently sensitive to deficits, distrust of the examiner or healthcare system, or other motivational factors (Carone, Iverson, & Bush, 2010; Iverson, 2006).

Invalid examination findings can result in negative outcomes for patients, healthcare systems, and the field of neuropsychology as a whole. Patients who provide invalid data might not receive accurate diagnoses or necessary services, as their needs cannot be validly determined. Alternatively, they might receive unnecessary services. Some patients can become frustrated that a lengthy examination has not helped their care. Annually, tens of thousands of hours of clinicians’ and patients’ time are spent on neuropsychological evaluations that fail to provide valid cognitive data, with associated economic costs (Chafetz & Underhill, 2013; Horner, VanKirk, Dismuke, Turner, & Muzzy, 2014). Furthermore, if evaluations frequently produce invalid data that cannot sufficiently address the referral question, there is a risk that neuropsychological evaluation itself might be perceived as less useful by referral sources. That is, physicians and others might be less likely to refer patients if such evaluations are not perceived as consistently helpful in clinical care.

A minority of practicing neuropsychologists inform patients that performance validity tests (PVTs) will be administered (Martin, Schroeder, & Odland, 2015; Sharland & Gfeller, 2007). Relatively few studies have addressed whether such interventions can decrease the occurrence of invalid data. Nearly all published studies on the effectiveness of providing such information have examined healthy individuals instructed to feign impairment. Johnson and Lesniak-Karpiak (1997) found that such “informed” simulators performed better on cognitive tests than simulators who had not been informed, and similarly to controls. However, in other studies (Erdal, 2004; Suhr & Gunstad, 2000), informing participants about the inclusion of PVTs has led simulators to feign less obviously but still to perform more poorly than controls, and one nonspecific intervention enabled simulators to avoid detection (Gunstad & Suhr, 2001). Similarly, “coaching” patients or simulated malingerers about specific PVTs has been shown to raise scores on those measures (Gervais, Green, Allen, & Iverson, 2001). Other studies have found no differences between “informed” and “uninformed” simulators on cognitive tests (Gorny & Merten, 2007; Johnson, Bellah, Dodge, Kelley, & Livingston, 1998; Sullivan, Keane, & Deffenti, 2001; Wong, Lerner-Poppen, & Durham, 1998) or PVTs (Gunstad & Suhr, 2004).

Overall, these studies have differed widely in the potency, specificity, and content of the information provided about PVTs (King & Sullivan, 2009; Schenk & Sullivan, 2010; Youngjohn, Lees-Haley, & Binder, 1999). Typically, experimental simulators have been told that the test battery includes validity measures or that obvious feigning will otherwise be detectable. These interventions have been based on the premise that knowledge of the presence of validity indicators will influence test behavior. Few studies have adopted a theory-based approach to decreasing the occurrence of invalid data; to our knowledge, none have provided such information about the inclusion of PVTs to actual clinical patients rather than simulators.

Deterrence theory is based on a rational choice model of behavioral economics initially proposed by Becker (1968). As applied to psychology, it posits that verbal attempts to decrease a specific behavior are most effective if they encourage a cost–benefit analysis of the consequences of the behavior and an assessment of the likelihood of those consequences (Sullivan & Richer, 2002). The theory assumes that behaviors are based on rational choice, taking into account risks and benefits of a given behavior, and that these risks and benefits can be modified by varying the consequences of the behavior and the likelihood that those consequences will occur (Becker, 1974; Grasmick & Bryjak, 1980; Ward, Stafford, & Gray, 2006).
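
This cost–benefit logic can be made concrete with a simple expected-value inequality. The formalization below is our illustrative sketch, not an equation taken from Becker (1968) or the deterrence literature, and the symbols are ours. On a rational choice account, a patient underperforms only if

$$(1 - q)\,B \;-\; q\,C \;>\; V,$$

where $B$ is the perceived benefit of noncredible responding if it goes undetected (e.g., an approved disability claim), $C$ is the perceived cost if it is detected, $q$ is the perceived probability of detection, and $V$ is the expected value of responding validly. Framed this way, a deterrence-based intervention works by raising $q$ (announcing that validity will be tested) and raising $C$ (listing negative consequences), so that the inequality no longer holds for patients weighing these outcomes rationally.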

This study examined whether a simple, cost-free intervention, based on deterrence theory, can decrease the occurrence of invalid data on neuropsychological evaluation. We hypothesized that patients who had been shown a handout detailing the costs and benefits of providing invalid versus valid data would fail a PVT less frequently than patients who had been shown a neutral handout. Given the complexity of patients’ motivations and test-taking behaviors, we presumed that the handout might have a relatively modest effect. In particular, we hypothesized that the intervention might be effective among patients who were not seeking disability benefits because invalid data in such patients could reflect disengagement, apathy, or similar motivational factors that might be more easily modifiable. In contrast, we hypothesized that some patients seeking disability benefits might have already decided not to try their hardest and thus might be less likely to alter their behavior simply on the basis of a handout.

Materials and Methods

Participants

Participants were 251 adults who had been referred for clinical evaluation to a VA Medical Center outpatient Neuropsychology Clinic. All consecutive referrals who were eligible for the study and for whom data were available were included. Referrals came from throughout the Medical Center (primarily Neurology, Mental Health, and Primary Care) with a broad range of referral questions and presenting complaints. No evaluations were done for forensic purposes or for any reason other than clinical care. Patients whose clinical evaluation led to a diagnosis of dementia or major neurocognitive disorder were excluded from the study due to possible ambiguities if such patients were to score below recommended cutoffs on performance validity testing. Patients for whom administration of the Medical Symptom Validity Test (MSVT; Green, 2005) was deemed clinically inappropriate (e.g., patients with poor vision who were unable to see the computer screen clearly) were also excluded from the study.

Demographic and other characteristics of the sample are shown in Table 1. There were 182 patients who were receiving VA service-connected disability benefits at the time of evaluation (“Service connection” refers to disability compensation for diseases or injuries that were incurred or aggravated during active military service). Seventy patients were in the process of applying for disability benefits, or an increase in disability benefits, as determined by self-report. Even for these patients, the evaluation was performed only for clinical purposes and had no direct relationship to the disability claim; the conditions for which patients were seeking benefits were not necessarily neurocognitive or other psychiatric disorders. Information about disability-seeking status was unavailable for 8 patients. A small number of patients in each group reported that they were in litigation at the time of evaluation, again unrelated to the clinical referral. Psychiatric diagnoses assigned at the time of clinical evaluation are shown in Table 2.

Table 1.

Demographic characteristics of the sample

                                                               Intervention (n = 127)   Control (n = 124)
Age                                                            50.1 (16.0)              47.2 (14.9)
Education (years)                                              13.3 (2.1)               13.4 (2.2)
Percent service connection                                     49.3 (37.2)              54.9 (37.3)
Sex
  Male                                                         105                      100
  Female                                                       19                       22
Race
  White                                                        89                       88
  African American                                             31                       31
  Other
History of traumatic brain injury with loss of consciousness   43                       33
Service-connected disability                                   89                       93
Service-connected for psychiatric or brain disorder            61                       56
Seeking disability benefits (or increase in benefits)          36                       34
In litigation

Note: Age, education, and percent service connection are reported as mean (standard deviation); the remaining variables are reported as number of participants. “Percent service connection” refers to the mean disability rating (0–100%) of those patients who were receiving VA disability benefits in each group.

Table 2.

Psychiatric diagnoses assigned to participants at time of clinical evaluation, stratified by group and disability-seeking status

                                                         Intervention                              Control
Diagnosis                                                Not seeking (n = 85)   Seeking (n = 36)   Not seeking (n = 88)   Seeking (n = 34)
PTSD 26 12 28 
Mild neurocognitive disorder 20 22 
Major depressive disorder 18 19 
No diagnosis 10 11 
ADHD 11 
Other and unspecified anxiety disorders 
Other and unspecified depressive disorders 
Generalized anxiety disorder 
Alcohol use disorders 
Bipolar disorder 
Other substance use disorders 
Persistent depressive disorder (dysthymia) 
Specific learning disorder 
Other 
Schizoaffective disorder, schizophrenia 
Somatic symptom disorder, conversion disorder 
Panic disorder 
Social anxiety disorder 
Borderline personality disorder 
Factitious disorder, malingering 
Adjustment disorder 
Unspecified personality disorder 
Unspecified schizophrenia spectrum and other psychotic disorder 

Notes: Some patients were given more than one diagnosis. PTSD = posttraumatic stress disorder; ADHD = attention deficit/hyperactivity disorder.

Procedure

Upon checking in for his or her appointment for neuropsychological evaluation, each patient was given an envelope containing, at random, one of the two handouts. The “Intervention” handout, which was based on deterrence theory, explicitly listed the positive consequences of trying one's hardest on the cognitive tests and the negative consequences of not trying one's hardest. For example, it stated that if patients try their hardest on the tests, “it helps us identify the problems you are having, and how severe they are.” The handout stated that, if patients do not try their hardest, “we might not be able to suggest treatments, so you might not get all the care you need.” The handout indicated that the evaluation would include tasks that would show whether patients were trying their best. Patients were asked to sign the form to indicate that they had read it and agreed with it. They were also asked to initial several statements indicating that they would try their best and provide honest answers, and that they understood some of the consequences of providing valid versus invalid data. The text of the handout is shown in Fig. 1.

Fig. 1.

“Intervention” handout.

The “Control” handout simply contained general information about neuropsychological evaluation. For example, it stated, “We might ask you to do some tasks of memory and thinking … These tasks can help us to better understand how your memory and thinking are working.” This handout is shown in Fig. 2. Control participants were not asked to sign the handout, as the act of signing was considered a component of the intervention itself. That is, the intervention consisted of both informing patients of the consequences of providing valid or invalid data and obtaining their written commitment that they had read the handout and agreed with its content. The Intervention and Control handouts were matched for length and for required reading level.

Fig. 2.

“Control” handout.

In the waiting room, prior to evaluation, the patient was asked to remove the handout from the envelope, read it, sign and initial it if requested, and replace it in the envelope. This procedure allowed the examiner to remain blinded to which form of the handout the patient had been given. The patient was then seen for clinical neuropsychological evaluation according to standard clinical practices, with the test battery varying according to the specifics of the case and the referral question. After the clinical interview and immediately prior to formal testing, all patients were read a standardized short paragraph introducing the tests, to ensure that all patients received equal instructions about effort and test difficulty aside from the information contained in the handouts. The paragraph simply stated that the evaluation would consist of different types of tests, some of which would be easy and some difficult, and that the patient had to try his or her hardest.

The MSVT (Green, 2005) served as the outcome measure of whether each patient provided valid data. This measure was chosen because of its sensitivity to invalid responding, its use in a substantial body of preexisting research, its relatively brief administration time (e.g., as compared to the Word Memory Test; Green, 2003), and its frequent use in clinical practice (e.g., Sollman & Berry, 2011). The MSVT was administered and scored in the standard manner described in its manual, and patients were classified as having passed or failed using the standard cutoff scores and criteria in the test manual. The rest of the neuropsychological test battery administered to each patient was determined according to the specific referral question and each patient's particular clinical needs.

Data Analysis

Potential differences on demographic and clinical variables between the Intervention and Control groups were examined using t-tests for age, education, and VA disability rating (“percent service connection”), and chi-square analyses for sex, race, number of patients receiving or seeking disability benefits, litigation status, and history of traumatic brain injury (TBI) with loss of consciousness. For analyses in which the two groups were subdivided according to disability-seeking status (i.e., whether or not a patient reported being in the process of applying for disability benefits, or an increase in benefits, at the time of evaluation), groups were compared using two-way analysis of variance for age, education, and percent service connection, with group and disability-seeking status as between-subjects factors and each demographic variable as the dependent variable in separate analyses. Differences on other demographic variables among the four subgroups were again examined using chi-square analyses.
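
As an illustration, comparisons of this kind could be run as follows in Python with scipy and statsmodels. This is a minimal sketch under assumed data, not the authors' analysis code: the data file and the column names (group, age, education, pct_service_connection, sex, seeking_disability) are hypothetical.

```python
# Sketch of the demographic comparisons described above (hypothetical data).
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("patients.csv")  # hypothetical file with one row per patient

# t-tests for continuous demographics (Intervention vs. Control)
for var in ["age", "education", "pct_service_connection"]:
    a = df.loc[df["group"] == "Intervention", var].dropna()
    b = df.loc[df["group"] == "Control", var].dropna()
    t, p = stats.ttest_ind(a, b)
    print(f"{var}: t = {t:.2f}, p = {p:.3f}")

# chi-square test for a categorical variable, e.g., sex by group
ct = pd.crosstab(df["group"], df["sex"])
chi2, p, dof, _ = stats.chi2_contingency(ct)
print(f"sex: chi-square = {chi2:.2f}, p = {p:.3f}")

# two-way ANOVA (group x disability-seeking status), one continuous DV at a time
model = ols("age ~ C(group) * C(seeking_disability)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```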

Associations between group and MSVT performance were examined using SPSS Generalized Linear Models (binary logistic), with group (Intervention vs. Control) as the independent variable and MSVT performance (pass vs. fail) as the dependent variable. Disability-seeking status, defined as above, was included as an additional independent variable.
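
A comparable model can be sketched in Python with statsmodels in place of SPSS Generalized Linear Models; this again uses the hypothetical data and column names from the previous sketch, with msvt_fail (1 = fail, 0 = pass) as an assumed outcome column.

```python
# Sketch of the binary logistic model described above (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("patients.csv")  # hypothetical file, as in the earlier sketch

logit = smf.logit("msvt_fail ~ C(group) * C(seeking_disability)", data=df).fit()
print(logit.summary())
# The C(group):C(seeking_disability) interaction term tests whether the
# handout's association with MSVT failure differs by disability-seeking status.
```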

Results

There were no differences between patients who received the Intervention handout and those who received the Control handout in age, level of education, VA disability rating (“percent service connection”), sex, self-reported history of TBI with loss of consciousness, or other demographic variables (Table 1). In addition, the groups did not differ in number of patients who were receiving VA service-connected disability benefits at the time of evaluation, number of patients who were receiving such benefits for psychiatric or neurological conditions, or number who reported that they were seeking disability benefits. When the Intervention and Control groups were further subdivided according to disability-seeking status, there were no main effects of group or disability-seeking status, and no interactions of group by disability-seeking status, for age, education, or percent service connection. There were also no differences among these four subgroups in sex, race, self-reported history of TBI with loss of consciousness, number of patients receiving service-connected disability benefits, number of patients service-connected for psychiatric or neurological conditions, or number of patients in litigation.

Analysis of the association between group (Intervention vs. Control) and MSVT performance (pass vs. fail) in the entire sample showed no difference between groups (chi-square = 0.64, not significant; Fig. 3), suggesting that the deterrence theory-based handout had no effect on the frequency with which patients provided invalid data. The next analysis included disability-seeking status (i.e., whether or not a patient was in the process of applying for disability benefits at the time of evaluation) as an additional predictor. Seeking disability benefits was significantly associated with failing the MSVT (chi-square = 17.33, p < .001). The interaction of group by disability-seeking status was also significant (chi-square = 3.89, p < .05); contrary to our hypothesis, the Intervention handout was associated with a lower frequency of failing the MSVT only among patients who were seeking disability benefits (Fig. 4).

Fig. 3.

Percentage of patients in the overall sample who passed versus failed MSVT as a function of the handout they received.

Fig. 4.

Percentage of patients who failed MSVT as a function of disability-seeking status and the handout they received.

The use of the MSVT alone to determine data invalidity could raise concerns, as reliance on a single PVT, without additional data or context, is not entirely consistent with current clinical guidelines (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009; but see Proto et al., 2014). An additional analysis was thus undertaken, using instead, as an outcome measure, the clinician's overall judgment, at the time of evaluation, of whether the patient had provided valid data. This was a clinical determination based on the MSVT, any other validity indices the clinician had administered, behavioral observations, overall patterns of performance, and other factors. For this analysis, 8 patients in the Intervention group and 11 patients in the Control group were excluded, as the clinician was unable to determine confidently whether valid neuropsychological data had been provided. Using clinicians’ determination of valid versus invalid data as an outcome measure produced results very similar to those reported earlier. As in the analysis of MSVT performance, there was no overall difference between the Intervention and Control groups (chi-square = 0.40, not significant), and disability-seeking was strongly associated with clinicians’ determination of invalid data (chi-square = 21.43, p < .001). The interaction of group by disability-seeking status was again significant (chi-square = 4.79, p < .05), indicating that the Intervention handout was associated with a lower frequency of providing invalid data only among patients seeking disability benefits.

Discussion

This study examined whether a brief, cost-free, theoretically based intervention could decrease the frequency with which neuropsychology clinic patients failed the MSVT, a commonly used PVT. The intervention consisted of a handout, based on deterrence theory, that explicitly detailed the positive consequences of trying one's hardest on cognitive tests and the negative consequences of not trying one's hardest. Across the entire sample, patients who were given the Intervention handout failed the MSVT as often as those who were given the Control handout. However, among patients who were applying for disability benefits at the time of evaluation, the Intervention handout was associated with a lower frequency of failing the MSVT, compared to the Control handout. There was no group difference among patients who were not seeking disability benefits.

We had hypothesized that the Intervention handout would decrease the occurrence of MSVT failure among patients who were not seeking disability benefits, because these patients’ motivations might be more easily modifiable than those of patients who might have specific, concrete incentives to perform poorly. It is thus unclear why the intervention was effective only among patients who were in the process of applying for disability benefits or an increase in disability benefits. One possible explanation lies in the intervention's grounding in deterrence theory, which assumes that behavior is a product of rational choice. Perhaps a subset of patients had deliberately intended not to perform to the best of their ability, in the hope that poor performance would increase the likelihood of their disability claim being approved. The consequences listed in the handout might have caused such patients to reconsider their actions and to provide a more accurate reflection of their true cognitive abilities. Conversely, among patients without such financial incentive, the reasons for providing invalid data might have been less deliberate—perhaps simply reflecting a lack of engagement with the tasks, distrust of the procedures or the healthcare system, apathy, or other factors (Carone et al., 2010)—and might thus have been less influenced by an intervention that aimed to encourage patients to consider the consequences of their actions. To the extent that these patients’ noncredible performance might have been less “calculated” or “premeditated,” the findings would be consistent with a rational choice theory (Becker, 1974).

While it appears that the intervention encouraged some disability-seeking patients to provide valid data, an alternative interpretation could be that it simply alerted patients to the presence of PVTs, allowing them to pass the MSVT but still exaggerate impairment on other cognitive indices (Suhr & Gunstad, 2000). To investigate this possibility, the Free Recall (FR) trial of the MSVT was used as an index of verbal memory, because no other memory test had been administered to the entire sample and because FR is a test of memory rather than effort (Green, 2005). We reasoned that if the intervention simply alerted patients to the presence of PVTs, disability-seeking patients in the Intervention group who passed the MSVT would perform more poorly on FR than disability-seeking patients in the Control group who passed the MSVT. This was not the case, as the groups performed equivalently on FR (Intervention mean = 61.5, standard deviation [SD] = 17.6; Control mean = 64.2, SD = 15.2; t = −0.44, not significant). Similarly, we reasoned that if the intervention simply alerted patients to the presence of PVTs, then among disability-seeking patients in the Intervention group, those who passed the MSVT would perform equivalently on FR to those who failed the MSVT. Again, this was not the case, as disability-seeking patients in the Intervention group who passed the MSVT outperformed those who failed it (“pass” mean = 61.5, SD = 17.6; “fail” mean = 41.4, SD = 18.0; t = −3.25, p < .01). These analyses thus support the interpretation that the Intervention handout was associated with improved performance on cognitive tests, as well as on validity indices, among a subgroup of patients.

Although the intervention effectively reduced the occurrence of invalid data in a subgroup of the sample, several factors indicate that these findings should be considered preliminary. The rate of MSVT failure in this clinically referred sample might be considered quite high. Specifically, 35% of the overall sample—and 26% of those who were not applying for disability benefits—failed the MSVT. This high rate of PVT failure could potentially raise concerns about sample characteristics or even about false-positive results on the MSVT. However, these findings are comparable to percentages reported in other studies. For example, in a review of freestanding PVT failure rates in military and Veteran samples, Denning (2015) reported a failure rate of 29% averaged across all samples and an MSVT failure rate of 28%; most of these samples consisted of Veterans or active duty military personnel seen for clinical or research—not forensic—evaluations. PVT failure rates of 35%–37% have been reported in large samples of active duty service members seen for clinical evaluations, compared to 54% and 52% in disability evaluations (Armistead-Jehle & Buican, 2012; Grills & Armistead-Jehle, 2016). A recent survey of neuropsychologists in the Veterans Health Administration (Young et al., 2016) reported an estimated failure rate of 23% on symptom and performance validity tests. Thus, the frequency of invalid data may be higher in this population than in other clinical settings, possibly related to the seemingly high proportion of patients receiving or seeking disability benefits. The extent to which these results would generalize to patient populations other than those at a VA Medical Center thus remains unknown.

Similarly, the intervention probably could not be used in its present form in medicolegal settings, because plaintiff attorneys might not allow their clients to sign and initial the handout. In addition, disability-seeking status, which proved to be a strong predictor of MSVT performance, was assessed only by self-report.

In this study, the Intervention and Control handouts were given to patients in the waiting room, prior to the clinical interview, so that the examiner would remain blinded to which handout the patient had been given. It might have been more effective for patients to read the handout after the interview, immediately before beginning formal testing. In that case, the information in the handouts might have been more “available” to patients as they began testing and might thus have exerted a greater influence on their test-taking behavior.

Finally, the intervention used in this study can be considered a rather “low dose.” In particular, it is difficult to know with certainty how carefully patients read the handouts or even how clearly they understood the information presented. Handouts were used for pragmatic reasons in this study; for example, if the intervention had been presented orally to patients immediately before formal testing, the examiner would no longer have been blinded to each patient's group assignment. It seems likely, though, that presenting the information orally, face to face, would allow it to be delivered in a more compelling manner while better ensuring that the patient was attending to and comprehending the information. Studies are planned in our laboratory to further explore these possibilities.

Even though the intervention was effective only for a subgroup of patients, these results are quite promising. The intervention itself entails no financial cost, requires only a minimal amount of time, and produced no observable adverse effects. Thus, even if it were effective only in a minority of cases, it would still provide substantial benefit at essentially no cost. Future research could address the limitations listed earlier to identify and optimize an intervention that is more effective in the general clinical population, including patients who are not seeking disability benefits. Such a cost-free intervention could significantly improve the diagnosis and treatment of patients with possible cognitive disorders, while reducing the costs currently associated with evaluations that do not provide valid data. Ultimately, such an intervention would increase the utility and value of neuropsychological evaluation to both patients and referral sources.

Funding

This work was supported by a National Academy of Neuropsychology clinical research grant to MDH.

Conflict of Interest

None declared.

References

Armistead-Jehle, P., & Buican, B. (2012). Evaluation context and symptom validity test performance in a U.S. military sample. Archives of Clinical Neuropsychology, 27, 828–839.
Becker, G. S. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76, 169–217.
Becker, G. S. (1974). Crime and punishment: An economic approach. In G. S. Becker & W. M. Landes (Eds.), Essays in the economics of crime and punishment (pp. 1–54). New York: Columbia University Press.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. Archives of Clinical Neuropsychology, 20, 419–426.
Carone, D. A., Iverson, G. L., & Bush, S. S. (2010). A model to approaching and providing feedback to patients regarding invalid test performance in clinical neuropsychological evaluations. The Clinical Neuropsychologist, 24(5), 759–778.
Chafetz, M., & Underhill, J. (2013). Estimated costs of malingered disability. Archives of Clinical Neuropsychology, 28(7), 633–639.
Denning, J. (2015). Performance validity test failure rates in military and Veteran populations: The impact on clinical, research, and disability findings [Abstract]. The Clinical Neuropsychologist, 29, 342.
Erdal, K. (2004). The effects of motivation, coaching, and knowledge of neuropsychology on the simulated malingering of head injury. Archives of Clinical Neuropsychology, 19(1), 73–88.
Gervais, R. O., Green, P., Allen, L. M., & Iverson, G. L. (2001). Effects of coaching on symptom validity testing in chronic pain patients presenting for disability assessments. Journal of Forensic Neuropsychology, 2(2), 1–19.
Gorny, I., & Merten, T. (2007). Symptom information—warning—coaching: How do they affect successful feigning in neuropsychological assessment? Journal of Forensic Neuropsychology, 4(4), 71–97.
Grasmick, H., & Bryjak, G. (1980). The deterrent effect of perceived severity of punishment. Social Forces, 59, 471–491.
Green, P. (2003). Word Memory Test for Windows: User's manual. Edmonton, Alberta, Canada: Green's Publishing.
Green, P. (2005). Medical Symptom Validity Test manual. Edmonton, Alberta, Canada: Green's Publishing.
Grills, C. E., & Armistead-Jehle, P. (2016). Performance validity test and Neuropsychological Assessment Battery Screening Module performances in an active duty sample with a history of concussion. Applied Neuropsychology: Adult, 23(4), 1–7.
Gunstad, J., & Suhr, J. A. (2001). Courting the clinician: Efficacy of the full and abbreviated forms of the Portland Digit Recognition Test: Vulnerability to coaching. The Clinical Neuropsychologist, 15(3), 397.
Gunstad, J., & Suhr, J. A. (2004). Use of the Abbreviated Portland Digit Recognition Test in simulated malingering and neurological groups. Journal of Forensic Neuropsychology, 4(1), 33–47.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference statement on neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Horner, M. D., VanKirk, K. K., Dismuke, C. E., Turner, T. H., & Muzzy, W. (2014). Inadequate effort on neuropsychological evaluation is associated with increased healthcare utilization. The Clinical Neuropsychologist, 28, 703–713.
Iverson, G. L. (2006). Ethical issues associated with the assessment of exaggeration, poor effort, and malingering. Applied Neuropsychology, 13, 77–90.
Johnson, J. L., Bellah, C. G., Dodge, T., Kelley, W., & Livingston, M. M. (1998). Effect of warning on feigned malingering on the WAIS-R in college samples. Perceptual and Motor Skills, 87(1), 152–154.
Johnson, J. L., & Lesniak-Karpiak, K. (1997). The effect of warning on malingering on memory and motor tasks in college samples. Archives of Clinical Neuropsychology, 12(3), 231–238.
King, J., & Sullivan, K. A. (2009). Deterring malingered psychopathology: The effect of warning simulating malingerers. Behavioral Sciences & the Law, 27(1), 35–49.
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6), 741–776.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24(8), 1094–1102.
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29(7), 614–624.
Schenk, K., & Sullivan, K. A. (2010). Do warnings deter rather than produce more sophisticated malingering? Journal of Clinical and Experimental Neuropsychology, 32(7), 752–762.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists’ beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22(2), 213–223.
Sollman, M. J., & Berry, D. T. R. (2011). Detection of inadequate effort on neuropsychological testing: A meta-analytic update and extension. Archives of Clinical Neuropsychology, 26, 774–789.
Suhr, J. A., & Gunstad, J. (2000). The effects of coaching on the sensitivity and specificity of malingering measures. Archives of Clinical Neuropsychology, 15(5), 415–424.
Sullivan, K., Keane, B., & Deffenti, C. (2001). Malingering on the RAVLT: Part I. Deterrence strategies. Archives of Clinical Neuropsychology, 16(7), 627–641.
Sullivan, K., & Richer, C. (2002). Malingering on subjective complaint tasks: An exploration of the deterrent effects of warning. Archives of Clinical Neuropsychology, 17(7), 691–708.
Ward, D., Stafford, M., & Gray, L. (2006). Rational choice, deterrence, and theoretical integration. Journal of Applied Social Psychology, 36, 571–585.
Wong, J. L., Lerner-Poppen, L., & Durham, J. (1998). Does warning reduce obvious malingering on memory and motor tasks in college samples? International Journal of Rehabilitation & Health, 4(3), 153–165.
Young, J. C., Roper, B. L., & Arentsen, T. J. (2016). Validity testing and neuropsychology practice in the VA healthcare system: Results from recent practitioner survey. The Clinical Neuropsychologist, 30(4), 497–514.
Youngjohn, J. R., Lees-Haley, P. R., & Binder, L. M. (1999). Comment: Warning malingerers produces more sophisticated malingering. Archives of Clinical Neuropsychology, 14(6), 511–515.