Abstract

Larrabee (2008) applied chained likelihood ratios to selected performance validity measures (PVMs) to identify non-valid performance on neuropsychological tests. He presented a method of combining PVMs with different sensitivities and specificities into an overall probability of non-validity. We applied his methodology to a set of 11 validity measures using a sample of 255 subjects. The results show that, across various combinations of two or three PVMs, a high probability of invalid performance can be established using the chained likelihood ratio method. This study advances the ability of clinicians to chain various PVMs together and calculate the probability that a set of data is invalid.

Introduction

The purpose of internal and add-on performance validity measures (PVMs) is to objectively evaluate diminished effort during neuropsychological assessment (Larrabee, 2003, 2008; Meyers & Volbrecht, 2003). The National Academy of Neuropsychology has published a position paper on the use of PVMs (Bush et al., 2005), which specifically recommends administration of at least two PVMs during an evaluation. Although some authors consider the term Symptom Validity Test (SVT; Bush et al., 2005; Larrabee, 2008) interchangeable with PVM, we delineate the two terms by using SVT to indicate validity measures related to self-report and PVM to indicate validity measures related to performance on neuropsychological testing (Larrabee, 2012).

It is important that conclusions extrapolated from a neuropsychological assessment are based on valid neuropsychological performance (Slick, Sherman, & Iverson, 1999; Vickery, Berry, Inman, Harris, & Orey, 2001), and the use of PVMs is the standard of practice (Bush et al., 2005; Larrabee, 2008). PVMs are used to detect invalid test performance, but should not be considered "malingering tests." For consistency and clarity, PVMs are not tests of malingering; they are tests of performance validity, meaning that they assess whether the data are valid rather than identify those who are purposefully exaggerating their symptoms. The conclusion/diagnosis of malingering is based on a number of factors, only one of which is performance on PVMs (Meyers, 2007; Meyers, Volbrecht, Axelrod, & Reinsch-Boothby, 2011; Slick et al., 1999). When multiple PVMs are failed in the context of external incentive, with no viable developmental, psychiatric, or neurologic explanation, the data may be consistent with malingering.

The usefulness of a PVM is determined by its sensitivity, specificity, positive predictive power (PPP), and negative predictive power (NPP) (Bianchini, Mathias, & Greve, 2001). Sensitivity is the true positive rate, the percentage of invalid data sets correctly identified as invalid. Poor sensitivity reflects an increase in false negative errors, meaning that truly invalid data sets go undetected. Specificity is the true negative rate, the percentage of valid data sets correctly identified as valid (Etcoff & Kampfer, 1996). Poor specificity reflects false positive errors, in which valid cases are incorrectly classified as invalid (Greve & Bianchini, 2004).

Bianchini and colleagues (2001) stated that PPP and NPP refer to the predictive accuracy of an individual test and depend on the base rate of the particular disorder being diagnosed. PPP is the probability that a patient has a given diagnosis when he or she tests positive for it; it is calculated by dividing the number of true positives by the number of true positives plus false positives (Baldessarini, Finklestein, & Arana, 1983). Conversely, NPP is the probability that a patient does not have the diagnosis when he or she tests negative; it is calculated by dividing the number of true negatives by the number of true negatives plus false negatives (Baldessarini et al., 1983). Sensitivity and specificity vary for each PVM and will, in turn, yield different PPPs and NPPs. After completing these calculations, it is possible to assess the accuracy of any given test. Because a single measure is rarely completely accurate, the use of multiple validity measures is recommended (Larrabee, 2003; Meyers & Volbrecht, 2003; Slick et al., 1999; Victor, Boone, Serpa, Buehler, & Ziegler, 2009).
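
As a brief arithmetic sketch of these two definitions (the function and the example counts below are ours, for illustration only):

def predictive_powers(true_pos, false_pos, true_neg, false_neg):
    # PPP = true positives / (true positives + false positives)
    # NPP = true negatives / (true negatives + false negatives)
    ppp = true_pos / (true_pos + false_pos)
    npp = true_neg / (true_neg + false_neg)
    return ppp, npp

# Hypothetical 2 x 2 table: 60 true positives, 5 false positives,
# 90 true negatives, 20 false negatives
print(predictive_powers(60, 5, 90, 20))  # approximately (0.923, 0.818)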

Multiple indicators are required to accurately determine the validity of a participant's performance (Bianchini et al., 2001), because combining indicators can compensate for the limited sensitivity of any single measure (Greve & Bianchini, 2004) and improve discrimination between valid and invalid performance (Victor, Boone, Serpa, Buehler, & Ziegler, 2009). Diagnostic criteria likewise state that, for results to be considered possibly invalid, the patient must fail multiple indicators within a set of tests; failing a single test is insufficient (Slick et al., 1999).

In order to detect probable invalid data, diagnostic criteria mandate that multiple indicators of purposeful symptom exaggeration and poor effort be measured (Vickery et al., 2001). Detection methods have typically relied on the use of PPP and false positive rates; however, Larrabee (2008) applied chained likelihood ratios to show the probability of invalidity when one, two, or three PVMs are failed. By chaining likelihood ratios, Larrabee (2008) showed that patients who fail three PVMs in a setting with a base rate of non-validity of 0.40 have, on average, a .99 probability of providing invalid data. For more information on likelihood ratios, see Grimes and Schulz (2005). These high probabilities are maintained even when calculated using a low base rate of invalid data. Based on Larrabee's (2008) findings, the current study was designed to examine whether such high probabilities would also be found when chained likelihood ratios were applied to other PVMs.

Method

The data set was obtained from a general neuropsychology practice located in the Midwest. The presence or absence of external incentives was not a variable used in group formation. To be included, participants had to have completed all the tests required for the PVMs; individuals missing any of the tests in this battery were excluded from the analysis. Two hundred and fifty-five patients met the criteria for inclusion. The participants had a mean age of 34.55 years (SD = 12.12; range 17–73) and a mean of 12.55 years of education (SD = 1.84; range 5–18). In the group, 231 were right-handed and 24 were left-handed. There were 190 Caucasian participants, 22 African American participants, 18 Hispanic participants, and 25 participants of other backgrounds. The sample consisted of 208 men and 47 women. Of these individuals, 95 were diagnosed with mental health issues, 24 had mild traumatic brain injury, 49 had multiple diagnoses or did not meet the criteria for one of the other diagnostic groups, and 87 had a combination of cognitive complaints and a behavioral health diagnosis; 179 were not in litigation and 76 were in litigation. All subjects were independently functioning; none was diagnosed with a dementia.

The initial examination of pass or fail on the Word Memory Test (WMT) was done using two methods. On the WMT, a pass was coded if the Immediate, Delayed, and Consistency scores were all 82.5% or above. If any of the three scores was below 82.5%, then the Genuine Memory Impairment Profile (GMIP; Green, Flaro, & Courtney, 2009) was calculated per the instructions in Green and colleagues (2009). If the difference between the easy and hard items was ≥30, then a pass was coded (even if the Immediate, Delayed, or Consistency score was below 82.5%). This method resulted in 186 (72.9%) passing the WMT and 69 (27.1%) failing the WMT. None of the 255 individuals would have qualified as moderately or severely impaired in accordance with Green and colleagues (2009) and thereby qualified for use of the GMIP. Next, the WMT pass and fail rates were calculated based only on the Immediate, Delayed, and Consistency scores: if any of the three scores fell below 82.5%, a failure was coded, and if all three scores were at or above 82.5%, a pass was coded. Using this method alone, without the GMIP, 157 (61.6%) passed the WMT and 98 (38.4%) failed the WMT. Interestingly, the 38.4% failure rate on the WMT was very similar to the 0.40 base rate reported by Larrabee (2003) and Larrabee, Millis, and Meyers (2009). We chose to use only the second method (cutoff below 82.5%) to establish pass and fail for two reasons: (a) the population in this sample was not moderately to severely cognitively impaired, and all participants were independently functioning at the time of assessment; and (b) the use of the GMIP with this sample would change the classification of 29 individuals, or 11.3% (29/255), which could introduce a substantial false negative rate into the coding of pass and fail on the WMT. Using only the regular cutoff score of 82.5%, of the 76 individuals in litigation, 31 (40.8%) passed the WMT and 45 (59.2%) failed the WMT. Of the 179 non-litigants, 126 (70.4%) passed the WMT and 53 (29.6%) failed the WMT.
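
A minimal sketch of the two coding rules described above (the function name, argument names, and use of percentage scores are our own illustrative choices, not part of the WMT scoring materials):

def wmt_pass(immediate, delayed, consistency, easy_minus_hard=None, use_gmip=False):
    # Both methods: pass if Immediate, Delayed, and Consistency are all 82.5% or above
    if min(immediate, delayed, consistency) >= 82.5:
        return True
    # First method only: a primary failure is recoded as a pass when the
    # GMIP easy-hard difference is 30 or greater (Green et al., 2009)
    if use_gmip and easy_minus_hard is not None and easy_minus_hard >= 30:
        return True
    return False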

To be considered for the data set, participants had to have completed the Meyers Neuropsychological Battery (MNB; Meyers & Rohling, 2004; Rohling, Meyers, & Millis, 2003). From this battery, 10 internal PVMs and one SVT were used: Forced Choice (FC; Meyers & Volbrecht, 2003), Dichotic Listening (DL; Meyers, Roberts, Bayless, Volkert, & Evitts, 2002), Sentence Repetition (SR; Strauss, Sherman, & Spreen, 2006), Judgment of Line Orientation (JL; Benton, Hamsher, Varney, & Spreen, 1983), Token Test (TT; Strauss et al., 2006), AVLT-Recognition (AV; Strauss et al., 2006), Finger Tapping-Dominant Hand (FTD; Reitan & Wolfson, 1985), Memory Error Pattern (MEP), Reliable Digit Span (RDS; Meyers & Volbrecht, 2003), WMTMNB, and the Meyers Index (MI; Meyers et al., 2013; Meyers, Millis, & Volkert, 2002).

All of the PVMs included in the MNB use very conservative cutoff scores; this is a purposeful choice by the author. The cutoff scores are set at a zero false positive rate for persons with a loss of consciousness (LOC) of less than 8 days who are independently functioning (Meyers & Volbrecht, 2003). Other authors have taken a less stringent approach to establishing cutoff scores (Boone et al., 2000). It is the authors' intention to use the conservative cutoff scores of these PVMs, as these cutoffs are already published and have been in use for many years.

The FC test (Brandt, Rubinsky, & Larson, 1985) is a forced-choice recognition task made up of 20 items; the participant must choose between two recognition choices on each item. A score of 10 or lower is at or below chance and was the original failure cutoff (Meyers, Galinsky, & Volbrecht, 1999; Meyers, Morrison, & Miller, 2001; Meyers & Volbrecht, 2003). A cutoff score of 13 has zero false positives for individuals with less than 8 days of LOC and is consistent with the method used to set the cutoffs for the other internal PVMs in the MNB.

In the DL task, the subject responds to different stimuli presented simultaneously to both ears, and the score is based on accuracy. To pass this PVM, a score of 10 or higher is needed (Meyers & Volbrecht, 2003). Before the DL is administered, a preliminary screening determines whether the subject's hearing is within normal parameters; if hearing is found to be inadequate, the DL is not used (Hamberger & Tamny, 1999).

For the SR test (Strauss et al., 2006), the participant repeats sentences and the score is given for the sentences repeated correctly. To pass this PVM, a score of 10 or higher is needed (Meyers & Volbrecht, 2003).

On the JL, the participants are presented with pictures and are asked to match full or partial lines. A failure on this PVM is a raw, uncorrected number-correct score of 12 or lower (Meyers et al., 1999; Meyers & Volbrecht, 2003).

In the TT (Strauss et al., 2006), a score of 150 or less is a failing score (Meyers & Volbrecht, 2003). This test uses manipulation of shapes and colors to test subtle receptive language dysfunction.

To pass the AV test (Strauss et al., 2006), a measure of auditory and visual memory performance, a score of 10 or higher is needed (Meyers et al., 2001; Meyers & Volbrecht, 2003).

On the FTD test, the obtained finger tapping score for the subject's dominant hand is compared with a calculated expected finger tapping score. The formula is: estimated FT = (Block Design scale score × 0.361) + (Digit Symbol scale score × 0.491) + (CFT raw Copy score × 0.185) + 31.34. The difference is the estimated FT minus the actual mean finger tapping score for the dominant hand; if the difference is >10, the PVM is failed (Meyers & Volbrecht, 2003).
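
An illustrative sketch of this calculation (the function and variable names are ours, not part of the MNB scoring software):

def ftd_pvm_fails(block_design_ss, digit_symbol_ss, cft_copy_raw, actual_ft_dominant):
    # Estimated finger tapping score from the regression formula above
    estimated_ft = (0.361 * block_design_ss
                    + 0.491 * digit_symbol_ss
                    + 0.185 * cft_copy_raw
                    + 31.34)
    # A difference of more than 10 points is coded as a failure on this PVM
    return (estimated_ft - actual_ft_dominant) > 10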

The MEP classification is based on the relationship among the Immediate, Delayed, and Recognition scores on the Complex Figure Test (CFT; Meyers & Meyers, 1995; Meyers et al., 1996). The profiles for this PVM are delineated as Attention, Encoding, Storage, Retrieval, Consolidation, Peak, and Other (Meyers & Volbrecht, 1999, 2003). Production of an Attention, Encoding, or Storage MEP indicates failure on this PVM; these profiles are expected only in individuals who are institutionalized due to cognitive impairment (Meyers & Volbrecht, 1999, 2003). An Attention MEP is identified when the CFT Immediate, Delayed, and Recognition scores are all below a T score of 20 (Meyers & Meyers, 1995). An Encoding MEP occurs when the CFT Immediate and Delayed scores are below a T score of 20 and Recognition is higher (usually above a T score of 20; Meyers & Meyers, 1995). The Storage MEP slope is calculated using the CFT Immediate, Delayed, and Recognition scores (Meyers & Meyers, 1995).

RDS is calculated by summing the longest span of Digit Span forward on which both trials are passed and the longest span of Digit Span backward on which both trials are passed. Several studies have examined the use of reliable digits as a PVM. Meyers and Volbrecht (1998, 2003) proposed a cutoff of 6 or below for a zero false positive rate; Greiffenstein, Baker, and Gola (1994) proposed a cutoff of 7; and Greve and colleagues (2009) also found that a cutoff of 6 or less separated valid from invalid test performance. For the MNB validity measure, a score of 6 or below is considered a failure on the RDS (Meyers & Volbrecht, 1998, 1999, 2003). For purposes of this study, we used the WAIS-III version of the Digit Span test (Wechsler, 1997).
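
A brief sketch of this calculation (the data structure, mapping each span length to the pass/fail outcome of its two trials, is our own illustrative choice):

def reliable_digit_span(forward, backward):
    # Longest span with both trials passed, forward plus backward
    def longest(trials):
        passed = [span for span, (trial1, trial2) in trials.items() if trial1 and trial2]
        return max(passed, default=0)
    return longest(forward) + longest(backward)

# Example: both forward trials passed through span 5, both backward trials through span 3
rds = reliable_digit_span({3: (True, True), 4: (True, True), 5: (True, True), 6: (True, False)},
                          {2: (True, True), 3: (True, True), 4: (False, True)})
print(rds, "fail" if rds <= 6 else "pass")  # 8 pass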

A new PVM was added to the MNB after the publication of the original nine PVMs. This new PVM was developed using a sample of 264 consecutive referrals, with no overlap with the current study data, who were referred for a neuropsychological assessment and were given the MNB and the WMT (Green, 2003, 2007). The mean age was 35.49 years (SD = 12.57), with 12.62 years of education (SD = 1.88). The group was composed of 229 right-handed and 26 left-handed persons; 17 were African American, 200 Caucasian, 3 Asian, 2 Native American, 18 Hispanic, and 14 Pacific Islander; 52 were women and 203 were men. The discriminant function equation was: (AVLT Delayed raw score × 0.226) + (Finger Localization Dominant Hand raw score × 0.187) − 6.935. It is important to note that this calculation does not include any of the WMT scores; it only attempts to predict whether the WMT would be passed, based on scores below 82.5% and the GMIP (Green et al., 2009), and is therefore distinct from the WMT. The cutoff score used was less than or equal to −0.5, which is consistent with the zero false positive rates used for the other PVMs in this study. The discriminant function correctly classified 74.6% of the total group. Details of this PVM are presented in the MNB electronic manual (Meyers, 2013).
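
A minimal sketch of this discriminant function as described above (the function and variable names are ours):

def wmt_mnb_fails(avlt_delayed_raw, finger_localization_dominant_raw):
    # Discriminant function score; values of -0.5 or lower are coded as a
    # predicted WMT failure, that is, a failure on the WMTMNB PVM
    score = 0.226 * avlt_delayed_raw + 0.187 * finger_localization_dominant_raw - 6.935
    return score <= -0.5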

The 11th validity measure is the MI (derived from the MMPI-2) or the MI-r (derived from the MMPI-2-RF). The MI and MI-r are self-report symptom validity (SVT) measures. They are calculated similarly, using a weighting of the validity scales of each test, as presented in Tables 1 and 2. Meyers and colleagues (2013) showed that the MI and the MI-r are equivalent SVT measures. A score of 5 or more is considered not valid.

Table 1.

Weightings of scores on validity scales for the original MI using the MMPI-2 (Meyers, 2002)

Scale Scores Weight Scores Weight 
F-K 1–9 10+ 
FT 75–89 90+ 
FBS (Raw) 25–29 30+ 
F(p) 75–89 90+ 
Ds-r 75–89 90+ 
Es 21–30 20− 
O-S 100–149 150+ 

Notes: F-K = F-K raw score; FT = F scale T score; FBS = Fake Bad Scale raw score; F(p) = Infrequency-pathology scale T score; Ds-r = Dissimulation Scale-Revised T score; ES = Ego Strength T score; O-S = Obvious-Subtle; MI = Meyers Index for the MMPI-2.

Table 2.

Weightings of scores on the validity scale for the MMPI-2-RF used to calculate the MI-r (Meyers et al., 2013)

Scale Score Weight Score Weight Score Weight 
F-r <= 74 75–89 90 + 
Fp-r <= 74 75–89 90 + 
Fs <= 74 75–89 90 + 
FBS-r <= 74 75–89 90 + 
RBS <= 74 75–89 90 + 

Notes: F-r = Infrequent Responses; Fp-r = Infrequent Psychopathology Responses; Fs = Infrequent Somatic Responses; FBS-r = Symptom Validity; RBS = Response Bias Scale.

Statistical Procedures

For the sample utilized, the base rate of invalid data was set at the rate of WMT failure, which was 38.4%; base rate odds were computed with the formula base rate/(1 − base rate). Sensitivity for each PVM was then determined as the percentage of participants who failed the WMT who also failed that PVM, whereas specificity was determined as the percentage of participants who passed the WMT who also passed that PVM (Table 3). The current sample of 255 cases was used to set the sensitivity and specificity for each PVM in this study; these values were not based on the Meyers and Volbrecht (2003) data sample.
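
A minimal sketch of these calculations, assuming pass/fail status is coded as parallel lists of booleans (True = failed); the function and variable names are ours:

def diagnostic_stats(wmt_failed, pvm_failed):
    base_rate = sum(wmt_failed) / len(wmt_failed)
    pretest_odds = base_rate / (1 - base_rate)
    # Split the PVM results by the WMT criterion
    criterion_fail = [p for w, p in zip(wmt_failed, pvm_failed) if w]
    criterion_pass = [p for w, p in zip(wmt_failed, pvm_failed) if not w]
    sensitivity = sum(criterion_fail) / len(criterion_fail)                 # failed both / WMT failures
    specificity = sum(not p for p in criterion_pass) / len(criterion_pass)  # passed both / WMT passes
    return pretest_odds, sensitivity, specificity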

Table 3.

Diagnostic statistics and probabilities of invalid performance for individual performance validity measures

Test Sensitivity Specificity Likelihood ratio Pre-test oddsa Post-test odds Post-test probability 
FC 0.720 0.995 144.000 0.624 89.856 .989 
DL 0.664 0.971 22.897 0.624 14.288 .935 
SR 0.634 0.994 105.667 0.624 65.936 .985 
JL 0.642 0.994 107.000 0.624 66.768 .985 
TT 0.618 0.994 103.000 0.624 64.272 .985 
AV 0.736 0.974 28.308 0.624 17.973 .947 
FTD 0.665 0.959 16.220 0.624 10.121 .910 
MEP 0.677 0.994 112.833 0.624 70.408 .986 
RDS 0.659 0.947 12.434 0.624 7.759 .876 
WMTMNB 0.817 0.906 8.691 0.624 5.423 .844 
MI 1.00 0.631 2.710 0.624 1.691 .628 

Notes: FC = Forced Choice; DL = Dichotic Listening; SR = Sentence Repetition; JL = Judgment of Line Orientation; TT = Token Test; AV = AVLT-Recognition; FTD = Finger Tapping-Dominant Hand; MEP = Memory Error Pattern; RDS = Reliable Digit Span; WMTMNB = Word Memory Test-Meyers Neuropsychological Battery; MI = Meyers Index (from MMPI-2 or MMPI-2-RF).

aBase rate of WMT failure = 0.384; odds are 0.384/(1 − 0.384) = 0.624.

Once these figures were calculated, the likelihood ratio for each test was calculated by dividing the sensitivity by (1 − specificity), as done by Larrabee (2008). The pre-test probabilities of failing each PVM were calculated using the failure rates given by the sample group. This was calculated by dividing the number of failures in a given PVM by the total number of passes and failures for that particular measure. Subsequently, the pre-test odds were calculated by dividing the pre-test probability of each test by (1 − pre-test probability). Post-test odds of each test were calculated by multiplying the pre-test odds by the likelihood ratio. Using these calculations, post-test probabilities for each test were calculated by dividing the post-test odds by (post-test odds + 1) (Table 3).
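
For a single failed measure, the calculation reduces to a few lines (a sketch; the values in the comment are the Forced Choice row of Table 3):

def single_test_probability(sensitivity, specificity, pretest_odds):
    likelihood_ratio = sensitivity / (1 - specificity)
    posttest_odds = pretest_odds * likelihood_ratio
    posttest_probability = posttest_odds / (posttest_odds + 1)
    return likelihood_ratio, posttest_odds, posttest_probability

# FC: sensitivity .720, specificity .995, pre-test odds .624
print(single_test_probability(0.720, 0.995, 0.624))  # approximately 144.0, 89.856, .989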

In order to calculate the post-test probability of invalid data for a pair of test failures (Table 4), one multiplies the post-test odds of the first test in the pair by the likelihood ratio of the second test, using the values calculated in Table 3. Post-test probabilities are then calculated, as described above, by dividing the post-test odds by (post-test odds + 1). To calculate the post-test probability of invalid data for a set of three test failures (Table 5), one multiplies the post-test odds of the combination of the first two tests, as calculated in Table 4, by the likelihood ratio of the third test in the set, taken from Table 3; post-test probabilities are again the post-test odds divided by (post-test odds + 1). If desired, post-test odds for a set of n failures may be calculated by multiplying the post-test odds of the first (n − 1) tests by the likelihood ratio of the last test; post-test probabilities are then the post-test odds divided by (post-test odds + 1).
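
The same chaining rule can be written generically for any number of failed measures (a sketch; the example values reproduce the FC/DL pair in Table 4 and the FC/DL/SR triplet in Table 5):

def chained_probability(pretest_odds, likelihood_ratios):
    # Multiply the pre-test odds by the likelihood ratio of each failed test,
    # then convert the final odds to a probability
    odds = pretest_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds, odds / (odds + 1)

print(chained_probability(0.624, [144.000, 22.897]))           # approximately 2,057.4, .999
print(chained_probability(0.624, [144.000, 22.897, 105.667]))  # approximately 217,403, .999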

Table 4.

Diagnostic statistics and probabilities of malingering for pairs of performance validity measure failures

Test combination Pre-test oddsa Likelihood ratiob Post-test odds Post-test probability 
FC/DL 89.856 22.897 2,057.433 .999 
FC/SR 89.856 105.667 9,494.814 .999 
FC/JL 89.856 107.000 9,614.592 .999 
FC/TT 89.856 103.000 9,255.168 .999 
FC/AV 89.856 28.308 2,543.644 .999 
FC/FTDif 89.856 16.220 1,457.464 .999 
FC/MEP 89.856 112.833 10,138.722 .999 
FC/RDS 89.856 12.434 1,117.270 .999 
FC/WMTMNB 89.856 8.691 780.938 .999 
FC/MI 89.856 2.710 243.510 .996 
DL/SR 14.288 105.667 1,509.770 .999 
DL/JL 14.288 107.000 1,528.816 .999 
DL/TT 14.288 103.000 1,471.664 .999 
DL/AV 14.288 28.308 404.465 .998 
DL/FTDif 14.288 16.220 231.751 .996 
DL/MEP 14.288 112.833 1,600.875 .999 
DL/RDS 14.288 12.434 177.657 .994 
DL/WMTMNB 14.288 8.691 124.177 .992 
DL/MI 14.288 2.710 38.720 .975 
SR/JL 65.936 107.000 7,055.152 .999 
SR/TT 65.936 103.000 6,791.408 .999 
SR/AV 65.936 28.308 1,866.516 .999 
SR/FTDif 65.936 16.220 1,096.482 .999 
SR/MEP 65.936 112.833 7,439.757 .999 
SR/RDS 65.936 12.434 819.848 .999 
SR/WMTMNB 65.936 8.691 573.050 .998 
SR/MI 65.936 2.710 178.687 .994 
JL/TT 66.768 103.000 6,877.104 .999 
JL/AV 66.768 28.308 1,890.069 .999 
JL/FTDif 66.768 16.220 1,082.977 .999 
JL/MEP 66.768 112.833 7,533.634 .999 
JL/RDS 66.768 12.434 830.193 .999 
JL/WMTMNB 66.768 8.691 580.281 .999 
JL/MI 66.768 2.710 180.941 .995 
TT/AV 64.272 28.308 1,819.412 .999 
TT/FTDif 64.272 16.220 1,042.492 .999 
TT/MEP 64.272 112.833 7,252.003 .999 
TT/RDS 64.272 12.434 799.158 .999 
TT/WMTMNB 64.272 8.691 558.588 .998 
TT/MI 64.272 2.710 174.177 .994 
AV/FTDif 17.973 16.220 291.522 .997 
AV/MEP 17.973 112.833 2,027.948 .999 
AV/RDS 17.973 12.434 221.238 .996 
AV/WMTMNB 17.973 8.691 156.203 .994 
AV/MI 17.973 2.710 48.707 .980 
FTDif/MEP 10.121 112.833 1,141.983 .999 
FTDif/RDS 10.121 12.434 125.845 .992 
FTDif/WMTMNB 10.121 8.691 87.962 .989 
FTDif/MI 10.121 2.710 27.428 .965 
MEP/RDS 70.408 12.434 875.453 .999 
MEP/WMTMNB 70.408 8.691 611.916 .998 
MEP/MI 70.408 2.710 190.806 .995 
RDS/WMTMNB 7.759 8.691 67.433 .985 
RDS/MI 7.759 2.710 21.027 .955 
WMTMNB/MI 5.423 2.710 14.209 .934 

Notes: FC = Forced Choice; DL = Dichotic Listening; SR = Sentence Repetition; JL = Judgment of Line Orientation; TT = Token Test; AV = AVLT-Recognition; FTDif = Finger Tapping-Dominant Hand difference score; MEP = Memory Error Pattern; RDS = Reliable Digit Span; WMTMNB = Word Memory Test-Meyers Neuropsychological Battery; MI = Meyers Index (from MMPI-2 or MMPI-2-RF).

aPre-test odds are the post-test odds from Table 3 for the first test in the pair.

bLikelihood ratio (LR) is the LR for the second test in each pair, taken from Table 3.

Table 5.

Diagnostic statistics and probabilities for failure of three performance validity measures

Test combination Pre-test oddsa Likelihood ratiob Post-test odds Post-test probability 
FC/DL/SR 2,057.433 105.667 217,402.773 .999 
FC/DL/JL 2,057.433 107.000 220,145.331 .999 
FC/DL/TT 2,057.433 103.000 211,915.599 .999 
FC/DL/AV 2,057.433 28.308 58,241.813 .999 
FC/DL/FTDif 2,057.433 16.220 33,371.563 .999 
FC/DL/MEP 2,057.433 112.833 232,146.338 .999 
FC/DL/RDS 2,057.433 12.434 25,582.122 .999 
FC/DL/WMTMNB 2,057.433 8.691 17,881.150 .999 
FC/DL/MI 2,057.433 2.710 5,575.643 .999 
FC/SR/JL 9,494.814 107.000 1,015,945.100 .999 
FC/SR/TT 9,494.814 103.000 977,965.842 .999 
FC/SR/AV 9,494.814 28.308 268,779.195 .999 
FC/SR/FTDif 9,494.814 16.220 154,005.883 .999 
FC/SR/MEP 9,494.814 112.833 1,071,328.350 .999 
FC/SR/RDS 9,494.814 12.434 118,058.517 .999 
FC/SR/WMTMNB 9,494.814 8.691 82,519.429 .999 
FC/SR/MI 9,494.814 2.710 25,730.950 .999 
FC/JL/TT 9,614.592 103.000 990,302.976 .999 
FC/JL/AV 9,614.592 28.308 272,169.870 .999 
FC/JL/FTDif 9,614.592 16.220 155,948.682 .999 
FC/JL/MEP 9,614.592 112.833 1,084,843.260 .999 
FC/JL/RDS 9,614.592 12.434 119,547.837 .999 
FC/JL/WMTMNB 9,614.592 8.691 83,560.419 .999 
FC/JL/MI 9,614.592 2.710 26,055.544 .999 
FC/TT/AV 9,255.168 28.308 261,995.296 .999 
FC/TT/FTDif 9,255.168 16.220 150,118.825 .999 
FC/TT/MEP 9,255.168 112.833 1,044,288.370 .999 
FC/TT/RDS 9,255.168 12.434 115,078.759 .999 
FC/TT/WMTMNB 9,255.168 8.691 80,436.665 .999 
FC/TT/MI 9,255.168 2.710 25,081.505 .999 
FC/AV/FTDif 2,543.644 16.220 41,257.906 .999 
FC/AV/MEP 2,543.644 112.833 287,006.983 .999 
FC/AV/RDS 2,543.644 12.434 31,627.670 .999 
FC/AV/WMTMNB 2,543.644 8.691 22,106.810 .999 
FC/AV/MI 2,543.644 2.710 6,893.275 .999 
FC/FTDif/MEP 1,457.464 112.833 164,450.036 .999 
FC/FTDif/RDS 1,457.464 12.434 18,122.107 .999 
FC/FTDif/WMTMNB 1,457.464 8.691 12,666.820 .999 
FC/FTDif/MI 1,457.464 2.710 3,949.727 .999 
FC/MEP/RDS 10,138.722 12.434 126,064.869 .999 
FC/MEP/WMTMNB 10,138.722 8.691 88,115.633 .999 
FC/MEP/MI 10,138.722 2.710 27,475.937 .999 
FC/RDS/WMTMNB 1,117.270 8.691 9,710.194 .999 
FC/RDS/MI 1,117.270 2.710 3,027.802 .999 
FC/WMTMNB/MI 780.938 2.710 2,116.342 .999 
DL/SR/JL 1,509.770 107.000 161,545.390 .999 
DL/SR/TT 1,509.770 103.000 155,506.310 .999 
DL/SR/AV 1,509.770 28.308 42,738.569 .999 
DL/SR/FTDif 1,509.770 16.220 24,488.469 .999 
DL/SR/MEP 1,509.770 112.833 170,351.878 .999 
DL/SR/RDS 1,509.770 12.434 18,772.480 .999 
DL/SR/WMTMNB 1,509.770 8.691 13,121.411 .999 
DL/SR/MI 1,509.770 2.710 4,091.477 .999 
DL/JL/TT 1,528.816 103.000 157,468.048 .999 
DL/JL/AV 1,528.816 28.308 43,277.723 .999 
DL/JL/FTDif 1,528.816 16.220 24,797.400 .999 
DL/JL/MEP 1,528.816 112.833 172,500.896 .999 
DL/JL/RDS 1,528.816 12.434 19,009.298 .999 
DL/JL/WMTMNB 1,528.816 8.691 13,286.940 .999 
DL/JL/MI 1,528.816 2.710 4,143.091 .999 
DL/TT/AV 1,471.664 28.308 41,659.865 .999 
DL/TT/FTDif 1,471.664 16.220 23,870.390 .999 
DL/TT/MEP 1,471.664 112.833 166,052.264 .999 
DL/TT/RDS 1,471.664 12.434 18,298.670 .999 
DL/TT/WMTMNB 1,471.664 8.691 12,790.232 .999 
DL/TT/MI 1,471.664 2.710 3,988.209 .999 
DL/AV/FTDif 404.465 16.220 6,560.422 .999 
DL/AV/MEP 404.465 112.833 45,636.999 .999 
DL/AV/RDS 404.465 12.434 5,029.118 .999 
DL/AV/WMTMNB 404.465 8.691 3,515.205 .999 
DL/AV/MI 404.465 2.710 1,096.100 .999 
DL/FTDif/MEP 231.751 112.833 26,149.161 .999 
DL/FTDif/RDS 231.751 12.434 2,881.592 .999 
DL/FTDif/WMTMNB 231.751 8.691 2,014.148 .999 
DL/FTDif/MI 231.751 2.710 628.045 .998 
DL/MEP/RDS 1,600.875 12.434 19,905.280 .999 
DL/MEP/WMTMNB 1,600.875 8.691 13,913.205 .999 
DL/MEP/MI 1,600.875 2.710 4,338.371 .999 
DL/RDS/WMTMNB 177.657 8.691 1,544.017 .999 
DL/RDS/MI 177.657 2.710 481.450 .998 
DL/WMTMNB/MI 124.177 2.710 336.520 .997 
SR/JL/TT 7,055.152 103.000 726,680.656 .999 
SR/JL/AV 7,055.152 28.308 199,717.243 .999 
SR/JL/FTDif 7,055.152 16.220 114,434.565 .999 
SR/JL/MEP 7,055.152 112.833 796,053.966 .999 
SR/JL/RDS 7,055.152 12.434 87,723.760 .999 
SR/JL/WMTMNB 7,055.152 8.691 61,316.326 .999 
SR/JL/MI 7,055.152 2.710 19,119.462 .999 
SR/TT/AV 6,791.408 28.308 192,251.178 .999 
SR/TT/FTDif 6,791.408 16.220 110,156.638 .999 
SR/TT/MEP 6,791.408 112.833 766,294.939 .999 
SR/TT/RDS 6,791.408 12.434 84,444.367 .999 
SR/TT/WMTMNB 6,791.408 8.691 59,024.127 .999 
SR/TT/MI 6,791.408 2.710 18,404.716 .999 
SR/AV/FTDif 1,866.516 16.220 30,274.890 .999 
SR/AV/MEP 1,866.516 112.833 210,604.600 .999 
SR/AV/RDS 1,866.516 12.434 23,208.260 .999 
SR/AV/WMTMNB 1,866.516 8.691 16,221.891 .999 
SR/AV/MI 1,866.516 2.710 5,058.258 .999 
SR/FTDif/MEP 1,096.482 112.833 123,719.354 .999 
SR/FTDif/RDS 1,096.482 12.434 13,633.657 .999 
SR/FTDif/WMTMNB 1,096.482 8.691 9,529.525 .999 
SR/FTDif/MI 1,096.482 2.710 2,971.466 .999 
SR/MEP/RDS 7,439.757 12.434 92,505.939 .999 
SR/MEP/WMTMNB 7,439.757 8.691 64,658.928 .999 
SR/MEP/MI 7,439.757 2.710 20,161.742 .999 
SR/RDS/WMTMNB 819.848 8.691 7,125.299 .999 
SR/RDS/MI 819.848 2.710 2,221.788 .999 
SR/WMTMNB/MI 573.050 2.710 1,522.966 .999 
JL/TT/AV 6,877.104 28.308 194,677.060 .999 
JL/TT/FTDif 6,877.104 16.220 111,546.627 .999 
JL/TT/MEP 6,877.104 112.833 775,964.276 .999 
JL/TT/RDS 6,877.104 12.434 85,509.911 .999 
JL/TT/WMTMNB 6,877.104 8.691 59,768.911 .999 
JL/TT/MI 6,877.104 2.710 18,636.952 .999 
JL/AV/FTDif 1,890.069 16.220 30,656.919 .999 
JL/AV/MEP 1,890.069 112.833 213,262.155 .999 
JL/AV/RDS 1,890.069 12.434 23,501.118 .999 
JL/AV/WMTMNB 1,890.069 8.691 16,426.590 .999 
JL/AV/MI 1,890.069 2.710 5,122.087 .999 
JL/FTDif/MEP 1,082.977 112.833 122,195.544 .999 
JL/FTDif/RDS 1,082.977 12.434 13,465.736 .999 
JL/FTDif/WMTMNB 1,082.977 8.691 9,412.153 .999 
JL/FTDif/MI 1,082.977 2.710 2,934.868 .999 
JL/MEP/RDS 7,533.634 12.434 93,673.205 .999 
JL/MEP/WMTMNB 7,533.634 8.691 65,474.813 .999 
JL/MEP/MI 7,533.634 2.710 20,416.148 .999 
JL/RDS/WMTMNB 830.193 8.691 7,215.207 .999 
JL/RDS/MI 830.193 2.710 2,249.823 .999 
JL/WMTMNB/MI 580.281 2.710 1,572.562 .999 
TT/AV/FTDif 1,819.412 16.220 29,510.863 .999 
TT/AV/MEP 1,819.412 112.833 205,289.714 .999 
TT/AV/RDS 1,819.412 12.434 22,622.569 .999 
TT/AV/WMTMNB 1,819.412 8.691 15,812.510 .999 
TT/AV/MI 1,819.412 2.710 4,930.607 .999 
TT/FTDif/MEP 1,042.492 112.833 117,627.500 .999 
TT/FTDif/RDS 1,042.492 12.434 12,962.346 .999 
TT/FTDif/WMTMNB 1,042.492 8.691 9,060.298 .999 
TT/FTDif/MI 1,042.492 2.710 2,825.153 .999 
TT/MEP/RDS 7,252.003 12.434 90,171.405 .999 
TT/MEP/WMTMNB 7,252.003 8.691 63,027.158 .999 
TT/MEP/MI 7,252.003 2.710 19,652.928 .999 
TT/RDS/WMTMNB 799.158 8.691 6,945.482 .999 
TT/RDS/MI 799.158 2.710 2,165.718 .999 
TT/WMTMNB/MI 558.588 2.710 1,513.733 .999 
AV/FTDif/MEP 291.522 112.833 32,894.543 .999 
AV/FTDif/RDS 291.522 12.434 3,624.785 .999 
AV/FTDif/WMTMNB 291.522 8.691 2,533.618 .999 
AV/FTDif/MI 291.522 2.710 790.025 .999 
AV/MEP/RDS 2,027.948 12.434 25,215.505 .999 
AV/MEP/WMTMNB 2,027.948 8.691 17,624.896 .999 
AV/MEP/MI 2,027.948 2.710 5,495.739 .999 
AV/RDS/WMTMNB 221.238 8.691 1,922.779 .999 
AV/RDS/MI 221.238 2.710 599.555 .998 
AV/WMTMNB/MI 156.203 2.710 423.310 .998 
FTDif/MEP/RDS 1,141.983 12.434 14,199.417 .999 
FTDif/MEP/WMTMNB 1,141.983 8.691 9,924.974 .999 
FTDif/MEP/MI 1,141.983 2.710 3,094.774 .999 
FTDif/RDS/WMTMNB 125.845 8.691 1,093.719 .998 
FTDif/RDS/MI 125.845 2.710 341.040 .997 
FTDif/WMTMNB/MI 87.962 2.710 238.377 .996 
MEP/RDS/WMTMNB 875.453 8.691 7,608.562 .999 
MEP/RDS/MI 875.453 2.710 2,372.478 .999 
MEP/WMTMNB/MI 611.916 2.710 1,658.292 .999 
RDS/WMTMNB/MI 67.433 2.710 182.743 .995 

Notes: FC = Forced Choice; DL = Dichotic Listening; SR = Sentence Repetition; JL = Judgment of Line Orientation; TT = Token Test; AV = AVLT-Recognition; FTDif = Finger Tapping-Dominant Hand difference score; MEP = Memory Error Pattern; RDS = Reliable Digit Span; WMTMNB = Word Memory Test-Meyers Neuropsychological Battery; MI = Meyers Index (from MMPI-2 or MMPI-2-RF).

aPre-test odds are the post-test odds from Table 4 for the first two tests in the triplet.

bLikelihood ratio (LR) is the LR from Table 3 for the third test in the triplet.

Results

Table 3 displays the sensitivities and specificities for each validity measure, calculated from the pass/fail rates in the sample of 255 subjects. The sensitivities and specificities were used to compute likelihood ratios, pre-test odds (base rate/(1 − base rate); the base rate was determined by the pass/fail rate of the WMT without the GMIP), post-test odds, and post-test probabilities of invalid data for each of the individual PVMs, following Larrabee (2008).

Table 4 displays the odds and probabilities of invalid results for each possible pair of tests. The post-test probabilities in Tables 4 and 5 were calculated following the methodology of Larrabee (2008). The pre-test odds in Table 4 are the post-test odds of the first test in the pair, taken from Table 3, and the likelihood ratio is that of the second test in the pair, also taken from Table 3. Similarly, the pre-test odds in Table 5 for each combination of three tests are the post-test odds from Table 4 for the first two tests in the combination, and the likelihood ratio is that of the third test, taken from Table 3.

The data in Tables 3–5 are derived under the assumption that the 10 PVM indicators and the MI are independent of one another. This assumption of independence, which underlies the use of chained likelihood ratios, was evaluated by computing the PVM intercorrelations for the 255 subjects who had complete data on all 10 PVM indicators and the MI. PVM correlations ranged from −.041 (SR/MEP) to .478 (AV/WMTMNB); the average correlation was .123, which was not statistically significantly different from zero, indicating that the PVM indicators could be treated as independent of one another.

Table 6 portrays the post-test odds and post-test probabilities for failure of any one, two, or three PVMs. Within this table, sensitivity is set at 0.80 and specificity at 0.90, based on the average sensitivity (0.77) and specificity (0.93) of the individual measures listed in Table 3, and each test is assumed to be independent of the others. We also used the same base rate assumption of 40% used by Larrabee (2008) and identified by Larrabee and colleagues (2009) and Mittenberg, Patton, Canyock, and Condit (2002); as we were attempting to replicate Larrabee's (2008) method, we chose to use this already identified rate in our study. Table 6 shows a wide range of post-test probabilities when only one PVM is failed, but as an increasing number of measures are failed, the range of posterior probabilities narrows, becoming uniformly high and relatively unaffected by the base rate by the time three PVMs are failed. See Table 7 for details on the intercorrelations of the PVMs. The selected PVMs used in this study are sufficiently independent of one another to allow this chaining method to be used, as we have demonstrated in this article.
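
The values in Table 6 can be reproduced with a few lines of arithmetic (a sketch; small rounding differences arise because the table rounds the pre-test odds before multiplying):

sensitivity, specificity = 0.80, 0.90      # likelihood ratio = 8.0
likelihood_ratio = sensitivity / (1 - specificity)
for base_rate in (0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90):
    pretest_odds = base_rate / (1 - base_rate)
    cells = []
    for n_failed in (1, 2, 3):
        odds = pretest_odds * likelihood_ratio ** n_failed
        cells.append("{:.3f} ({:.3f})".format(odds, odds / (odds + 1)))
    print(base_rate, *cells)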

Table 6.

Probability of malingering utilizing independent tests, each with sensitivity of 0.80 and specificity of 0.90 at different base rates of malingering

Base rate of malingering Pre-test odds Likelihood ratio Post-test odds failing one test (probability) Post-test odds failing two tests (probability) Post-test odds failing three tests (probability) 
0.10 0.111 8.0 0.889 (.471) 7.112 (.877) 56.896 (.983) 
0.20 0.250 8.0 2.000 (.667) 16.000 (.941) 128.000 (.992) 
0.30 0.429 8.0 3.432 (.774) 27.456 (.965) 219.648 (.995) 
0.40 0.667 8.0 5.336 (.842) 42.688 (.978) 341.344 (.997) 
0.50 1.000 8.0 8.000 (.889) 64.000 (.985) 512.000 (.998) 
0.60 1.500 8.0 12.000 (.923) 96.000 (.990) 768.000 (.999) 
0.70 2.333 8.0 18.664 (.949) 149.312 (.993) 1,194.496 (.999) 
0.80 4.000 8.0 32.000 (.970) 256.000 (.996) 2,048.000 (.999) 
0.90 9.000 8.0 72.000 (.986) 576.000 (.998) 4,608.000 (.999) 
Table 7.

Performance validity measures intercorrelations (range: −0.042, 0.418; average: 0.151, SD: 0.118)

Test combination Correlation Test combination Correlation 
FC/DL .336 JL/AV .245 
FC/SR .094 JL/FTDif .192 
FC/JL −.002 JL/MEP .403 
FC/TT −.034 JL/RDS .179 
FC/AV .339 JL/WMTMNB .192 
FC/FTDif .097 JL/MI .066 
FC/MEP .178 TT/AV .216 
FC/RDS .164 TT/FTDif .135 
FC/WMTMNB .318 TT/MEP −.024 
FC/MI −.031 TT/RDS −.028 
DL/SR .153 TT/WMTMNB .095 
DL/JL .035 TT/MI .190 
DL/TT .144 AV/FTDif .241 
DL/AV .350 AV/MEP .292 
DL/FTDif .241 AV/RDS .259 
DL/MEP .164 AV/WMTMNB .418 
DL/RDS .276 AV/MI .066 
DL/WMTMNB .214 FTDif/MEP .149 
DL/MI .078 FTDif/RDS .055 
SR/JL −.028 FTDif/WMTMNB .206 
SR/TT .280 FTDif/MI .001 
SR/AV .083 MEP/RDS .190 
SR/FTDif .048 MEP/WMTMNB .223 
SR/MEP −.042 MEP/MI .015 
SR/RDS .132 RDS/WMTMNB .117 
SR/WMTMNB .114 RDS/MI −.005 
SR/MI .210 WMTMNB/MI .089 
JL/TT .239   

Notes: FC = Forced Choice; DL = Dichotic Listening; SR = Sentence Repetition; JL = Judgment of Line Orientation; TT = Token Test; AV = AVLT-Recognition; FTDif = Finger Tapping-Dominant Hand difference score; MEP = Memory Error Pattern; RDS = Reliable Digit Span; WMTMNB = Word Memory Test-Meyers Neuropsychological Battery; MI = Meyers Index (from MMPI-2 or MMPI-2-RF).

Discussion

As shown in Tables 3–5, the post-test probability of having invalid data increases from a range of .579–.982 with the failure of one test, to a range of .916–.999 with the failure of two tests, and to .989–.999 for the failure of three tests. The probability of having invalid data can be quickly calculated from Table 3 by multiplying the post-test odds of the first test in a combination by the likelihood ratios of the subsequent tests in the combination. As evidenced by the range of post-test probabilities in Table 5, calculating the post-test probability of invalid data for four-test combinations is redundant, because combinations of more than three tests yield similarly high post-test probabilities.

The aforementioned ranges of post-test probabilities for various test combinations serve two main purposes. First, they support the requirement of failure on multiple indicators when testing for non-valid results. When only one test is failed, it can only be said with roughly 50%–90% certainty that the data were invalid; however, as more tests are failed, the post-test probability increases swiftly, with the failure of any three tests indicating a near certainty that the participant provided invalid data. Second, aggregation via the chaining of likelihood ratios undermines the argument for vote-counting approaches. In a vote-counting approach, a person must fail a majority of the given indicators in order to be suspected of malingering; by using likelihood ratios, however, it becomes evident that it is only necessary to fail three of the 10 PVMs, as evidenced in Table 5, rather than the 6 of 10 indicators demanded by a vote-counting approach (Larrabee, 2008).

Table 6 shows that, when sensitivity (0.80) and specificity (0.90) are held constant and each PVM is treated as an independent test, post-test probabilities for one failed PVM span a wide range across base rates from .10 to .90, whereas the range narrows considerably for two failed PVMs and even more so for three failed PVMs. Therefore, chained likelihood ratios can be used to detect invalid data even when the base rate of failure is low, and failing at least three PVMs still indicates a high probability that the patient is producing invalid data. As indicated in Table 6, as the base rate of invalid performance increases, the number of failed PVMs needed to detect invalid performance decreases. For example, an invalid performance base rate of 0.10 would require three PVM failures for the probability of having invalid data to exceed .90, whereas with an invalid performance base rate of 0.6 or higher, only one PVM failure would be needed for the probability of having invalid data to exceed .90, using a test with a sensitivity of 0.70 and specificity of 0.90.

One difference between the Larrabee (2008) study and the current study is the criterion measure: Larrabee used the Portland Digit Recognition Test (PDRT; Binder, 1993) as his standard for identifying pass and fail for his group of subjects. Larrabee used a known-groups design, with a sample of litigating individuals who performed significantly worse than chance on the PDRT and who met the Slick and colleagues (1999) criteria for malingered neurocognitive dysfunction. The current paper does not use a known-groups design to evaluate detection of malingering, but rather uses the WMT as the standard to detect invalid performance, and it did not use a criterion of worse-than-chance performance on the WMT. Larrabee also used different PVMs, with specificities closer to 0.90, lower than most of the specificities in the current paper, and sensitivities closer to 0.50 than to the average of 0.77 in the current paper. Comparing Table 6 in the present paper with Table 6 in Larrabee (2008), both use the same pre-test base rate (0.40) and false positive rate (0.10), but Larrabee used a sensitivity of 0.50 compared with the 0.70 used in the current study.

We feel that the method used in the current study yields results that are more useful across various clinical settings than those of Larrabee (2008). Larrabee (2008) used a known-groups design; our findings show that his chaining approach can also be used when individual group membership is not known. The chaining approach is therefore applicable not only in forensic settings but also in routine clinical settings.

Although the use of chained likelihood ratios to determine the validity of data from PVMs is worthy of further study on its own merits, it could also be argued that the probability of purposefully providing invalid data could be calculated using likelihood ratios for any set of tests that have suitably low intercorrelations. This study, in conjunction with previous studies by Larrabee (2003, 2008), suggests that it would be reasonable to apply chained likelihood ratios to a broader range of measures utilized in clinical settings. This article builds on the work of Larrabee (2008) and presents more PVMs that the clinician can use. Even if the MNB is not used, if the tests that comprise the PVMs are given, then they can additionally serve as embedded PVMs. One caution concerns the use of the MI, which is an SVT, whereas the other measures are PVMs. The difference in sensitivity and specificity between the MI and the other tests presented in Table 3 may be due to the nature of the SVT as a self-report rather than a performance-based task. These results indicate that self-report may not be well correlated with PVM test performance.

One of the strengths of this study is that even if a clinician does not use all of these PVMs, the data obtained can still be useful. Chaining of likelihood ratios is a methodology that is portable across different PVM and SVT measures: one need only estimate the base rate of invalid performance/malingering and know the sensitivity and specificity of the measures being used in order to compute the post-test probability from administration of multiple PVMs and SVTs.

Conflict of Interest

J.E.M. is one of the authors of the Rey Complex Figure Test and Recognition Trial and the author of the Meyers Neuropsychological Battery software.

References

Baldessarini, R. J., Finklestein, S., & Arana, G. W. (1983). The predictive power of diagnostic tests and the effect of prevalence of illness. Archives of General Psychiatry, 40, 569–573.
Benton, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contributions to neuropsychological assessment: A clinical manual. New York: Oxford University Press.
Bianchini, K. J., Mathias, C. W., & Greve, K. W. (2001). Symptom validity testing: A critical review. The Clinical Neuropsychologist, 15(1), 19–45.
Binder, L. M. (1993). Assessment of malingering after mild head trauma with the Portland Digit Recognition Test. Journal of Clinical and Experimental Neuropsychology, 15, 170–182.
Boone, K. B., Lu, P., Sherman, D., Palmer, B., Back, C., Shamieh, E., et al. (2000). Validation of a new technique to detect malingering of cognitive symptoms: The b Test. Archives of Clinical Neuropsychology, 15(3), 227–241.
Brandt, J., Rubinsky, E., & Larson, G. (1985). Uncovering malingered amnesia. Annals of the New York Academy of Sciences, 444, 502–503.
Bush, S. S., Ruff, R. M., Tröster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Etcoff, L. M., & Kampfer, K. M. (1996). Practical guidelines in the use of symptom validity and other psychological tests to measure malingering and symptom exaggeration in traumatic brain injury cases. Neuropsychology Review, 6(4), 171–201.
Green, P. (2003). Word Memory Test. Edmonton, Alberta: Green Publishing.
Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18(1), 43–68.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23, 741–750.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218–224.
Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of negative response bias: A methodological commentary with recommendations. Archives of Clinical Neuropsychology, 19, 533–541.
Greve, K. W., Bianchini, K. J., Etherton, J. L., Meyers, J. E., Curtis, K. L., & Ord, J. S. (2009). The Reliable Digit Span test in chronic pain: Classification accuracy in detecting malingered pain-related disability. The Clinical Neuropsychologist, 24, 137–152.
Grimes, D. A., & Schulz, K. F. (2005). Epidemiology 3. Refining clinical diagnosis with likelihood ratios. Lancet, 365, 1500–1505.
Hamberger, M. J., & Tamny, T. R. (1999). Auditory naming and temporal lobe epilepsy. Epilepsy Research, 35, 229–243.
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection of malingering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22, 666–679.
Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18(4), 625.
Larrabee, G. J., Millis, S. R., & Meyers, J. E. (2009). 40 plus or minus 10, a new magical number: Reply to Russell. The Clinical Neuropsychologist, 23(5), 841–849.
Meyers, J. E. (2007). Malingering mild traumatic brain injury: Behavioral approaches used by both malingering actors and probable malingerers. In K. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press.
Meyers, J. E. (2013). Electronic manual for the Meyers Neuropsychological Battery/System. www.meyersneuropsychological.com
Meyers, J. E., Bayless, J., & Meyers, K. R. (1996). The Rey Complex Figure: Memory error patterns and functional abilities. Applied Neuropsychology, 3, 89–92.
Meyers, J. E., Galinsky, A., & Volbrecht, M. (1999). Malingering and mild brain injury: How low is too low. Applied Neuropsychology, 6, 208–216.
Meyers, J. E., & Meyers, K. R. (1995). Rey Complex Figure Test and Recognition Trial: Professional manual. Odessa, FL: Psychological Assessment Resources.
Meyers, J. E., Miller, R. M., Haws, N. A., Murphy-Tafiti, J. L., Curtis, T. D., Rupp, Z. W., et al. (2013). An adaptation of the MMPI-2 Meyers Index for the MMPI-2-RF. Applied Neuropsychology: Adult.
Meyers, J. E., Millis, S. R., & Volkert, K. (2002). A validity index for the MMPI-2. Archives of Clinical Neuropsychology, 17(2), 157–169.
Meyers, J. E., Morrison, A. L., & Miller, J. C. (2001). How low is too low, revisited: Sentence Repetition and AVLT-Recognition in the detection of malingering. Applied Neuropsychology, 8(4), 234–241.
Meyers, J. E., Roberts, R. J., Bayless, J. D., Volkert, K. T., & Evitts, P. E. (2002). Dichotic listening: Expanded norms and clinical application. Archives of Clinical Neuropsychology, 17, 79–90.
Meyers, J. E., & Rohling, M. L. (2004). Validation of the Meyers Short Battery on mild TBI patients. Archives of Clinical Neuropsychology, 19, 637–651.
Meyers, J. E., & Volbrecht, M. (1998). Validation of reliable digits for detection of malingering. Assessment, 5, 301–305.
Meyers, J. E., & Volbrecht, M. (1999). Detection of malingerers using the Rey Complex Figure and Recognition Trial. Applied Neuropsychology, 6, 201–207.
Meyers, J. E., Volbrecht, M., Axelrod, B. N., & Reinsch-Boothby, L. (2011). Embedded symptom validity tests and overall neuropsychological test performance. Archives of Clinical Neuropsychology, 26, 8–15.
Meyers, J. E., & Volbrecht, M. E. (2003). A validation of multiple malingering detection methods in a large clinical sample. Archives of Clinical Neuropsychology, 18, 261–276.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Reitan, R., & Wolfson, D. (1985). The Halstead-Reitan Neuropsychological Test Battery: Theory and interpretation. Tucson: Neuropsychology Press.
Rohling, M. L., Meyers, J. E., & Millis, S. R. (2003). Neuropsychological impairment following traumatic brain injury: A dose-response analysis. The Clinical Neuropsychologist, 17, 289–302.
Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545–561.
Strauss, E., Sherman, E. M., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). New York: Oxford University Press.
Vickery, C. D., Berry, D. T. R., Inman, T. H., Harris, M. J., & Orey, S. A. (2001). Detection of inadequate effort on neuropsychological testing: A meta-analytic review of selected procedures. Archives of Clinical Neuropsychology, 16, 45–73.
Victor, T. L., Boone, K. B., Serpa, J., Buehler, J., & Ziegler, E. A. (2009). Interpreting the meaning of multiple symptom validity test failure. The Clinical Neuropsychologist, 23(2), 297–313.
Wechsler, D. (1997). WAIS-III: Administration and scoring manual. San Antonio, TX: The Psychological Corporation.