Abstract

Response consistency (CNS) is considered in free-standing performance validity measures such as the Medical Symptom Validity Test (MSVT). This study examined the utility of CNS scores on the Test of Memory Malingering (TOMM). CNS indices were derived in a non-clinical undergraduate sample randomized to control (n = 73), naïve simulator (n = 73), and coached simulator (n = 73) groups. Two of the three TOMM CNS measures showed higher classification rates in identifying naïve simulators than the standard TOMM criteria, and the CNS measures classified coached simulators better than the standard TOMM criteria. Coached simulators outperformed naïve simulators on the standard TOMM scores but not on the CNS measures, suggesting that the CNS measures are resilient to coaching. In a separate clinical sample of veterans (N = 92), TOMM CNS scores exhibited classification rates comparable with standard TOMM scoring when the MSVT was used as the performance validity criterion. Overall, the findings support the use of TOMM CNS scores, especially in settings in which examinee coaching is likely.

Introduction

Assessing performance validity via multiple methods is recommended by practice guidelines and policy statements of national professional organizations in neuropsychology (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Performance validity data can be drawn from a number of sources including qualitative consideration of examinee behavior (Larrabee, 1990), embedded (Greiffenstein, Baker, & Gola, 1994) and composite measures (Schutte, Millis, Axelrod, & VanDyke, 2011), and free-standing instruments (Green, 2002; Tombaugh, 1996).

Consideration of response consistency (CNS) is a component of some free-standing performance validity indicators. The Word Memory Test (WMT; Green, 2002) and the Medical Symptom Validity Test (MSVT; Green, 2004) both include a measure of response CNS as one of their three validity indices. The CNS score is the percentage of items answered consistently across the Immediate Recognition (IR) and Delayed Recognition (DR) trials. Although research specifically examining the incremental validity of the CNS trial on the WMT is lacking, there has been substantial research on the measure as a whole (Gervais, Rohling, Green, & Ford, 2004; Gorissen, Sanz, & Schmand, 2005; Green, 2007; Green, Rohling, Lees-Haley, & Allen, 2001; Rienstra, Spaan, & Schmand, 2009). The WMT has been reported to be more sensitive than other free-standing performance validity measures (Gervais et al., 2004).

The MSVT is similar to the WMT but contains fewer items and requires less administration time (Green, 2004). The MSVT has shown low false-positive rates in cases involving memory impairment (Howe, Anderson, Kaufman, Sachs, & Loring, 2007; Singhal, Green, Ashaye, Shankar, & Gill, 2009) and neurologic dysfunction (Carone, 2008). Green, Montijo, and Brockhaus (2011) examined WMT and MSVT performance in clinical referrals for dementia evaluation. Data from both the WMT and the MSVT were available for a subset of the sample (n = 23), and the measures showed 100% agreement in classifying individuals as showing a genuine memory impairment (GMI) profile. Reported failure rates in samples of veterans range from 17% (Whitney, Shepard, Williams, Davis, & Adams, 2009) to 37% (Axelrod & Schutte, 2011), consistent with rates reported for other performance validity measures.

The Test of Memory Malingering (TOMM; Tombaugh, 1996) is a frequently used performance validity measure that has been well-validated in a variety of populations (Ashendorf, Constantinou, & McCaffrey, 2004; Iverson, Le Page, Koehler, Shojania, & Badii, 2007; Rees, Tombaugh, Gansler, & Moczynski, 1998; Tombaugh, 1997). Additional research has explored abbreviated (Hilsabeck, Gordon, Hietpas-Wilson, & Zartman, 2011) and computer-administered (Yantz & McCaffrey, 2007) formats of the measure. Similar to concerns with other performance validity indicators in dementia evaluations (cf. Dean, Victor, Boone, Philpott, & Hess, 2009), the false-positive rate on the TOMM has been reported to be unacceptably high, approaching 75% in one study (Teichner & Wagner, 2004). Other authors have been critical of the TOMM for low sensitivity (Armistead-Jehle & Gervais, 2011; Green, 2011). In addition, as with any published performance validity indicator, reduced sensitivity over time is possible due to attorney coaching (Youngjohn, 1995) or information obtained from the Internet (Ruiz, Drake, Glass, Marcotte, & Van Gorp, 2002).

Given the concerns regarding classification accuracy and the ease with which information about the TOMM can be obtained, the test would appear to be a candidate for modified scoring aimed at improving classification accuracy and lowering susceptibility to coaching. To this end, Davis, Ramos, Sherer, Bertram, and Wall (2009) developed CNS indices on the TOMM and noted improved classification accuracy over the standard TOMM cutoffs in a group of undergraduate volunteers randomized to a coached simulator condition. These initial findings are described in detail below in combination with data from a second phase of data collection. Subsequently, Gunner, Miele, Lynch, and McCaffrey (2010, 2012) reported preliminary data on TOMM CNS measures in a sample (N = 48) of disability claimants and litigants, the majority of whom reported a history of mild traumatic brain injury (TBI). Using an approach similar to that of Davis and colleagues (2009), Gunner and colleagues examined CNS on three sets of two-trial comparisons (i.e., Trial 1 and Trial 2, Trial 1 and Retention, and Trial 2 and Retention). They also examined a three-trial CNS measure (i.e., Trial 1, Trial 2, and Retention). Using the WMT as the performance validity criterion, a cut score of 10 or more inconsistent responses on the three-trial CNS measure demonstrated high sensitivity (0.71) and perfect specificity.

This report presents findings from two studies examining CNS indices on the TOMM. The first study combined data from two prospective experiments with undergraduate volunteers, which served as the derivation sample to examine initial findings of classification accuracy of TOMM CNS indices. The second study explored the external validity of CNS indices by examining their utility in a moderately sized, well-characterized clinical sample presenting with a range of neurologic and psychiatric conditions. The goal of this two-part research process was to complement the high internal validity of an experimental design with clinical data to improve external validity.

Study 1: Derivation Sample

Method

Participants

The derivation sample comprised undergraduate volunteers recruited from a research pool at a Midwestern university who participated in a larger simulation study examining performance validity. Inclusion criteria were age of at least 18 years, current enrollment, and English fluency. A history of a neurologic or psychiatric condition associated with neurocognitive consequences (e.g., TBI, learning disability) was an exclusion criterion. Participants were randomly assigned to control (n = 73), naïve simulator (n = 73), and coached simulator (n = 73) groups. The sample was 87% women, with an average age of 21.8 years (SD = 6.4; range 18–54). Eighty-one percent of participants were Caucasian, 15% were African American, and 4% were Latino/Hispanic or Asian American. The average educational level was 12.8 years (SD = 1.1), and 94% of participants were right-handed.

Measures

Background questionnaire

The background questionnaire provided information on socio-demographic characteristics (e.g., age, gender, education level, and handedness).

Reading material and quiz

A brief reading passage was administered, followed by a nine-item, multiple-choice measure. This method was used by Rapport, Farchione, Coleman, and Axelrod (1998) to examine compliance with study instructions. Participants assigned to the control and naïve simulator groups received information describing spinal cord injury, whereas those in the coached simulator condition were given material on mild TBI.

Test of Memory Malingering

The TOMM (Tombaugh, 1996) is a 50-item recognition memory task involving line drawings of everyday objects. All three trials (Trial 1, Trial 2, and Retention) were administered and scored according to the standard procedure detailed in the test manual. In addition, three CNS measures were developed that examined responses in three pairwise comparisons: Trial 1 and Trial 2 (TOMM-C1), Trial 1 and Retention (TOMM-C2), and Trial 2 and Retention (TOMM-C3). An Excel spreadsheet was developed to facilitate CNS scoring after entry of responses to items on all three TOMM trials. Pairs of responses were dichotomously scored as inconsistent or consistent. The sum of inconsistent responses was subtracted from 50 to produce a total CNS score following the same convention as the standard TOMM scores. With this scoring procedure, a CNS score of 50 could be obtained by getting all the items correct or by missing all of the items. Given that consideration of CNS would not be necessary in the latter case, differentiation of consistently correct or consistently incorrect responses was not necessary. For example, in a case involving scores of zero on Trial 2 and Retention, which would lead to a TOMM-C3 score of 50, there would be no need to examine CNS because the standard TOMM scores are so low that both norm-referenced and below-chance failure criteria are met.
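The pairwise scoring convention described above can be sketched as follows (a minimal illustration, not the authors' Excel implementation; item responses are assumed to be coded 1 = correct, 0 = incorrect):

```python
def tomm_cns(trial_a, trial_b):
    """Consistency (CNS) score for one pairwise TOMM comparison.

    Each trial is a sequence of 50 dichotomous item responses
    (1 = correct, 0 = incorrect). An item pair is inconsistent when
    it is answered differently across the two trials; the number of
    inconsistent pairs is subtracted from 50, following the same
    convention as the standard TOMM scores.
    """
    assert len(trial_a) == len(trial_b) == 50
    inconsistent = sum(1 for a, b in zip(trial_a, trial_b) if a != b)
    return 50 - inconsistent

# Identical response patterns yield the maximum score of 50, whether
# the items are consistently correct or consistently incorrect.
perfect = [1] * 50
all_missed = [0] * 50
print(tomm_cns(perfect, perfect))        # 50
print(tomm_cns(all_missed, all_missed))  # 50
print(tomm_cns(perfect, all_missed))     # 0
```

As noted in the text, the score of 50 produced by consistently incorrect responding is not a concern in practice, because such a protocol already meets the standard norm-referenced and below-chance failure criteria.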

Word Memory Test

The WMT (Green, 2002) is a computer-administered test of learning and performance validity involving a 40-item word list that is presented twice. After stimulus presentation, additional trials include IR, DR, Multiple Choice, Paired Associates (PA), Free Recall (FR), and Long-Delay FR. CNS is calculated based on IR and DR performance. The test was administered and scored in compliance with instructions provided in the test manual.

Adherence questionnaire

The adherence questionnaire included two Likert-scale items on which participants rated the extent to which they followed instructions and believed their efforts were successful on a scale from 1 (“not at all”) to 5 (“very much so”).

Procedure

Two data collection efforts occurred, one in 2007–2009 (n = 99) and the other in 2009–2011 (n = 120). Institutional review board approval was obtained before initiating recruitment and data collection. Participants provided informed consent after verification that they met inclusion criteria and then completed a demographic questionnaire. Next, participants were assigned to groups in a randomized manner (i.e., study packets containing group-specific instructions had been prepared in advance and randomly distributed according to the participant identification number). Research assistants collecting data were graduate students in clinical psychology and were blind to participant condition. The reading passage and quiz were administered as described above, and then participants read instructions that varied by group. Instructions for control participants asked that they provide optimal performance on all measures. Instructions for naïve simulators included a scenario involving legal proceedings related to a motor vehicle accident and a request that they feign symptoms due to the accident without more specific guidance on how to accomplish the task. Instructions for coached simulators included the same scenario used for naïve simulators with additional guidance provided on how to feign impairment and avoid detection. The participant instructions and coaching paradigm were adapted from other simulation studies (Martin, Hayes, & Gouvier, 1996). However, in order to protect test security, these instructions will not be described further.

After the provision of instructions, the TOMM, WMT, and other neuropsychological measures that were part of the larger research endeavor were administered in a standard order. Following the administration of all tests, participants completed the adherence questionnaire and were debriefed. Volunteers currently enrolled in introductory psychology courses earned research participation credit, and all who completed the study were compensated with a $10 gift card to a local store.

Results

Control, naïve simulator, and coached simulator groups were not significantly different with regard to gender, ethnicity, age, education, or handedness (Table 1). Scores on the quiz assessing comprehension of reading passages were not different by group, F(2, 216) = 0.53, p = .59. All participants reported moderate instruction adherence or better (i.e., ≥3 on the five-point scale) and moderate or better success at following the instructions (Table 1). Group differences were observed on all performance validity measures (Table 2). On post hoc analysis (Sidak), control participants outperformed both simulator groups. Coached simulators outperformed naïve simulators on WMT-IR, WMT-DR, and all standard TOMM trials. WMT-CNS was not different between simulator groups. Turning to TOMM CNS measures, group differences were noted on: TOMM-C1, F(2, 216) = 26.75, p < .001; TOMM-C2, F(2, 216) = 26.18, p < .001; and TOMM-C3, F(2, 216) = 21.48, p < .001. Post hoc analyses (Sidak) showed that the control participants scored higher on all three CNS measures than both simulator groups, which were not different from each other.

Table 1.

Demographic characteristics of the undergraduate sample

Variable Control (n = 73) Naïve (n = 73) Coached (n = 73) F/χ2-value p-value 
Age (M [SD]) 21.8 (6.3) 22.6 (8.0) 21.1 (4.5) 1.08 .34 
Education (M [SD]) 12.8 (1.1) 12.8 (1.1) 12.8 (1.2) 0.06 .94 
% Women 89.0 84.9 87.7 0.57 .75 
% Caucasian 76.7 86.1 79.5 2.17 .34 
% Right-handed 93.2 97.3 90.4 3.64 .46 
Reading quiz (M [SD]) 7.5 (1.3) 7.5 (1.3) 7.3 (1.4) 0.53 .59 
% Instructions 100.0 97.2 98.6 2.06 .36 
% Successful 100.0 93.1 93.2 5.28 .07 

Notes: Naïve = naïve simulators; Coached = coached simulators. Reading quiz = raw scores on nine-item measure; % Instructions = percentage of participants with moderate endorsement or higher of the statement, “I tried to follow the instructions I was given”; % Successful = percentage of participants with moderate endorsement or higher of the statement, “I was successful at producing the results asked of me in the instructions.”

Table 2.

Performance validity indicator scores by group in the undergraduate sample

Measure  1. Control: n, M (SD)  2. Naïve simulators: n, M (SD)  3. Coached simulators: n, M (SD)  F-value*  1 versus 2 (d)  1 versus 3 (d)
WMT-IR 73 98.7 (2.9)a 72 64.0 (32.7)b 73 81.7 (21.4)c 42.9 1.50 1.11 
WMT-DR 73 99.1 (2.0)a 72 65.1 (33.6)b 73 80.7 (20.3)c 41.0 1.43 1.28 
WMT-CNS 73 98.2 (3.4)a 72 72.6 (24.3)b 73 79.6 (17.2)b 42.5 1.48 1.50 
TOMM-T1 73 48.7 (2.2)a 73 38.3 (12.5)b 73 43.0 (8.2)c 26.0 1.16 0.95 
TOMM-T2 73 49.9 (0.5)a 73 39.2 (14.6)b 73 45.6 (8.5)c 22.1 1.04 0.71 
TOMM-R 73 49.9 (0.3)a 73 38.5 (15.1)b 73 44.5 (9.5)c 22.6 1.07 0.80 
TOMM-C1 73 48.6 (2.5)a 73 39.9 (9.5)b 73 42.5 (8.1)b 26.8 1.25 1.02 
TOMM-C2 73 48.6 (2.4)a 73 39.5 (10.1)b 73 42.0 (8.8)b 26.2 1.24 1.02 
TOMM-C3 73 49.8 (0.7)a 73 41.2 (10.8)b 73 43.9 (9.0)b 21.5 1.12 0.92 

Notes: Analysis of variance of performance validity measures by group. Cells sharing subscripts were not significantly different on post hoc analysis (Sidak). WMT = Word Memory Test; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency; TOMM = Test of Memory Malingering; T1 = Trial 1; T2 = Trial 2; R = Retention; C1 = Trial 1 versus Trial 2 Consistency; C2 = Trial 1 versus Retention Consistency; C3 = Trial 2 versus Retention Consistency.

*p < .001 for all F-values shown.

Receiver operating characteristic (ROC) analysis was used to examine the classification accuracy of TOMM CNS scores in the derivation sample. In differentiating naïve simulators from control participants, the area under the curve (AUC) of TOMM-C1 was 0.80 (95% confidence interval: 0.72–0.87). The AUC of TOMM-C2 was 0.79 (0.72–0.86), and that of TOMM-C3 was 0.74 (0.68–0.81). In differentiating coached simulators from control participants, the AUC of TOMM-C1 was 0.76 (0.69–0.84). The AUC of TOMM-C2 was 0.77 (0.69–0.84) and that of TOMM-C3 was 0.72 (0.65–0.78).

ROC curves of TOMM CNS scores were compared with the standard TOMM criteria using the rocgold procedure in Stata 9 (StataCorp, 2005). In differentiating control and naïve simulator participants, the standard TOMM criteria demonstrated AUC of 0.73, which was significantly lower than the AUC of TOMM-C1, χ2(1, N = 219) = 5.01, p = .02, and TOMM-C2, χ2(1, N = 219) = 4.25, p = .04, but not TOMM-C3, χ2(1, N = 219) = 0.58, p = .45. In differentiating control and coached simulator participants, the standard TOMM criteria demonstrated AUC of 0.66, which was significantly lower than the AUC of TOMM-C1, χ2(1, N = 219) = 7.96, p = .005, TOMM-C2, χ2(1, N = 219) = 8.93, p = .003, and TOMM-C3, χ2(1, N = 219) = 4.50, p = .03.

Additional ROC comparisons were conducted to examine the standard TOMM criteria and CNS scores in relation to the classification accuracy of the WMT. In differentiating control and naïve simulator participants, the AUC of the standard TOMM criteria was significantly lower than the AUC of the WMT (0.81), χ2(1, N = 218) = 9.40, p = .002. The AUC of the WMT was not significantly different from that of TOMM-C1, χ2(1, N = 218) = 0.06, p = .80, or TOMM-C2, χ2(1, N = 218) = 0.16, p = .69; the AUC of TOMM-C3 was significantly lower than that of the WMT, χ2(1, N = 218) = 4.50, p = .03. In differentiating control and coached simulator participants, the AUC of the standard TOMM criteria was significantly lower than the AUC of the WMT (0.76), χ2(1, N = 219) = 10.48, p = .001, but the AUC of TOMM-C1, TOMM-C2, and TOMM-C3 did not differ significantly from the WMT (p > .05 in all cases).

The standard TOMM criteria correctly identified all control participants (100% specificity), 45% of naïve simulators, and 33% of coached simulators. A TOMM-C1 cutoff (<45) correctly identified 90% of control participants, 51% of naïve simulators, and 41% of coached simulators. TOMM-C2 (<45) correctly identified 92% of control participants, 51% of naïve simulators, and 42% of coached simulators. TOMM-C3 (<48) correctly identified 96% of control participants, 47% of naïve simulators, and 38% of coached simulators. Among naïve simulators, CNS indices reduced the number of false negatives by four cases using TOMM-C1 and TOMM-C2 and one case using TOMM-C3. Among coached simulators, CNS indices reduced the number of false negatives by six, seven, and four cases using TOMM-C1, TOMM-C2, and TOMM-C3, respectively.

Study 2: Clinical Validation

Method

Participants

Consecutive clinical referrals to one of the authors (K.A.W.) were reviewed for cases that were administered the TOMM and the MSVT as part of the clinical neuropsychological evaluation. To lessen the chance of familiarity with the performance validity tests, and the associated chance that patients would have coached themselves on these measures using the Internet, only patients referred for an initial evaluation rather than a neuropsychological re-test were included in the study. Thus, the sample was presumably uncoached. The patients included in the present study overlap with those in a study examining the relationship of digit span variables to performance on the TOMM and the MSVT (Whitney, Shepard, & Davis, in press). The sample (N = 92) was 82% Caucasian and 91% men, with an average age of 48.6 years (SD = 13.5) and an average educational level of 12.5 years (SD = 2.6). No patients were diagnosed with mental retardation. Participants were categorized according to whether they passed (n = 63) or failed (n = 29) the MSVT (Green, 2004). Additional demographic information and the most common reasons for referral are provided in Table 3.

Table 3.

Demographic characteristics and presenting diagnoses in the VA sample

 Pass MSVT (n = 63) Fail MSVT (n = 29) t/χ2-value p-value 
Age (M [SD]) 48.8 (13.3) 48.1 (14.2) 0.23 .81 
Education (M [SD]) 13.0 (2.7) 11.6 (2.1) 2.34 .02 
% Women 4.8 17.2 3.90 .05 
% Caucasian 85.7 72.4 3.71 .16 
% Presenting diagnoses 
 Neurologic 30.1 24.2   
 Mild TBIa 27.0 31.0   
 Neurologic and psychiatric 17.5 27.6   
 Moderate to severe TBIa 12.7 10.3   
 Psychiatric only 11.1 6.9   
 None 1.6 0.0   

Notes: VA = Veterans Affairs; MSVT = Medical Symptom Validity Test; TBI = traumatic brain injury.

aTBI severity determined using primarily self-report in conjunction with the Mayo classification system (Malec, Brown, Leibson, Flaada, & Mandrekar, 2007).

Measures

The TOMM was administered as described above. CNS score cutoffs derived using the student sample were examined in the clinical sample. Cutoffs were revised as needed to maintain a conservative (i.e., ≤10%) false-positive rate.

Medical Symptom Validity Test

The MSVT (Green, 2004) is a computer-administered measure of learning and performance validity that is similar to the WMT but requires less administration time. After two presentations of a 20-word list, four trials are administered, resulting in five test scores: IR, DR, CNS, PA, and FR. MSVT scores were analyzed according to criteria outlined in the Advanced Interpretation (AI) Program (Green, 2010). The AI Program uses profile analysis to categorize test-takers into three groups: those who pass the MSVT, those who fail the MSVT due to poor effort, and those who fail the MSVT with a GMI profile. Using a variety of normative databases, the AI Program aims to reduce false-positive categorizations of poor effort on the MSVT by identifying individuals whose performances are similar to those of individuals with the severe memory impairment typical of the GMI profile. The main criteria that qualify an individual for the GMI profile are that they (a) score below cutoff on IR, DR, or CNS, (b) score an average of 20 points higher on the easy subtests than on the hard subtests, (c) exhibit no scores below chance, and (d) evidence clinical correlates of memory difficulty. Individuals identified as having GMI were excluded from the present study (n = 6) to improve the homogeneity of groups and to reduce the potential confound of false positives on the MSVT.
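The four GMI criteria just listed can be sketched schematically as a decision rule (an illustration only: the cutoff value and the boolean inputs are placeholders, not the AI Program's actual parameters, score scales, or normative databases):

```python
def gmi_profile(ir, dr, cns, easy_mean, hard_mean,
                below_chance, memory_correlates, cutoff=85.0):
    """Schematic sketch of the GMI profile criteria described in the
    text. The 85% cutoff is a placeholder, and criteria (c) and (d)
    are reduced to boolean flags rather than actual score checks and
    clinical judgment.
    """
    below_cutoff = min(ir, dr, cns) < cutoff       # criterion (a)
    easy_hard_gap = (easy_mean - hard_mean) >= 20  # criterion (b)
    # (c): no below-chance scores; (d): clinical memory correlates
    return (below_cutoff and easy_hard_gap
            and not below_chance and memory_correlates)
```

For example, a protocol with low recognition scores, a large easy-hard split, no below-chance performance, and corroborating clinical findings would meet the sketch's GMI criteria, whereas the same low scores without the easy-hard split would not.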

Procedure

Data were retrospectively collected from the files of 98 outpatients who were clinically referred to one of the authors (K.A.W.) for neuropsychological evaluation over approximately a 9-month period at a Department of Veterans Affairs (VA) Medical Center. An institutional review board approved the research protocol. The classification accuracy of standard and CNS indices on the TOMM was examined in groups defined by performance on the MSVT.

Statistical analyses

Statistical analyses were conducted using SPSS, Version 19 (SPSS, 2010). Initial analyses consisted of two-tailed independent-samples t-tests comparing age and education between persons failing and passing the MSVT. Alpha was set at 0.05 for all analyses. Effect sizes and confidence intervals were calculated using a computer program (Devilly, 2004). ROC curve analyses were used to evaluate the classification accuracy of TOMM variables in identifying MSVT failure. The Youden Index (Youden, 1950) was considered in choosing optimal cutoffs for the CNS measures. The Youden Index is calculated as follows: Youden Index = sensitivity + specificity − 1. The resulting values range from 0 to 1, with higher values indicating greater classification accuracy.
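The cutoff selection just described can be sketched as a search over candidate cutoffs that maximizes the Youden Index (the score lists below are hypothetical illustrations, not study data):

```python
def youden_optimal_cutoff(fail_scores, pass_scores):
    """Choose the cutoff maximizing the Youden Index (sens + spec - 1).

    A case is flagged as invalid when its score falls below the
    cutoff; fail_scores come from the criterion-failure group (here,
    MSVT failure) and pass_scores from the criterion-pass group.
    """
    best_cut, best_j = None, -1.0
    for cut in range(min(fail_scores + pass_scores), 52):
        sens = sum(s < cut for s in fail_scores) / len(fail_scores)
        spec = sum(s >= cut for s in pass_scores) / len(pass_scores)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical CNS scores for illustration:
fail = [30, 32, 33, 35, 36, 40, 44]
passed = [44, 45, 46, 47, 48, 49, 50, 50]
cut, j = youden_optimal_cutoff(fail, passed)
# With these illustrative scores, the optimal cutoff is <45 (J = 0.875).
```

In the sketch, ties are resolved in favor of the first (lowest) cutoff reached; the study instead reported both tied cutoffs for TOMM-C3 and selected between them on other grounds.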

Results

Participants who failed the MSVT (n = 29) did not differ from those who passed the MSVT (n = 63) in age or ethnicity (Table 3). The group failing the MSVT had a larger proportion of women, a difference at the threshold of statistical significance. There are no empirically identified gender effects on the TOMM, so this marginal finding was not examined further. Educational level was significantly higher among participants who passed the MSVT than among those who failed it (Table 3). Given the observed educational difference, additional analyses examined the relationship of education to TOMM performance. Years of education and the standard TOMM scores were not significantly correlated in the sample as a whole (r = .09 to .14, NS) or in the groups that passed (r = .09, NS) or failed (r = −.26 to −.20, NS) the MSVT. Because education was not related to TOMM scores, the group difference was not considered in additional analyses.

Participants who passed the MSVT scored significantly higher on all standard TOMM scores and CNS scores than those who failed the MSVT (Table 4). The largest effect size was observed for TOMM-C1. With regard to ROC analyses differentiating the groups who passed and failed the MSVT, the AUC of TOMM-C1 was 0.89 (95% confidence interval: 0.82–0.96). The AUC of TOMM-C2 was 0.88 (0.81–0.95) and that of TOMM-C3 was 0.85 (0.75–0.95). The standard TOMM criteria demonstrated an AUC of 0.80 (0.69–0.92), with sensitivity of 0.66 and specificity of 0.95.

Table 4.

Performance validity indicator scores by group in the VA sample

Measure Pass MSVT (M [SD]) Fail MSVT (M [SD]) t d 95% CI 
TOMM-T1 45.1 (4.4) 33.7 (8.9) 8.5* 1.68 1.17–2.18 
TOMM-T2 49.2 (2.2) 38.8 (9.9) 8.0* 1.45 0.96–1.94 
TOMM-R 49.1 (2.2) 39.2 (9.4) 8.0* 1.45 0.97–1.94 
TOMM-C1 44.7 (5.1) 33.4 (7.8) 8.2* 1.71 1.20–2.21 
TOMM-C2 44.6 (4.9) 33.7 (7.6) 8.3* 1.70 1.20–2.21 
TOMM-C3 48.7 (3.2) 37.8 (8.7) 8.8* 1.67 1.17–2.17 

Notes: Performance validity scores of participants passing (n = 63) and failing (n = 29) the MSVT. VA = Veterans Affairs; MSVT = Medical Symptom Validity Test; TOMM = Test of Memory Malingering; T1 = Trial 1; T2 = Trial 2; R = Retention; C1 = Trial 1 versus Trial 2 Consistency; C2 = Trial 1 versus Retention Consistency; C3 = Trial 2 versus Retention Consistency.

*p < .001.

The CNS cutoffs identified in the derivation sample showed elevated false-positive rates in the clinical sample, with specificity values of 0.68, 0.67, and 0.84 for TOMM-C1, TOMM-C2, and TOMM-C3, respectively. Sensitivity and specificity for a range of CNS score cutoffs are shown in Table 5. The Youden Index was used to identify optimal CNS cutoffs in the clinical sample. For TOMM-C1, the optimal cutoff (<37) yielded sensitivity of 0.69 and specificity of 0.92. For TOMM-C2, the optimal cutoff (<37) yielded sensitivity of 0.66 and specificity of 0.90. For TOMM-C3, the Youden Index was equivalent at two cutoffs (<47 and <46); using <46, sensitivity was 0.72 and specificity was 0.92. In general, the CNS scores showed sensitivities and specificities very similar to those of the standard TOMM scores (Table 6).

Table 5.

Sensitivity and specificity of CNS variables by sample

Variable  Undergraduate: Sens (NS), Sens (CS), Spec (Con)  VA: Sens (fail MSVT), Spec (pass MSVT)
TOMM-C1 
 <50 0.81 0.84 0.51 1.00 0.08 
 <48 0.68 0.53 0.84 1.00 0.37 
 <46 0.55 0.45 0.89 0.97 0.62 
 <45 0.51 0.41 0.90 0.86 0.68 
 <44 0.48 0.40 0.96 0.83 0.71 
 <42 0.47 0.37 0.97 0.76 0.78 
 <40 0.42 0.33 0.99 0.72 0.82 
 <38 0.40 0.29 0.99 0.69 0.90 
<37 0.38 0.29 0.99 0.69 0.92 
 <36 0.36 0.26 1.00 0.66 0.92 
 <34 0.33 0.18 1.00 0.59 0.95 
TOMM-C2 
 <50 0.81 0.84 0.48 1.00 0.08 
 <48 0.70 0.56 0.84 1.00 0.33 
 <46 0.56 0.45 0.89 0.93 0.60 
 <45 0.51 0.42 0.92 0.83 0.67 
 <44 0.48 0.41 0.96 0.79 0.70 
 <42 0.45 0.41 0.97 0.79 0.79 
 <40 0.41 0.34 0.99 0.72 0.82 
 <38 0.38 0.30 0.99 0.69 0.89 
<37 0.37 0.27 1.00 0.66 0.90 
 <36 0.36 0.23 1.00 0.62 0.90 
 <34 0.33 0.21 1.00 0.55 0.95 
TOMM-C3 
 <50 0.55 0.51 0.89 0.83 0.63 
 <48 0.47 0.38 0.96 0.76 0.84 
 <47 0.44 0.38 0.99 0.76 0.89 
<46 0.44 0.34 1.00 0.72 0.92 
 <45 0.41 0.33 1.00 0.69 0.92 
 <44 0.41 0.33 1.00 0.69 0.95 

Notes: Optimal cut scores based on VA sample are bolded. CNS = Consistency; VA = Veterans Affairs; Sens = sensitivity; Spec = specificity; NS = naïve simulator; CS = coached simulator; Con = control; MSVT = Medical Symptom Validity Test; TOMM = Test of Memory Malingering; C1 = Trial 1 versus Trial 2 Consistency; C2 = Trial 1 versus Retention Consistency; C3 = Trial 2 versus Retention Consistency.

Table 6.

Classification accuracy of TOMM and CNS variables

Variable  Undergraduate: Sens (NS), Sens (CS), Spec (Con)  VA: Sens (fail MSVT), Spec (pass MSVT)
TOMMa 0.45 0.33 1.00 0.66 0.95 
Trial 1a 0.10 0.03 1.00 0.00 1.00 
Trial 2a 0.42 0.25 1.00 0.59 0.95 
Retentiona 0.44 0.33 1.00 0.62 0.95 
TOMM-C1 <37b 0.38 0.29 0.99 0.69 0.92 
TOMM-C2 <37b 0.37 0.27 1.00 0.66 0.90 
TOMM-C3 <46b 0.44 0.34 1.00 0.72 0.92 

Notes: Classification accuracy of standard TOMM and CNS variables by group. CNS = Consistency; TOMM = Test of Memory Malingering; VA = Veterans Affairs; Sens = sensitivity; Spec = specificity; NS = naïve simulator; CS = coached simulator; Con = control; MSVT = Medical Symptom Validity Test; C1 = Trial 1 versus Trial 2 Consistency; C2 = Trial 1 versus Retention Consistency; C3 = Trial 2 versus Retention Consistency.

aUsing cut scores detailed in test manual.

bCNS cut scores from present study.

Discussion

This study presented findings from derivation and clinical validation studies of CNS scores on the TOMM. An adequately sized sample of undergraduate participants provided the basis for deriving TOMM CNS scores. In two phases of data collection, participants were randomly assigned to control or simulator conditions, which yielded groups comparable in demographic characteristics. Groups were also similar on measures of protocol adherence.

As expected, control participants outperformed both naïve and coached simulators on multiple performance validity measures and on TOMM CNS scores. Consistent with previous research, the findings demonstrated the vulnerability of performance validity measures to coaching: participants in the coached simulator group scored higher than the naïve simulator group on most standard performance validity measures. Effect sizes also highlight the influence of coaching, as effect sizes between control and coached simulators were generally smaller than those between control and naïve simulators. Notable exceptions were observed on the CNS measures. On the WMT, the naïve and coached simulator groups did not differ on CNS despite differing on IR and DR. On the TOMM, coached simulators outperformed naïve simulators on all three standard trials, but the two simulator groups did not differ on any of the TOMM CNS scores. Thus, consideration of CNS appears to reduce the influence of coaching on performance validity indicators.
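The trial-to-trial CNS indices discussed above can be made concrete with a short sketch. Assuming item-level accuracy scores are available for two 50-item TOMM trials, a consistency index of this kind counts how many items were answered the same way (correct or incorrect) on both trials; the function below is an illustrative reconstruction, not the study's published scoring procedure.

```python
def consistency_index(trial_a, trial_b):
    """Number of items answered the same way (correct or incorrect) on both trials.

    trial_a, trial_b: sequences of 0/1 item scores (1 = correct) for the same
    50 TOMM items, in the same order. Returns an integer from 0 to 50.
    """
    if len(trial_a) != len(trial_b):
        raise ValueError("trials must cover the same items")
    return sum(a == b for a, b in zip(trial_a, trial_b))

# A valid performer who is correct on nearly every item on both trials is
# necessarily consistent; variable responding across trials lowers the index
# even when the per-trial totals look similar.
t1 = [1] * 45 + [0] * 5           # 45/50 correct on Trial 1
t2 = [1] * 45 + [0, 1, 1, 0, 0]   # 48/50 correct, but the missed items shift
print(consistency_index(t1, t2))  # 48 of 50 items answered consistently
```

The key property is that a simulator who fails different items on each trial can produce two unremarkable trial totals yet a low consistency count, which is what makes such indices hard to defeat by coaching on how high to score.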

These findings in the undergraduate sample provide a robust demonstration of the utility of CNS scores in a research design with high internal validity. A limitation of simulation studies, however, is reduced external validity. To this end, the CNS scores were also examined in a mixed clinical sample of veterans referred for neuropsychological evaluation. With the VA participants grouped by performance on a free-standing performance validity measure, the group that failed the MSVT performed worse than the group that passed on both the standard TOMM scores and the CNS scores. Of all the TOMM scores, the largest between-group effect sizes were shown by the TOMM CNS measures C1 (Trial 1 versus Trial 2) and C2 (Trial 1 versus Retention). The standard TOMM cutoffs showed high specificity and adequate sensitivity in the clinical sample. However, TOMM CNS cutoffs that showed acceptable specificity (i.e., ≤10% false positives) in the undergraduate sample had to be lowered to achieve similar specificity in the VA sample. Nonetheless, using cutoffs with specificities ≥0.90 in the VA sample, TOMM CNS scores yielded acceptable sensitivities (0.66–0.72) and specificities (0.90–0.92) in the presumably uncoached VA sample, compared with a sensitivity of 0.66 and a specificity of 0.95 for the standard TOMM failure criteria.
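The sensitivity and specificity values reported above follow from a "score below cutoff = fail" rule applied to each criterion group. The study's exact cut-selection procedure is not restated here; a common approach, sketched below as an assumption, screens candidate cutoffs for a minimum specificity (reflecting the convention of prioritizing a low false-positive rate) and ranks the survivors by Youden's J (Youden, 1950). All function names are illustrative.

```python
def classification_stats(fail_group, pass_group, cut):
    """Sensitivity/specificity of the rule 'score < cut means invalid'.

    fail_group: scores of examinees failing the criterion measure (e.g., MSVT)
    pass_group: scores of examinees passing the criterion measure
    """
    sens = sum(s < cut for s in fail_group) / len(fail_group)
    spec = sum(s >= cut for s in pass_group) / len(pass_group)
    return sens, spec

def best_cut(fail_group, pass_group, min_spec=0.90):
    """Among cutoffs meeting the specificity floor, maximize Youden's J."""
    candidates = []
    for cut in range(30, 51):  # plausible range for 0-50 item scores
        sens, spec = classification_stats(fail_group, pass_group, cut)
        if spec >= min_spec:
            candidates.append((sens + spec - 1, cut, sens, spec))
    return max(candidates)  # (J, cut, sens, spec); ties go to the higher cut
```

Run on hypothetical score lists, best_cut returns the cutoff whose sensitivity/specificity pair maximizes J subject to the specificity floor, mirroring how each row of the cut-score table pairs a cutoff with its classification statistics.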

In the presumably uncoached clinical sample, TOMM CNS indices did not demonstrate a substantial advantage over the standard TOMM failure criteria in boosting sensitivity, with gains of 0%–6% over the standard TOMM scores. In slight contrast, in the derivation sample, the CNS scores increased sensitivity among coached simulators by 5%–9%. As noted above, in the clinical sample, all standard TOMM scores and all TOMM CNS scores were significantly higher among persons passing versus failing the MSVT. Similarly, in the derivation sample, all standard TOMM scores, along with IR and DR from the WMT, were significantly higher in coached than in naïve simulators. In contrast, coached and naïve simulators did not significantly differ on any TOMM CNS score or on the WMT CNS score, suggesting that CNS scores may be more resistant to the effects of coaching. These findings suggest that CNS scores may be more helpful than the standard performance validity scores in cases involving sophisticated and potentially coached examinees.

Different approaches to defining the groups in the two studies may have contributed to these observations. Whereas the derivation study employed the WMT and defined groups through random assignment and performance instructions, the clinical study defined groups using the MSVT. Criterion group validation studies are, of course, limited by the accuracy of the procedure used to define the groups (Frederick, 2000). Although the results might have differed had the WMT been used as the criterion, the available research on the MSVT generally supports its similarity to the WMT, especially in false-positive rate (Howe et al., 2007; Singhal et al., 2009). Given the convention in performance validity research of prioritizing a low false-positive rate, specificity is the classification statistic of utmost importance. Overall, these findings suggest that, in settings where examinee coaching is possible, it may be beneficial to examine CNS scores in addition to the standard TOMM cut scores. Additional research and supportive cross-validation of CNS indices are necessary prior to their clinical use.

Strengths of the present study include the combination of research designs that together have high internal and external validity. While simulation studies might have reduced applicability to general clinical populations, they serve an important role in the initial phase of performance validity measure development due to the high internal validity that can be achieved using a randomized experimental design. The fact that our CNS measures were also examined in a clinical sample provides evidence of the external validity of these measures that cannot be determined with simulation studies alone. Together, the combination of simulation and clinical studies may represent a model for future research aiming to study novel performance validity measures.

Limitations of the present study include the fact that the findings from the clinical sample may apply best in VA settings, with potentially reduced generalizability to non-VA settings. The clinical sample was also almost entirely men, suggesting that additional research using a gender-balanced sample would be helpful. Future research might also explore the individuals who met criteria for GMI on the MSVT; although this subset was too small to include as a separate group in this study, characterizing these individuals on other performance validity measures would be of clinical interest. Another potentially useful goal for future research is examining the role of TOMM CNS scores in borderline TOMM failures (e.g., Trial 2 scores of 43 or 44); the present sample was not large enough to identify such a group. Finally, it may be helpful to examine the classification accuracy of combinations of CNS and standard TOMM scores, especially combinations that require less administration time and would therefore be of clinical interest (e.g., Trial 1, Trial 2, and C1).

Funding

This work was partially supported by a Faculty Summer Research Grant from the University of Indianapolis to J.R.W.

Conflict of Interest

None declared.

Acknowledgements

A number of individuals assisted with data collection including Stephanie Dreschler, Michelle Leslie, Zak Michaels, Claire Patterson, Crystal K. Ramos, Kara Shaneyfelt, Carolyn Sherer, Michelle Stone, and Arial Treankler.

References

Armistead-Jehle, P., & Gervais, R. O. (2011). Sensitivity of the Test of Memory Malingering and the Nonverbal Medical Symptom Validity Test: A replication study. Applied Neuropsychology, 18(4), 284–290.
Ashendorf, L., Constantinou, M., & McCaffrey, R. J. (2004). The effects of depression and anxiety on the TOMM in community dwelling older adults. Archives of Clinical Neuropsychology, 19, 125–130.
Axelrod, B. N., & Schutte, C. (2011). Concurrent validity of three forced-choice measures of symptom validity. Applied Neuropsychology, 18, 27–33.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. Archives of Clinical Neuropsychology, 20, 419–426.
Carone, D. A. (2008). Children with moderate/severe brain damage/dysfunction outperform adults with mild-to-no brain damage on the Medical Symptom Validity Test. Brain Injury, 22, 960–971.
Davis, J. J., Ramos, C. K., Sherer, C. M., Bertram, D. M., & Wall, J. R. (2009, November). Use of consistency on the TOMM to assess effort. Poster session presented at the 29th annual meeting of the National Academy of Neuropsychology, New Orleans, LA. Abstract published in Archives of Clinical Neuropsychology, 24, 456.
Dean, A. C., Victor, T. L., Boone, K. B., Philpott, L. M., & Hess, R. A. (2009). Dementia and effort test performance. The Clinical Neuropsychologist, 23, 133–152.
Devilly, G. J. (2004). Effect size generator for Windows: Version 2.3. Australia: Centre for Neuropsychology, Swinburne University.
Frederick, R. I. (2000). Mixed group validation: A method to address the limitations of criterion group validation in research on malingering detection. Behavioral Sciences and the Law, 18, 693–718.
Gervais, R. O., Rohling, M. L., Green, P., & Ford, W. (2004). A comparison of WMT, CARB, and TOMM failure rates in non-head injury disability claimants. Archives of Clinical Neuropsychology, 19, 475–487.
Gorissen, M., Sanz, J. C., & Schmand, B. (2005). Effort and cognition in schizophrenia patients. Schizophrenia Research, 78, 199–208.
Green, P. (2002). Green's Word Memory Test for Windows user's manual. Edmonton, Alberta: Green's Publishing.
Green, P. (2004). Medical Symptom Validity Test for Windows: User's manual and program. Edmonton, Alberta: Green's Publishing.
Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18, 43–68.
Green, P. (2010). Advanced Interpretation Program for Windows user's manual. Edmonton, Alberta: Green's Publishing.
Green, P. (2011). Comparison between the Test of Memory Malingering (TOMM) and the Nonverbal Medical Symptom Validity Test in adults with disability claims. Applied Neuropsychology, 18(1), 18–26.
Green, P., Montijo, J., & Brockhaus, R. (2011). High specificity of the Word Memory Test and Medical Symptom Validity Test in groups with severe verbal memory impairment. Applied Neuropsychology, 18, 86–94.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6(3), 218–224.
Gunner, J., Miele, A., Lynch, J., & McCaffrey, R. (2010, October). The incremental utility of a novel Test of Memory Malingering consistency index in assessing effort among litigants. Poster presented at the 30th annual meeting of the National Academy of Neuropsychology, Vancouver, Canada. Abstract published in Archives of Clinical Neuropsychology, 25, 501.
Gunner, J., Miele, A., Lynch, J., & McCaffrey, R. (2012). The Albany consistency index for the Test of Memory Malingering. Archives of Clinical Neuropsychology, 27, 1–9.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Hilsabeck, R. C., Gordon, S. N., Hietpas-Wilson, T., & Zartman, A. L. (2011). Use of Trial 1 of the Test of Memory Malingering (TOMM) as a screening measure of effort: Suggested discontinuation rules. The Clinical Neuropsychologist, 25, 1228–1238.
Howe, L. L. S., Anderson, A. M., Kaufman, D. A. S., Sachs, B. C., & Loring, D. W. (2007). Characterization of the Medical Symptom Validity Test in evaluation of clinically referred memory disorders clinic patients. Archives of Clinical Neuropsychology, 22, 753–761.
Iverson, G. L., Le Page, J., Koehler, B. E., Shojania, K., & Badii, M. (2007). Test of Memory Malingering (TOMM) scores are not affected by chronic pain or depression in patients with fibromyalgia. The Clinical Neuropsychologist, 21(3), 532–546.
Larrabee, G. J. (1990). Cautions in the use of neuropsychological evaluation in legal settings. Neuropsychology, 4, 239–247.
Malec, J. F., Brown, A. W., Leibson, C. L., Flaada, J. T., & Mandrekar, J. N. (2007). The Mayo classification system for traumatic brain injury severity. Journal of Neurotrauma, 24, 1417–1424.
Martin, R. C., Hayes, J. S., & Gouvier, W. D. (1996). Differential vulnerability between postconcussion self-report and objective malingering tests in identifying simulated mild head injury. Journal of Clinical and Experimental Neuropsychology, 18, 265–275.
Rapport, L. J., Farchione, T. J., Coleman, R. D., & Axelrod, B. N. (1998). Effects of coaching on malingered motor function profiles. Journal of Clinical and Experimental Neuropsychology, 20(1), 89–97.
Rees, L. M., Tombaugh, T. N., Gansler, D. A., & Moczynski, N. P. (1998). Five validation experiments of the Test of Memory Malingering (TOMM). Psychological Assessment, 10, 10–20.
Rienstra, A., Spaan, P. E. J., & Schmand, B. (2009). Reference data for the Word Memory Test. Archives of Clinical Neuropsychology, 24, 255–262.
Ruiz, M. A., Drake, E. B., Glass, A., Marcotte, D., & Van Gorp, W. G. (2002). Trying to beat the system: Misuse of the Internet to assist in avoiding the detection of psychological symptom dissimulation. Professional Psychology: Research and Practice, 33, 294–299.
Schutte, C., Millis, S., Axelrod, B., & VanDyke, S. (2011). Derivation of a composite measure of embedded symptom validity indices. The Clinical Neuropsychologist, 25(3), 454–462.
Singhal, A., Green, P., Ashaye, K., Shankar, K., & Gill, D. (2009). High specificity of the Medical Symptom Validity Test in patients with very severe memory impairment. Archives of Clinical Neuropsychology, 24, 721–728.
SPSS. (2010). SPSS for Windows, Rel. 19.0.0. Chicago: Author.
StataCorp. (2005). Stata Statistical Software: Release 9. College Station, TX: Author.
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of Clinical Neuropsychology, 19(3), 455–464.
Tombaugh, T. N. (1996). Test of Memory Malingering (TOMM). North Tonawanda, NY: Multi-Health Systems.
Tombaugh, T. N. (1997). The Test of Memory Malingering (TOMM): Normative data from cognitively intact and cognitively impaired individuals. Psychological Assessment, 9, 260–268.
Whitney, K. A., Shepard, P. H., & Davis, J. J. WAIS-IV Digit Span variables: Are they valuable for use in predicting TOMM and MSVT failure? Applied Neuropsychology.
Whitney, K. A., Shepard, P. H., Williams, A. L., Davis, J. J., & Adams, K. M. (2009). The Medical Symptom Validity Test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: A preliminary study. Archives of Clinical Neuropsychology, 24, 145–152.
Yantz, C. L., & McCaffrey, R. J. (2007). Social facilitation effect of examiner attention or inattention to computer-administered neuropsychological tests: First sign that the examiner may affect results. The Clinical Neuropsychologist, 21, 663–671.
Youden, W. J. (1950). Index for rating diagnostic tests. Cancer, 3, 32–35.
Youngjohn, J. R. (1995). Confirmed attorney coaching prior to neuropsychological evaluation. Assessment, 2(2), 279–283.