Abstract

It is becoming increasingly important to study, use, and promote the utility of measures designed to detect non-compliance with testing (i.e., poor effort, symptom non-validity, response bias) as part of neuropsychological assessments with children and adolescents. Several measures have evidence supporting their use in pediatrics, but there is a paucity of published support for the Victoria Symptom Validity Test (VSVT) in this population. The purpose of this study was to examine performance on the VSVT in a sample of pediatric patients with known neurological disorders. The sample consisted of 100 consecutively referred children and adolescents between the ages of 6 and 19 years (mean = 14.0, SD = 3.1) with various neurological diagnoses. On the VSVT total items, 95% of the sample had performance in the “valid” range, with 5% deemed “questionable” and 0% deemed “invalid.” On easy items, 97% were “valid,” 2% were “questionable,” and 1% were “invalid.” For difficult items, 84% were “valid,” 16% were “questionable,” and 0% were “invalid.” For those patients given two effort measures (i.e., VSVT and Test of Memory Malingering; n = 65), none was identified as having poor test-taking compliance on both measures. VSVT scores were significantly correlated with age, intelligence, processing speed, and functional ratings of daily abilities (attention, executive functioning, and adaptive functioning), but not with objective performance on measures of sustained attention, verbal memory, or visual memory. The VSVT has potential to be used in neuropsychological assessments with pediatric patients.

Introduction

Determination of whether a person is engaged in and/or complying with the demands of test taking, often referred to as response bias, effort, or symptom validity testing (see Slick & Sherman, 2012, for a discussion on terminology), has been advocated as an important part of neuropsychological assessments by the National Academy of Neuropsychology (NAN; Bush et al., 2005) and the American Academy of Clinical Neuropsychology (AACN; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Although the focus of these position papers was not necessarily geared toward a specific patient age group (i.e., adult vs. children), Heilbronner and colleagues (2009) acknowledged that pediatric data are lacking and recommended that “effort measures and embedded validity indicators should be applied to pediatric samples” (p. 1107) as part of the future scientific investigations for neuropsychologists.

Unlike the adult literature, evidence supporting the utility of effort tests in pediatric neuropsychological assessments remains in its infancy. There is a growing body of evidence that healthy children and adolescents can readily pass effort tests (Constantinou & McCaffrey, 2003; Rienstra, Spaan, & Schmand, 2010), that these measures can be used with individuals with neurological disorders (Carone, 2008; Donders, 2005; Donders & Boonstra, 2007; Green & Flaro, 2003; Green, Flaro, & Courtney, 2009; Kirk, Harris, Hutaff-Lee, Koelemay, Dinkins, & Kirkwood, 2011; Kirkwood & Kirk, 2010; MacAllister, Nakhutina, Bender, Karantzoulis, & Carlson, 2009), and that symptom validity tests can identify children and adolescents who are asked to simulate cognitive impairments (Blaskewitz, Merten, & Kathmann, 2008; Frazier, Frazier, Busch, Kerwood, & Demaree, 2008; Gunn, Batchelor, & Jones, 2010; Nagle, Everhart, Durham, McCammon, & Walker, 2006). One of the most comprehensive reviews on effort testing in pediatrics to date comes from Kirkwood (2012), who summarizes performance across several tests and supports using these measures with pediatric patients to help ensure that a patient is adhering to the test-taking process. Despite the advances in pediatric research over the past decade, there is still much to be studied with children and adolescents.

One measure of test-taking compliance, the Victoria Symptom Validity Test (VSVT; Slick, Hopp, Strauss, & Thompson, 1997), has not yet been studied regarding its utility in children and adolescents. To date, only studies with adult samples have been published, and even the VSVT manual only provides case examples involving young adults (i.e., three 21-year-olds) but not children. In published research to date, the VSVT has been shown to be insensitive to profound memory impairment in several adult case examples (Slick et al., 2003). In adult patients being evaluated prior to undergoing epilepsy surgery, the majority had “valid” profiles on the VSVT difficult items (i.e., 100% in Grote et al., 2000; 91.7% in Loring, Lee, & Meador, 2005). Macciocchi, Seel, Alderson, and Godsall (2006) reported that 100% of adult patients with a severe traumatic brain injury obtained “valid” profiles on the VSVT. In a heterogeneous adult and older adult sample, Loring, Larrabee, Lee, and Meador (2007) reported an overall pass rate of 89% on the difficult items from the VSVT. Furthermore, across these studies, the rates of “invalid” profiles in adult clinical samples (i.e., below-chance performance, based on binomial probabilities) are extremely low, ranging from 0% (Grote et al., 2000; Loring et al., 2005; Macciocchi et al., 2006; Slick et al., 2003) to only 0.3% (Loring et al., 2007). In comparison, “invalid” profiles among persons deemed to be at higher risk due to involvement in medicolegal litigation or workers' compensation were found in 5.7% (Grote et al., 2000) to 20% (Loring et al., 2007) of samples.

Studying VSVT performance in pediatrics is an important step in expanding the utility of this measure into an age group with limited overall empirical evidence (Heilbronner et al., 2009). There are some practical reasons to study VSVT performance in children and adolescents: first, this measure has demonstrated utility in adults, which means that it has potential to be a useable measure in pediatrics; and second, it provides a testing paradigm that differs from other current measures of test validity, which means that it can be added to a repertoire of tests to provide an alternative method for detecting poor compliance. As part of determining population-specific performances (Bianchini, Mathias, & Greve, 2001), it is important to demonstrate the utility of measures of test-taking non-compliance in pediatric patients with known neurological disorders. This will allow clinicians to use these measures with increased confidence in patient samples and in those situations where poor compliance with test-taking demands is suspected. The purpose of this study is to examine performance on the VSVT in pediatric patients with neurological disorders. It is hypothesized that children and adolescents with neurological disorders (a) will pass the VSVT at rates similar to adults and to pediatric samples on other effort measures (e.g., ≥95% pass rate on the overall indicator), (b) will perform better (more items correct) on the easy items than on the difficult items, and (c) will show significant correlations between some cognitive abilities and effort test performance (i.e., based on previous studies that have demonstrated this finding with the VSVT [Loring et al., 2005; Macciocchi et al., 2006] and the Test of Memory Malingering [TOMM; MacAllister et al., 2009]).

Methods

Participants

Participants included 100 consecutively referred children and adolescents between the ages of 6 and 19 who underwent neuropsychological assessments at a tertiary care hospital. Inclusion criteria for this study were a diagnosis provided by the primary neurologist and administration of the VSVT as part of their neuropsychological assessment. Each participant in this study provided informed consent to have their clinical data included in this research study, with those under the age of 18 having their legal guardian provide signed consent and the child providing assent. The consent forms and process used for this research were granted ethical clearance by the University of Calgary Research Ethics Board.

Measures

The VSVT (Slick et al., 1997) is a computerized forced-choice recognition test. For information on the test format, the reader is referred to the VSVT technical manual (Slick et al., 1997). Scores generated for the VSVT include the number of items correct for the easy trials, the difficult trials, and overall (total). Classification of “valid,” “questionable,” or “invalid” profiles is based upon binomial probabilities, which reflect the likelihood of obtaining a given score by chance alone (i.e., when randomly guessing between two options with p = 0.5). Additional scores, such as performance across the blocks of trials, response latencies, and left/right preference, are also provided in the computer-generated printout.
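To make the binomial classification logic concrete, the sketch below shows how a forced-choice score could be classified from the probability of obtaining it by random guessing (p = 0.5). This is a generic illustration only, not the VSVT scoring software; the item count (24 per difficulty level) and the .05 threshold for "significantly above/below chance" are assumptions made for the example.

```python
# Illustrative sketch only; not the VSVT scoring software. Assumes 24 items
# per difficulty level and a .05 threshold for "significantly above/below
# chance" -- both are assumptions for this example.
from scipy.stats import binom

def classify_forced_choice(correct: int, n_items: int = 24, alpha: float = 0.05) -> str:
    p_below = binom.cdf(correct, n_items, 0.5)      # P(score <= observed) under guessing
    p_above = binom.sf(correct - 1, n_items, 0.5)   # P(score >= observed) under guessing
    if p_below < alpha:
        return "invalid"        # significantly below chance
    if p_above < alpha:
        return "valid"          # significantly above chance
    return "questionable"       # neither above nor below chance

print(classify_forced_choice(21))  # well above chance (21/24) -> "valid"
print(classify_forced_choice(11))  # near chance (11/24)       -> "questionable"
```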

Several other measures were also given to this sample and are presented in this study in order to (a) characterize the level of cognitive impairment found in this clinical sample and (b) provide a means for better understanding the relation between cognitive problems and VSVT performance. A substantial proportion of the sample (n = 65) received another measure of test-taking compliance, the TOMM (Tombaugh, 1996). The TOMM was selected as a second measure of test-taking compliance in these pediatric patients because of the existing literature supporting its use with children and adolescents with various, and often quite substantial, neurological disorders (Brooks, Sherman, & Krol, 2012; Constantinou & McCaffrey, 2003; Donders, 2005; Kirk et al., 2011; Kirkwood, 2012; MacAllister et al., 2009). Readers are referred to the technical manual for more information on the TOMM; the classification of performance was based on achieving scores above the established cutoff from the technical manual (Tombaugh, 1996) and previous research with pediatric neurology patients (Brooks, Sherman, & Krol, 2012).

In addition to the effort tests, patients were also administered several neurocognitive measures across multiple domains of functioning (the version for most tests is based on the age of the participant). The additional domains assessed included: intellectual abilities (Full-Scale IQ [FSIQ] from the Wechsler Intelligence Scale for Children, Fourth Edition [WISC-IV; Wechsler, 2003] or from the Wechsler Adult Intelligence Scale, Third/Fourth Edition [WAIS-III/-IV; Wechsler, 1997a, 2008]); sustained attention (Continuous Attention Test total hits; Seidel & Joschko, 1991); processing speed (Processing Speed Index from the WISC-IV, the WAIS-III, or the WAIS-IV; Wechsler, 1997a, 2003, 2008); memory for a list of words (Long Delay Free Recall from the California Verbal Learning Test, Children's Version [CVLT-C; Delis, Kramer, Kaplan, & Ober, 1994] or the CVLT-II [Delis, Kramer, Kaplan, & Ober, 2000]), stories (Stories Delayed from the Children's Memory Scale [Cohen, 1997] or Logical Memory II from the Wechsler Memory Scale, Third Edition [WMS-III; Wechsler, 1997b]), or faces (Faces Delayed subtest from the Children's Memory Scale [Cohen, 1997] or the WMS-III [Wechsler, 1997b]); day-to-day attention skills (Attention Deficit Hyperactivity Disorder [ADHD] Rating Scale-IV inattention subscale; DuPaul, Power, Anastopoulos, & Reid, 1998); day-to-day executive functions (Behavior Rating Inventory of Executive Function global executive composite [BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000]); and day-to-day adaptive functioning (Scales of Independent Behavior, Revised [SIB-R; Bruininks, Woodcock, Weatherman, & Hill, 1996]). Because the data used for this study were derived from clinical assessments, many tests (including the TOMM) do not have the full sample size.

Analyses

The primary outcome for this study was descriptive, based upon performance on the VSVT. The determination of performance (i.e., “valid,” “questionable,” or “invalid”) was based on the interpretive guidelines provided in the VSVT manual, and subsequently on the VSVT computer-generated printouts. Examinations of the cognitive abilities, performance on another measure of test-taking compliance (TOMM), and correlations between effort testing and demographics/cognition were also included as secondary analyses. Because of the skewed nature of effort test scores, Spearman's rho (ρ) was used for correlations involving the number of correct items on the VSVT. Correlations for response latencies, however, involved Pearson's r correlations.
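As a minimal illustration of this analytic choice (not the study's actual code or data), the sketch below computes a Spearman correlation for the skewed, ceiling-limited counts of correct items and a Pearson correlation for the response latencies; the variable names and simulated values are placeholders.

```python
# Placeholder data and variable names; shown only to illustrate the two
# correlation types described above (Spearman for skewed correct-item
# counts, Pearson for response latencies).
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
age = rng.uniform(6, 19, size=100)                                            # years
correct_total = np.clip(np.round(28 + age + rng.normal(0, 4, 100)), 0, 48)    # ceiling at 48 items
latency_total = np.clip(8.0 - 0.3 * age + rng.normal(0, 1, 100), 0.5, None)   # seconds

rho, p_rho = spearmanr(age, correct_total)   # rank-based, robust to skew/ceiling effects
r, p_r = pearsonr(age, latency_total)        # latencies treated as roughly continuous/normal
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}); Pearson r = {r:.2f} (p = {p_r:.3f})")
```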

To better understand whether demographic and/or cognitive variables predict performance on the easy and difficult VSVT items, two stepwise regression analyses were conducted. The variables selected for the regression analyses were those that correlated significantly with either the number of correct easy items or the number of correct difficult items on the VSVT. The intercorrelations between variables were examined to ensure that there was no multicollinearity (ρ's < .60). The stepwise regression analyses allowed for inclusion of those variables that added a significant amount of variance to the model; therefore, the final models only included statistically significant predictor variables (p < .05), and R2 indicates the amount of variance accounted for in performance on either the easy items or the difficult items.
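The sketch below is a generic re-implementation of forward stepwise selection with a p < .05 entry criterion and the multicollinearity screen described above, assuming the predictors and VSVT scores live in a pandas data frame with hypothetical column names; it is not the software actually used for the study.

```python
# Generic forward stepwise selection sketch (assumed column names; not the
# study's actual code). Predictors enter one at a time if their p-value in
# the expanded model is < .05; R-squared of the final model is returned.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list, alpha: float = 0.05):
    # Multicollinearity screen, as in the text: pairwise |rho| must stay below .60.
    corr = df[candidates].corr(method="spearman").abs().to_numpy()
    np.fill_diagonal(corr, 0.0)
    assert corr.max() < 0.60, "candidate predictors are too highly intercorrelated"

    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X, missing="drop").fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)

    if not selected:
        return [], 0.0
    final = sm.OLS(df[outcome], sm.add_constant(df[selected]), missing="drop").fit()
    return selected, final.rsquared

# Hypothetical usage with assumed column names:
# predictors, r2 = forward_stepwise(df, "vsvt_difficult_correct",
#                                   ["age", "fsiq", "adhd_inattention"])
```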

Results

Demographic variables are presented in Table 1. The mean age of this sample is 14.0 years (SD = 3.1, range = 6–19). The mean education for mothers is 13.2 years (SD = 2.3, range = 6–18) and the mean education for fathers is 13.7 years (SD = 2.4, range = 9–20). The sample was evenly split between males and females, with the majority being Caucasian. Regarding diagnoses, the majority of the sample had traumatic brain injuries (44%; ranging from mild to severe), followed by stroke (14%), epilepsy (12%), and hydrocephalus (9%). The “other” group, which comprised 21% of the sample, contained various diagnoses, such as neurofibromatosis-1 (no seizures), Wilson's disease, arachnoid cyst, posterior fossa tumor, and encephalitis, as well as cases not yet diagnosed.

Table 1.

Demographics for pediatric neurology patients

Demographics                       Mean    SD     Range
Age (years)                        14.0    3.1    6.2–19.1
Parent Education (years)
  Mom                              13.2    2.3    6–18
  Dad                              13.7    2.4    9–20

Demographics                       Percentage
Sex
  Male                             50
  Female                           50
Race
  Caucasian                        84
  Asian                            13
  First Nation/Native American
Diagnosis
  Traumatic Brain Injury           44
  Stroke                           14
  Epilepsy                         12
  Hydrocephalus                    9
  Other                            21

Performance on the VSVT is presented in Table 2. The overall pass rate in this pediatric neurology sample, based on the total score (i.e., “the most objective and quantifiable evidence regarding whether respondents are exhibiting biased responding on the VSVT”, p. 30, Slick et al., 1997) was 95%, with 5% falling within the “questionable” range and 0% of the patients falling within the “invalid” range. The mean performance on the total score was 42.7 (SD = 6.1) with scores ranging from 21 to the maximum. Percentile ranks for performance on the total items correct were as follows: 1st percentile = 21, 5th percentile = 29, 10th percentile = 33, 25th percentile = 41, 50th percentile = 45, 75th percentile = 47, and 90th percentile = maximum score. The mean response latency across all items on the VSVT was 3.0 s (SD = 1.4) with a range of 1.0–6.6 s. Percentile ranks for latency on the total items correct were as follows: 1st percentile = 6.57, 5th percentile = 6.10, 10th percentile = 5.39, 25th percentile = 3.70, 50th percentile = 2.71, 75th percentile = 2.06, 90th percentile = 1.54, 95th percentile = 1.27, and 99th percentile = 0.97. There was no preference for right- or left-sided responses, with a mean preference of −0.03 (SD = 0.1) and a range from −0.4 to +0.2.
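Sample percentile ranks of this kind can be derived directly from the raw score distribution. The sketch below uses simulated placeholder scores (matched only to the reported mean and SD), not the study data, to illustrate the computation.

```python
# Placeholder scores drawn to roughly match the reported mean/SD; shown only
# to illustrate how sample percentile ranks like those above could be computed.
import numpy as np

rng = np.random.default_rng(1)
total_correct = np.clip(np.round(rng.normal(42.7, 6.1, size=100)), 0, 48)

for pct in (1, 5, 10, 25, 50, 75, 90):
    print(f"{pct}th percentile = {np.percentile(total_correct, pct):.0f}")
```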

Table 2.

VSVT performance for pediatric neurology patients

VSVT performance                                  Mean    SD     Range
Easy Items Correct                                22.7    2.6    7–max
  Block 1 Easy Items Correct                      7.7     0.9    4–max
  Block 2 Easy Items Correct                      7.5     1.1    2–max
  Block 3 Easy Items Correct                      7.5     1.1    1–max
Easy Items Response Latency (s)                   2.5     1.3    0.8–8.0
  Block 1 Easy Items Response Latency (s)         2.1     1.8    0.7–15.5
  Block 2 Easy Items Response Latency (s)         2.3     1.4    0.7–10.0
  Block 3 Easy Items Response Latency (s)         3.1     2.2    0.9–19.3
Difficult Items Correct                           20.1    4.0    8–max
  Block 1 Difficult Items Correct                 6.7     1.5    1–max
  Block 2 Difficult Items Correct                 6.7     1.7    1–max
  Block 3 Difficult Items Correct                 6.7     1.4    3–max
Difficult Items Response Latency (s)              3.6     1.6    0.9–8.8
  Block 1 Difficult Items Response Latency (s)    3.2     1.8    0.7–13.3
  Block 2 Difficult Items Response Latency (s)    3.7     2.1    1.2–14.2
  Block 3 Difficult Items Response Latency (s)    4.0     2.2    1.0–16.7
Total Items Correct                               42.7    6.1    21–max
Total Items Response Latency (s)                  3.0     1.4    1.0–6.6
Right-Left Preference                             −0.03   0.1    (−0.4)–0.2

VSVT classification descriptors                   “Valid”   “Questionable”   “Invalid”
  Easy Items (frequency of sample)                97        2                1
  Difficult Items (frequency of sample)           84        16               0
  Total Items (frequency of sample)               95        5                0

Notes: VSVT = Victoria Symptom Validity Test; N = 100; SD = standard deviation. “Valid,” “questionable,” and “invalid” are the descriptive categories provided by the VSVT printout for interpretation of performance. “Max” refers to the maximum score that could be obtained.

Interpretation of performance on the easy and difficult items separately is also recommended by Slick and colleagues (1997) because each of these scores can “also provide objective and quantifiable evidence regarding whether respondents are exaggerating symptoms on the VSVT” (p. 32). On the easy items, 97% had performance in the “valid” range, 2% had performance in the “questionable” range, and 1% had performance in the “invalid” range. The mean performance on the easy items was 22.7 (SD = 2.6) with scores ranging from 7 to the maximum. Percentile ranks for performance on the easy items correct were as follows: 1st percentile = 7, 5th percentile = 18, 10th percentile = 20, 25th percentile = 23, and 50th percentile = maximum score. The mean response latency on the easy items was 2.5 s (SD = 1.3) with a range of 0.8–8.0 s. Percentile ranks for latency on the easy items correct were as follows: 1st percentile = 8.03, 5th percentile = 5.27, 10th percentile = 4.22, 25th percentile = 3.14, 50th percentile = 2.08, 75th percentile = 1.51, 90th percentile = 1.19, 95th percentile = 1.00, and 99th percentile = 0.79.

For the difficult items, 84% had performance in the “valid” range and 16% had performance in the “questionable” range. None of the participants had performance in the “invalid” range on the difficult items. In addition, no patient scored below the cutoff on the easy items and then went on to pass the difficult items. The mean performance on the difficult items was 20.1 (SD = 4.0) with scores ranging from 8 to the maximum. Percentile ranks for performance on the difficult items correct were as follows: 1st percentile = 8, 5th percentile = 11, 10th percentile = 13, 25th percentile = 18, 50th percentile = 21, 75th percentile = 23, and 90th percentile = maximum score. The mean response latency on the difficult items was 3.6 s (SD = 1.6) with a range of 0.9–8.8 s. Percentile ranks for latency on the difficult items correct were as follows: 1st percentile = 8.78, 5th percentile = 7.30, 10th percentile = 5.99, 25th percentile = 4.51, 50th percentile = 3.25, 75th percentile = 2.51, 90th percentile = 1.86, 95th percentile = 1.54, and 99th percentile = 0.94.

A portion of this sample (n = 65) was also administered the TOMM. In this subsample, 63 of 65 patients (96.9%) passed the TOMM, and none of the patients was classified as having suboptimal performance on both effort measures. Of the 16 patients who were identified as having “questionable” or “invalid” performance on any portion of the VSVT, 8 were also given the TOMM, and all 8 passed it. There were no significant correlations between TOMM and VSVT performance in this pediatric sample (e.g., Spearman's ρ between TOMM Trial 1 and VSVT scores was ρ = .01 [p = .97] for easy items, ρ = .24 [p = .053] for difficult items, and ρ = .18 [p = .12] for total items).

Cognitive functioning across various domains is presented in Table 3. The mean intellectual functioning in this sample was 93.3 (SD = 15.7), with a range of 49–135 and 6.3% falling at or below the 2nd percentile. When considering rates of impaired performance across the domains, 49.1% had impaired sustained visual attention, 20.2% had clinically elevated problems with inattention on parent ratings, 6.8% had impaired processing speed, 19.4% had impaired memory for a word list, 1.1% had impaired memory for stories, 5.6% had impaired memory for faces, 27.1% had impaired executive functioning on parent ratings, and 17.6% had impaired adaptive functioning.

Table 3.

Cognitive abilities in pediatric neurology patients administered the VSVT

Cognitive domains                                    n     Mean    SD      Range      % Impaired
Overall Intelligence (index score)                   95    93.3    15.7    49–135     6.3
Sustained Attention (percentile)                     55    18.0    25.2    1–99       49.1
Attention, Parent Rating (percentile)†               89    66.8    31.4    1–99       20.2
Processing Speed (index score)                       88    88.0    12.8    53–135     6.8
Verbal Memory, Word List (z score)                   67    −0.3    1.1     −4 to 2    19.4
Verbal Memory, Story (scaled score)                  89    10.9    2.9     1–17       1.1
Visual Memory, Faces (scaled score)                  71    9.7     3.1     1–18       5.6
Executive Functioning, Parent Rating (t score)†      96    59.3    13.9    32–92      27.1
Adaptive Functioning, Parent Rating (index score)    74    98.1    30.1    15–145     17.6

Notes: SD = standard deviation. “Impaired” is defined as being at or below the 2nd percentile (i.e., approximately 2 SD below the mean). Index scores have a mean of 100 and SD of 15. Percentile scores have a mean of 50 and range from 1 to 99. T scores have a mean of 50 and SD of 10. Z scores have a mean of 0 and SD of 1. Scaled scores have a mean of 10 and SD of 3. Items marked with a dagger (†) indicate that higher scores reflect lower functioning (or more problems).
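Because the table mixes several score metrics, the sketch below shows one way to put them on a common footing: convert a score to a percentile via the normal cumulative distribution and flag "impaired" at or below the 2nd percentile. The metric definitions follow the notes above; reversing the comparison for rating scales where higher scores mean more problems is an assumption made for the example, not a statement of the authors' procedure.

```python
# Converts the score metrics described in the notes (index, T, scaled, z) to
# percentiles via the normal CDF and flags impairment at or below the 2nd
# percentile. Reversing the comparison for higher-is-worse rating scales is
# an assumption made for this example.
from scipy.stats import norm

METRICS = {"index": (100, 15), "t": (50, 10), "scaled": (10, 3), "z": (0, 1)}

def to_percentile(score: float, metric: str) -> float:
    mean, sd = METRICS[metric]
    return 100.0 * norm.cdf((score - mean) / sd)

def is_impaired(score: float, metric: str, higher_is_worse: bool = False) -> bool:
    pct = to_percentile(score, metric)
    return pct >= 98.0 if higher_is_worse else pct <= 2.0

print(round(to_percentile(70, "index"), 1))        # index score 70 -> ~2.3rd percentile
print(is_impaired(69, "index"))                    # True: at or below the 2nd percentile
print(is_impaired(72, "t", higher_is_worse=True))  # True: T = 72 falls above the 98th percentile
```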

Correlations between effort test performance (for the VSVT, this included number correct and response latencies; for the TOMM, this included number correct on Trial 1), age, and the various cognitive abilities are presented in Table 4. Fewer correct VSVT easy items were significantly correlated with younger age, lower overall intelligence, and more problems with daily attention based on parental ratings. Fewer correct VSVT difficult items were significantly correlated with younger age, lower overall intelligence, more problems with daily attention based on parental ratings, slower processing speed, more problems with daily executive abilities based on parental ratings, and more problems with adaptive functioning based on parental ratings. Slower response latencies on both the easy and difficult items were significantly correlated with younger age, lower overall intelligence, slower processing speed, and more problems with adaptive functioning based on parental ratings. None of the correlations between VSVT performance, sustained visual attention, verbal memory, or visual memory were significant in these patients. Additionally, nearly identical results were obtained when examining correlations between VSVT and age/intelligence/cognitive abilities in only those patients with “valid” profiles on the total, easy, and/or difficult items. For the TOMM, worse performance on Trial 1 (i.e., fewer correct items) was significantly correlated with lower overall intelligence, more problems with daily attention based on parental ratings, worse verbal memory based on the word list, and more problems with daily executive abilities based on parental ratings.

Table 4.

Correlations between age, cognitive domains, and effort testing for pediatric neurology patients

Demographic and cognitive domains        n     VSVT correct responses           VSVT response latency            TOMM Trial 1
                                               Easy     Difficult  Total        Easy      Difficult  Total
Age (years)                              100   .42**    .48**      .49**        −.61**    −.61**     −.66**       .10
Overall Intelligence                     95    .29**    .43**      .40**        −.29**    −.28**     −.31**       .38**
Sustained Attention                      55    −.06     −.06       −.08         .08       .18        .14          .06
Attention, Parent Rating†                89    −.27*    −.30**     −.30**       .18       .15        .18          −.33*
Processing Speed                         88    .19      .36**      .34**        −.22*     −.25*      −.26*        .03
Verbal Memory, Word List                 67    −.01     .11        .11          −.08      −.07       −.08         .33*
Verbal Memory, Story                     89    .02      .04        .05          .05       .02        .04          .13
Visual Memory, Faces                     71    .19      .17        .17          −.13      −.17       −.17         .19
Executive Functions, Parent Rating†      96    −.13     −.25*      −.25*        .14       .09        .12          −.34**
Adaptive Functioning, Parent Rating      74    .21      .24*       .25*         −.15      −.22       −.20         .09

Notes: VSVT = Victoria Symptom Validity Test; TOMM = Test of Memory Malingering. On items marked with a dagger (†), higher scores represent more problems. Due to ceiling effects with VSVT and TOMM scores, Spearman's ρ correlations were used for correct responses. Pearson's r correlations were used for response latencies on the VSVT.

*p < .05. **p < .01.

Stepwise regression analyses were conducted to look at demographic and cognitive variables that may be predictors of VSVT performance on easy and difficult items. Based on the non-parametric correlations conducted, the following variables were entered as potential predictors of performance on the easy VSVT items for the first stepwise regression: age (ρ = .42); intelligence (ρ = .29); and parental ratings of daily attention abilities (ρ = −.27). Intercorrelations between these three variables were ρ < .60, which supports the inclusion of each variable (i.e., no multicollinearity). Results of the first stepwise regression indicate that only age was a significant predictor of performance on the easy VSVT items (t = 3.84, p < .001), which accounted for 39.1% of the variance. Intelligence (t = 1.71, p = .09) and daily attentional abilities (t = −1.47, p = .15) were not significant predictors of performance on the VSVT easy items.

When considering the second stepwise regression, which aimed to examine the potential predictors on the difficult VSVT items, the following variables were entered based on non-parametric correlations: age (ρ = .48); intelligence (ρ = .43); parental ratings of daily attention abilities (ρ = −.30); processing speed (ρ = .36); parental ratings of daily executive abilities (ρ = −.25); and parental ratings of daily adaptive abilities (ρ = .24). Intercorrelations between these six variables were ρ < .60, which supports the inclusion of each variable (i.e., no multicollinearity). Results of the second stepwise regression indicate that intelligence (t = 2.75, p = .008) and age (t = 2.40, p = .02) are significant predictors of performance on the difficult VSVT items, with these two variables contributing to 45.5% of the variance (intelligence, R2 = .351; age R2 change = .104). Daily attentional abilities (t = −0.23, p = .82), processing speed (t = 0.57, p = .57), daily executive abilities (t = −1.08, p = .27), and daily adaptive abilities (t = −0.89, p = .38) were not significant predictors of performance on the VSVT difficult items.

Discussion

Measuring a patient's compliance with test taking is an important and ever-growing aspect of neuropsychological assessments. In order to interpret data as adequately reflecting a patient's current functioning, the clinician must trust that the data are valid. Measures of test-taking compliance (i.e., effort, symptom validity, response bias) provide an indirect gauge of whether or not a person is adequately engaged in an assessment and allow the clinician to extrapolate that the overall results on the assessment are reflective of current abilities. There has been increasing research into pediatric patients' performance on several different measures of compliance, but not yet the VSVT. Because clinicians require a range of methods for evaluating test-taking compliance throughout any assessment, it is important to study the performance of different measures in various populations.

The purpose of this study was to examine performance on the VSVT in a heterogeneous pediatric neurology sample. When considering the total score on the VSVT, 95% had a “valid” performance and 5% were deemed “questionable” (i.e., based on probabilities derived for binomial distributions). None of the pediatric patients had an “invalid” total score. On the easy items, 97% obtained a “valid” score, 2% were deemed to be “questionable,” and 1% was within the “invalid” range. On the difficult items, 84% were deemed to have “valid” performances and 16% were “questionable.” None of the pediatric patients scored within the “invalid” range on the VSVT difficult items.

The difference in performance on the easy (97% “valid”) versus difficult (84% “valid”) items is worth exploring. In this pediatric sample, it is possible that the difficult items were in fact difficult for some patients. Of the 13 patients who had “valid” easy items but “questionable” difficult items, 11 had intellectual abilities more than 1 SD below the mean and/or were under the age of 10 years. This does not mean that performance on the difficult items should be readily dismissed in children and adolescents who are also lower functioning (although it is possible that the easy items may be a purer measure of test-taking compliance in similar samples of patients). For example, in the present study, patients as young as 7 years and patients with FSIQs as low as 49 were able to obtain “valid” scores on the difficult items (Figs 1 and 2). However, “questionable” performance does mean that such scores need to be carefully considered in light of all patient-related factors (e.g., presence or absence of litigation, behavioral presentation, developmental level) and performance on other stand-alone or embedded measures of test-taking compliance. As suggested by Slick and colleagues (1997), “VSVT performance that is neither significantly above nor significantly below chance is viewed as questionable and suggests the need for additional symptom validity measures” (p. 29).

Fig. 1.

Age of patients on the VSVT (a) easy, (b) difficult, and (c) total items.


Fig. 2.

Intellectual abilities of patients on the VSVT (a) easy, (b) difficult, and (c) total items.


There were significant correlations between effort test performance, age, and some cognitive abilities, with regression analyses indicating that age accounts for 39% of the variance in easy item performance and that age and intelligence together account for 46% of the variance in difficult item performance. The relation between effort test performance, age, and some cognitive abilities is not exclusive to the present pediatric neurology sample. Significant correlations between VSVT scores, age, and intelligence were also found in a sample of adult epilepsy presurgery patients (Loring et al., 2005). Significant correlations between VSVT performance and neuropsychological scores were found in a sample of adults with severe traumatic brain injuries (Macciocchi et al., 2006), with VSVT scores also being predicted by performance on visual perception and verbal fluency measures. Other measures of effort testing, such as the TOMM, have good applicability with pediatric neurology patients, despite a significant correlation between effort test performance and intellectual abilities (MacAllister et al., 2009). Overall, significant correlations between effort test performance, age, and intelligence may simply be a reality of administering these measures to patients with neurological disorders, which suggests that results should be interpreted cautiously when these measures are administered to the youngest and lowest-functioning patients (i.e., <10 years and/or FSIQ scores <1st percentile). However, an interesting, and potentially very important, finding from this study is that performance on the VSVT in these pediatric neurology patients was not significantly correlated with sustained visual attention (objectively measured), memory for verbal information, or memory for visual information. The absence of correlations between memory abilities and performance on effort tests is perhaps most important, because the ability to avoid a memory confound on test performance is the raison d'être of symptom validity tests.

At this time, there are no other pediatric VSVT studies against which to compare the current results. In adult clinical samples, the rates of passing the VSVT based on the binomial cutoffs in the manual varied somewhat, but overall the vast majority of patients were able to pass this measure and “invalid” profiles were rarely obtained. Grote and colleagues (2000) reported that 100% of non-compensation-seeking patients with intractable seizures were able to score in the “valid” range on the difficult items, which was contrasted with only 58% of compensation-seeking patients with a mild traumatic brain injury having performance on the difficult items deemed “valid.” Slick and colleagues (2003) presented six case examples of patients with severe memory impairment, with all patients readily passing the difficult VSVT items. Loring and colleagues (2005) examined performance in a sample of presurgical epilepsy patients and found that 91.7% had “valid” performance with 8.3% deemed to be “questionable.” Macciocchi and colleagues (2006) reported that 100% of patients with severe traumatic brain injuries obtained “valid” performance on the VSVT total score, with 99% having total performance greater than 44. Loring and colleagues (2007) examined performance on the difficult VSVT items in a large non-litigating clinical sample and reported that 89% had “valid” performance, 10.7% were “questionable,” and 0.3% were deemed “invalid.”

A secondary aspect of this study involved examining whether patients who were identified as having “questionable” or “invalid” performance on the VSVT also fell below the cutoff on another measure of test-taking compliance (TOMM). It is important to acknowledge that in those patients who were administered both the VSVT and the TOMM, none scored below the established cutoff scores on both measures of test-taking compliance. The two patients who did score below established cutoff scores on the TOMM had “valid” profiles on all components of the VSVT. For the eight patients who scored within “questionable” or “invalid” on any components of the VSVT and were also given the TOMM, all of these patients were deemed to have passed the TOMM. Even the patient who scored within the “invalid” range on the VSVT easy items passed the TOMM. These results reinforce that there can be incremental value in administering more than one measure of test-taking compliance in a pediatric neuropsychological evaluation.

There were some limitations to this study that should be identified. First, it is important to reiterate that this was a sample of patients evaluated through a neurology clinic at a tertiary care hospital. The results of this study may not translate to other clinical settings, notably those settings that have patients with different diagnoses, cognitive abilities, or base rates of effort test failure (e.g., a setting that provides assessments for medical-legal cases). Second, this sample is heterogeneous and does not represent one type of diagnosis. Although this means that the results might not translate directly to a specific clinic, it does provide a basis for starting to use the VSVT in pediatric neuropsychological assessments. Third, it may seem somewhat counter-intuitive to study performance on the VSVT in pediatric neurology patients prior to knowing how healthy children and adolescents perform on this measure. However, by determining that the VSVT can be administered to the present sample, this provides a strong rationale for extending the normative information into pediatrics. For example, it will be valuable to establish normative curves for pediatric performance on the VSVT so that percentile ranks can be used for interpretation in addition to the binomial probability rates (see Macciocchi et al., 2006, for a criticism of only using binomial probabilities and the potential for higher false-negative rates).

Overall, children and adolescents with neurological disorders had a 95% pass rate on the VSVT total items. The rates of obtaining “invalid” profiles on the VSVT in children and adolescents with neurological disorders are extremely low (ranging from 1% on the easy items to 0% on the difficult and total items). Obtaining “questionable” profiles is also relatively uncommon, particularly on the easy (2%) and total (5%) items, compared with slightly higher rates on the difficult items (16%). This study provides a first line of supportive research for the utility of the VSVT in pediatric populations, although more research on the VSVT as a measure of test-taking compliance with children and adolescents is warranted, particularly with healthy samples, other pediatric clinical groups, and “simulated malingering” designs.

Funding

No external funding was obtained for this study. Support for research is provided by the Neurosciences program at the Alberta Children's Hospital.

Conflict of Interest

B.L.B. receives funding from Psychological Assessment Resources, Inc., which is the publisher of the VSVT. However, he does not have any financial interest in the VSVT. He also receives royalties from a book published by Oxford University Press.

Acknowledgements

The author would like to thank Emily Tam, Andrea Jubinville, and Julie Wershler for their assistance with data entry, Dr Helen Carlson for her assistance with manuscript formatting, and Dr Elisabeth Sherman for her helpful comments on an earlier draft.

References

Bianchini, K. J., Mathias, C. W., & Greve, K. W. (2001). Symptom validity testing: A critical review. The Clinical Neuropsychologist, 15(1), 19–45.
Blaskewitz, N., Merten, T., & Kathmann, N. (2008). Performance of children on symptom validity tests: TOMM, MSVT, and FIT. Archives of Clinical Neuropsychology, 23(4), 379–391.
Brooks, B. L., Sherman, E. M., & Krol, A. L. (2012). Utility of TOMM Trial 1 as an indicator of effort in children and adolescents. Archives of Clinical Neuropsychology, 27(1), 23–29.
Bruininks, R. H., Woodcock, R. W., Weatherman, R. F., & Hill, B. K. (1996). SIB-R: Scales of Independent Behavior-Revised. Itasca, IL: Riverside Publishing Company.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20(4), 419–426.
Carone, D. A. (2008). Children with moderate/severe brain damage/dysfunction outperform adults with mild-to-no brain damage on the Medical Symptom Validity Test. Brain Injury, 22(12), 960–971.
Cohen, M. J. (1997). Children's Memory Scale. San Antonio, TX: The Psychological Corporation.
Constantinou, M., & McCaffrey, R. J. (2003). Using the TOMM for evaluating children's effort to perform optimally on neuropsychological measures. Child Neuropsychology, 9(2), 81–90.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (1994). California Verbal Learning Test, Children's Version. San Antonio, TX: The Psychological Corporation.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test, Second Edition. San Antonio, TX: The Psychological Corporation.
Donders, J. (2005). Performance on the Test of Memory Malingering in a mixed pediatric sample. Child Neuropsychology, 11(2), 221–227.
Donders, J., & Boonstra, T. (2007). Correlates of invalid neuropsychological test performance after traumatic brain injury. Brain Injury, 21(3), 319–326.
DuPaul, G. J., Power, T. J., Anastopoulos, A. D., & Reid, R. (1998). ADHD Rating Scale-IV: Checklists, norms, and clinical interpretation. New York, NY: The Guilford Press.
Frazier, T. W., Frazier, A. R., Busch, R. M., Kerwood, M. A., & Demaree, H. A. (2008). Detection of simulated ADHD and reading disorder using symptom validity measures. Archives of Clinical Neuropsychology, 23(5), 501–509.
Gioia, G. A., Isquith, P. K., Guy, S. C., & Kenworthy, L. (2000). Behavior Rating Inventory of Executive Function. Child Neuropsychology, 6(3), 235–238.
Green, P., & Flaro, L. (2003). Word Memory Test performance in children. Child Neuropsychology, 9(3), 189–207.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23(9), 741–750.
Grote, C. L., Kooker, E. K., Garron, D. C., Nyenhuis, D. L., Smith, C. A., & Mattingly, M. L. (2000). Performance of compensation seeking and non-compensation seeking samples on the Victoria Symptom Validity Test: Cross-validation and extension of a standardization study. Journal of Clinical and Experimental Neuropsychology, 22(6), 709–719.
Gunn, D., Batchelor, J., & Jones, M. (2010). Detection of simulated memory impairment in 6- to 11-year-old children. Child Neuropsychology, 16(2), 105–118.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23(7), 1093–1129.
Kirk, J. W., Harris, B., Hutaff-Lee, C. F., Koelemay, S. W., Dinkins, J. P., & Kirkwood, M. W. (2011). Performance on the Test of Memory Malingering (TOMM) among a large clinic-referred pediatric sample. Child Neuropsychology, 17(3), 242–254.
Kirkwood, M. W. (2012). Overview of tests and techniques to detect negative response bias in children. In E. M. S. Sherman & B. L. Brooks (Eds.), Pediatric Forensic Neuropsychology (pp. 136–161). New York: Oxford University Press.
Kirkwood, M. W., & Kirk, J. W. (2010). The base rate of suboptimal effort in a pediatric mild TBI sample: Performance on the Medical Symptom Validity Test. The Clinical Neuropsychologist, 24(5), 860–872.
Loring, D. W., Larrabee, G. J., Lee, G. P., & Meador, K. J. (2007). Victoria Symptom Validity Test performance in a heterogeneous clinical sample. The Clinical Neuropsychologist, 21(3), 522–531.
Loring, D. W., Lee, G. P., & Meador, K. J. (2005). Victoria Symptom Validity Test performance in non-litigating epilepsy surgery candidates. Journal of Clinical and Experimental Neuropsychology, 27(5), 610–617.
MacAllister, W. S., Nakhutina, L., Bender, H. A., Karantzoulis, S., & Carlson, C. (2009). Assessing effort during neuropsychological evaluation with the TOMM in children and adolescents with epilepsy. Child Neuropsychology, 15(6), 521–531.
Macciocchi, S. N., Seel, R. T., Alderson, A., & Godsall, R. (2006). Victoria Symptom Validity Test performance in acute severe traumatic brain injury: Implications for test interpretation. Archives of Clinical Neuropsychology, 21(5), 395–404.
Nagle, A. M., Everhart, D. E., Durham, T. W., McCammon, S. L., & Walker, M. (2006). Deception strategies in children: Examination of forced choice recognition and verbal learning and memory techniques. Archives of Clinical Neuropsychology, 21(8), 777–785.
Rienstra, A., Spaan, P. E., & Schmand, B. (2010). Validation of symptom validity tests using a "child-model" of adult cognitive impairments. Archives of Clinical Neuropsychology, 25(5), 371–382.
Seidel, W. T., & Joschko, M. (1991). Assessment of attention in children. The Clinical Neuropsychologist, 5(1), 53–66.
Slick, D. J., Hopp, G., Strauss, E., & Thompson, G. B. (1997). Victoria Symptom Validity Test. Odessa, FL: Psychological Assessment Resources.
Slick, D. J., & Sherman, E. M. S. (2012). Differential diagnosis of malingering and related clinical presentations. In E. M. S. Sherman & B. L. Brooks (Eds.), Pediatric Forensic Neuropsychology (pp. 113–135). New York: Oxford University Press.
Slick, D. J., Tan, J. E., Strauss, E., Mateer, C. A., Harnadek, M., & Sherman, E. M. (2003). Victoria Symptom Validity Test scores of patients with profound memory impairment: Nonlitigant case studies. The Clinical Neuropsychologist, 17(3), 390–394.
Tombaugh, T. (1996). Test of Memory Malingering (TOMM). North Tonawanda, NY: Multi-Health Systems.
Wechsler, D. (1997a). Wechsler Adult Intelligence Scale, Third Edition: Administration and scoring manual. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997b). Wechsler Memory Scale, Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children, Fourth Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale, Fourth Edition. San Antonio, TX: Pearson.