Abstract

Objective

To assess agreement between four brief computerized neurocognitive assessment tools (CNTs), ANAM, CogState, CNS Vital Signs, and ImPACT, by comparing rates of low scores.

Methods

Four hundred and six US Army service members (SMs) with and without acute mild traumatic brain injury (mTBI) completed two randomly assigned CNTs, with order of administration also randomly assigned. We performed a base rate analysis for each CNT to determine the proportions of SMs in the control and mTBI groups who had various numbers of scores that were 1.0+, 1.5+, and 2.0+ standard deviations below the normative mean. We used these results to identify a hierarchy of low score levels ranging from poorest to least poor performance. We then compared the agreement between every low score level from each CNT pair administered to the SMs.

Results

More SMs in the mTBI group had low scores on all CNTs than SMs in the control group. As performance worsened, the association with mTBI became stronger for all CNTs. Most if not all SMs who performed at the worst level on any given CNT also had low scores on the other CNTs they completed but not necessarily at an equally low level.

Conclusion

These results suggest that all of the CNTs we examined are broadly similar but still retain some psychometric differences that need to be better understood. Furthermore, the base rates of low scores we present could themselves be useful to clinicians and researchers as a guide for interpreting results from the CNTs.

Introduction

Brief computerized neurocognitive test (CNT) batteries, also known as computerized neuropsychological assessment devices (CNADs) and neurocognitive assessment tools (NCATs), are often used to assess athletes and US military service members following a mild traumatic brain injury (mTBI). One study of 178 US high schools employing at least one athletic trainer found that 39.9% used CNTs (Meehan, d’Hemecourt, Collins, Taylor, & Comstock, 2012). In the 2008 National Defense Authorization Act (NDAA), the United States Congress required the Department of Defense (DoD) to begin computerized baseline cognitive testing for all service members (SMs) deploying overseas for use as a comparison to assessments following a TBI, should one occur (United States Congress, 2008). CNTs were administered more than 1.7 million times from June 2007 through June 2015 to obtain pre-deployment baseline data from service members to be used as a control if they sustain an mTBI and more than 84,000 times for clinical and post-deployment evaluations (Defense Centers of Excellence [DCOE], 2015). CNTs are now also appearing in so-called “retail” urgent care centers intended specifically for concussion assessment and management (HeadFirst, 2018). CNTs have proliferated so much and so rapidly in the last 10 years that the American Academy of Clinical Neuropsychology (AACN) and National Academy of Neuropsychology (NAN) issued a joint position paper listing appropriate standards and conventions for CNTs regarding marketing and performance claims, administration and interpretation, technical issues, data security, psychometrics, and other aspects of CNTs (Bauer et al., 2012).

Over the last 15 years, several CNTs have been developed and marketed specifically for identifying cognitive deficits following mTBI or concussion, especially those resulting from sports, and they often include a reference to mTBI, concussion, or sports in their names. Examples include CogSport, CogState Sport, Immediate Post-concussion Assessment and Cognitive Test (ImPACT), Automated Neuropsychological Assessment Metrics (ANAM) Sports Concussion Battery, ANAM version 4 Traumatic Brain Injury-Military (ANAM4 TBI-MIL), Concussion Sentinel, and Headminder Concussion Resolution Index. Other batteries, such as CNS Vital Signs (CNSVS), are more broadly focused but suggest that certain scores produced by the battery are well suited for assessing patients with mTBI (CNS Vital Signs, 2014). Such names and marketing suggest that these batteries are similar and perhaps substitutable for each other, especially to patients and para-professionals, who can now easily obtain information about them from the developers’ and/or marketers’ websites. Indeed, the first issue that the joint AACN and NAN position paper on CNTs addressed was marketing and performance claims, advising that they should be subject to and meet the same developmental and usage standards as traditional educational, psychological, and neuropsychological tests (Bauer et al., 2012).

A review of the literature about four commercially available CNTs that were developed and/or promoted for assessing mTBI found little research that compared the batteries to each other within the same individual (Arrieux, Cole, & Ahrens, 2017). A study by Schatz and Putz (2006) correlated nominally similar domains from ImPACT, CogSport, and Headminder, and two long-standing paper-and-pencil tests that assess processing speed (Trails A and B and the WAIS-R Digit Symbol subtest). They found that all of the CNTs measured aspects of reaction time, processing speed, attention, and working memory, with shared variance in the processing speed, simple reaction time, and complex reaction time domains, but little shared variance in the memory domain. However, even in the domains where CNT scores were significantly correlated, the amount of shared variance was below 50% when we squared the correlation coefficients presented by Schatz and Putz (2006). They stated that it was premature to make any conclusions about whether any of the CNTs they studied were superior to the others based on their observed correlations and suggested numerous areas in which further comparisons of CNTs were needed. Though there have been a handful of other studies investigating multiple CNTs at once, primarily focused on test–retest reliability and/or the ability to distinguish between concussed patients and controls (Broglio, Ferrara, Macciocchi, Baumgartner, & Elliott, 2007; Cole et al., 2013; Gardner, Shores, Batchelor, & Honan, 2012; Nelson et al., 2016, 2017), these studies indirectly compared the CNTs rather than directly comparing specific scores from each CNT as Schatz and Putz (2006) did.
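The shared-variance point above follows from squaring the correlation coefficient (r²); a minimal sketch with an illustrative value (not one from the cited study):

```python
# Shared variance between two test scores is the squared Pearson
# correlation, r^2. The value below is illustrative only.
def shared_variance(r: float) -> float:
    """Return the proportion of variance shared by two measures."""
    return r ** 2

# Even a fairly strong correlation of r = 0.70 implies less than
# 50% shared variance between two nominally similar CNT scores.
shared_variance(0.70)  # approximately 0.49
```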

In this paper, we compare performance on CNTs within the same individuals, as Schatz and Putz did. However, our study differs from theirs and from others that have examined CNTs because we are examining their equivalence as used clinically. Specifically, we are trying to determine the extent to which a selected group of CNTs agree with each other with respect to a range of low scores when administered to the same individual; that is, how often low scores on one CNT would be identified by another CNT. Since the developers of these tools claim that they measure similar cognitive functions, and the tools are marketed for use in the same disease areas (e.g., concussion/mTBI), the expectation is that agreement should be relatively high. Furthermore, because of our interest in examining the degree of clinical equivalence, our analytical methods differ from those of previous CNT studies. Whereas previous studies used familiar statistical methods such as correlation coefficients, comparison of group means, and/or ROC curves to examine reliability, validity, and the ability of CNTs to distinguish between cases and controls, we employ methods similar to those used in what are known as equivalence or noninferiority studies (Friedman, Furberg, & DeMets, 2010). These methods differ in that they are designed to show whether a new intervention or assessment tool performs no worse, by an acceptable prespecified margin, than the current standard intervention or assessment tool in terms of a relevant dichotomous clinical outcome. Thus, the statistics of interest in noninferiority studies are generally relative rates such as percentages and risk or odds ratios and their accompanying confidence intervals, which provide information about effect size and the precision of the estimated effect size as well as statistical significance.

Methods

Materials

Four CNTs were examined in this analysis: Automated Neuropsychological Assessment Metrics, Version 4, Traumatic Brain Injury-Military (ANAM4 TBI-MIL); CogState, also known as CogSport or Axon Sports; Central Nervous System Vital Signs (CNSVS); and Immediate Post-concussion Assessment and Cognitive Test (ImPACT). The specific cognitive tests from each CNT relevant to this analysis, the domains measured, and the scores produced are presented in Table 1. See Cole, Arrieux, Dennison, and Ivins (2016) for a more detailed description of the CNTs used in this study. In addition to the CNTs, SMs were also administered the Computerized Assessment of Response Bias (CARB) as an independent effort measure.

Table 1.

Descriptions of cognitive portions of the CNTs

ANAM4 TBI-MIL (output: throughput, percent correct, and reaction time for each test)
  • Simple reaction time: visuomotor processing speed, simple motor speed, and attention
  • Procedural reaction time: processing speed, visuomotor reaction time, and attention
  • Code substitution learning: visual scanning, visual perception, attention, associative learning, and processing speed
  • Code substitution delayed: learning and delayed visual recognition memory
  • Mathematical processing: basic computational skills, concentration, and working memory
  • Matching to sample: visual-spatial processing, working memory, and visual recognition memory
  • Simple reaction time, repeated: index of attention (i.e., reaction time and vigilance)

CogState (output: standard scores for each test based on reaction time or accuracy)
  • Detection: simple reaction time
  • Identification: processing speed
  • One Back: attention, working memory
  • One Card Learning: learning and recognition memory

CNSVS (output: composite domain scores derived from the tests: Composite Memory, Verbal Memory, Visual Memory, Psychomotor Speed, Reaction Time, Complex Attention, Cognitive Flexibility, Processing Speed, Executive Function, Simple Attention, and Motor Speed)
  • Verbal Memory, Immediate: word recognition and memory, immediate and delayed recall
  • Visual Memory, Immediate: visual recognition and memory, immediate and delayed recall
  • Finger Tapping: motor speed, fine motor control
  • Symbol Digit Coding: information processing speed, complex attention, visual perceptual speed
  • Stroop: simple reaction time, complex reaction time, inhibition, executive skills, processing speed
  • Shifting Attention: executive functioning, reaction time
  • Continuous Performance: sustained attention, choice reaction time, impulsivity
  • Verbal Memory, Delayed: word recognition, memory, and delayed recall

ImPACT (output: composite domain scores derived from the tests: Verbal Memory, Visual Memory, Visual Motor Speed, and Reaction Time)
  • Word Memory, Immediate: verbal recognition memory
  • Design Memory, Immediate: visual recognition memory
  • X’s and O’s: visual working memory and visual processing/visual motor speed
  • Symbol Match: visual processing speed, learning, and memory
  • Color Match: choice reaction time and impulse control/response inhibition
  • Four Letters: working memory and visual-motor response speed
  • Word Memory, Delayed: verbal recognition memory
  • Design Memory, Delayed: visual recognition memory
BatteryTestCognitive domains assessedOutput
ANAM4 TBI-MILSimple reaction timeVisuomotor processing speed, simple motor speed, and attentionThroughput, percent correct, and reaction time for each test
Procedural reaction timeProcessing speed, visuomotor reaction time, and attention
Code substitution learningVisual scanning, visual perception, attention, associative learning, and processing speed
Code substitution delayedLearning and delayed visual recognition memory
Mathematical processingBasic computational skills, concentration, and working memory
Matching to sampleVisual-spatial processing, working memory, and visual recognition memory
Simple reaction time, repeatedIndex of attention (i.e., reaction time and vigilance)
CogStateDetectionSimple reaction timeStandard scores for each test based on reaction time or accuracy
IdentificationProcessing speed
One BackAttention, working memory
One Card LearningLearning and recognition memory
CNSVSVerbal Memory, ImmediateWord recognition and memory, immediate and delayed recallComposite domain scores derived from tests
  • Composite memory

  • Verbal Memory

  • Visual Memory

  • Psychomotor Speed

  • Reaction Time

  • Complex Attention

  • Cognitive Flexibility

  • Processing Speed

  • Executive Function

  • Simple Attention

  • Motor Speed

Visual Memory, ImmediateVisual recognition and memory, immediate and delayed recall
Finger TappingMotor speed, fine motor control
Symbol Digit CodingInformation processing speed, complex attention, visual perceptual speed
StroopSimple reaction time, complex reaction time, inhibition, executive skills, processing speed
Shifting AttentionExecutive functioning, reaction time
Continuous PerformanceSustained attention, choice reaction time, impulsivity
Verbal Memory, DelayedWord recognition, memory, and delayed recall
ImPACTWord Memory, ImmediateVerbal recognition memoryComposite domain scores derived from tests
  • Verbal Memory

  • Visual Memory

  • Visual Motor Speed

  • Reaction Time

Design Memory, ImmediateVisual recognition memory
X’s and O’sVisual working memory and visual processing/visual motor speed
Symbol MatchVisual processing speed, learning, and memory
Color MatchChoice reaction time and impulse control/response inhibition
Four LettersWorking memory and visual-motor response speed
Word Memory, DelayedVerbal recognition memory
Design Memory, DelayedVisual recognition memory
Table 1.

Descriptions of cognitive portions of the CNTs

BatteryTestCognitive domains assessedOutput
ANAM4 TBI-MILSimple reaction timeVisuomotor processing speed, simple motor speed, and attentionThroughput, percent correct, and reaction time for each test
Procedural reaction timeProcessing speed, visuomotor reaction time, and attention
Code substitution learningVisual scanning, visual perception, attention, associative learning, and processing speed
Code substitution delayedLearning and delayed visual recognition memory
Mathematical processingBasic computational skills, concentration, and working memory
Matching to sampleVisual-spatial processing, working memory, and visual recognition memory
Simple reaction time, repeatedIndex of attention (i.e., reaction time and vigilance)
CogStateDetectionSimple reaction timeStandard scores for each test based on reaction time or accuracy
IdentificationProcessing speed
One BackAttention, working memory
One Card LearningLearning and recognition memory
CNSVSVerbal Memory, ImmediateWord recognition and memory, immediate and delayed recallComposite domain scores derived from tests
  • Composite memory

  • Verbal Memory

  • Visual Memory

  • Psychomotor Speed

  • Reaction Time

  • Complex Attention

  • Cognitive Flexibility

  • Processing Speed

  • Executive Function

  • Simple Attention

  • Motor Speed

Visual Memory, ImmediateVisual recognition and memory, immediate and delayed recall
Finger TappingMotor speed, fine motor control
Symbol Digit CodingInformation processing speed, complex attention, visual perceptual speed
StroopSimple reaction time, complex reaction time, inhibition, executive skills, processing speed
Shifting AttentionExecutive functioning, reaction time
Continuous PerformanceSustained attention, choice reaction time, impulsivity
Verbal Memory, DelayedWord recognition, memory, and delayed recall
ImPACTWord Memory, ImmediateVerbal recognition memoryComposite domain scores derived from tests
  • Verbal Memory

  • Visual Memory

  • Visual Motor Speed

  • Reaction Time

Design Memory, ImmediateVisual recognition memory
X’s and O’sVisual working memory and visual processing/visual motor speed
Symbol MatchVisual processing speed, learning, and memory
Color MatchChoice reaction time and impulse control/response inhibition
Four LettersWorking memory and visual-motor response speed
Word Memory, DelayedVerbal recognition memory
Design Memory, DelayedVisual recognition memory
BatteryTestCognitive domains assessedOutput
ANAM4 TBI-MILSimple reaction timeVisuomotor processing speed, simple motor speed, and attentionThroughput, percent correct, and reaction time for each test
Procedural reaction timeProcessing speed, visuomotor reaction time, and attention
Code substitution learningVisual scanning, visual perception, attention, associative learning, and processing speed
Code substitution delayedLearning and delayed visual recognition memory
Mathematical processingBasic computational skills, concentration, and working memory
Matching to sampleVisual-spatial processing, working memory, and visual recognition memory
Simple reaction time, repeatedIndex of attention (i.e., reaction time and vigilance)
CogStateDetectionSimple reaction timeStandard scores for each test based on reaction time or accuracy
IdentificationProcessing speed
One BackAttention, working memory
One Card LearningLearning and recognition memory
CNSVSVerbal Memory, ImmediateWord recognition and memory, immediate and delayed recallComposite domain scores derived from tests
  • Composite memory

  • Verbal Memory

  • Visual Memory

  • Psychomotor Speed

  • Reaction Time

  • Complex Attention

  • Cognitive Flexibility

  • Processing Speed

  • Executive Function

  • Simple Attention

  • Motor Speed

Visual Memory, ImmediateVisual recognition and memory, immediate and delayed recall
Finger TappingMotor speed, fine motor control
Symbol Digit CodingInformation processing speed, complex attention, visual perceptual speed
StroopSimple reaction time, complex reaction time, inhibition, executive skills, processing speed
Shifting AttentionExecutive functioning, reaction time
Continuous PerformanceSustained attention, choice reaction time, impulsivity
Verbal Memory, DelayedWord recognition, memory, and delayed recall
ImPACTWord Memory, ImmediateVerbal recognition memoryComposite domain scores derived from tests
  • Verbal Memory

  • Visual Memory

  • Visual Motor Speed

  • Reaction Time

Design Memory, ImmediateVisual recognition memory
X’s and O’sVisual working memory and visual processing/visual motor speed
Symbol MatchVisual processing speed, learning, and memory
Color MatchChoice reaction time and impulse control/response inhibition
Four LettersWorking memory and visual-motor response speed
Word Memory, DelayedVerbal recognition memory
Design Memory, DelayedVisual recognition memory

Participants

Data from 406 US military service members with (n = 167) and without (n = 239) an acute mTBI who had complete and valid CNT data were used for this analysis. They were recruited for a validity study comparing CNTs to a battery of traditional neuropsychological tests; the sampling methods and sample characteristics, including missing and invalid data, are described in more detail in a previous paper (Cole, Arrieux, Ivins, Schwab, & Qashu, 2017). SMs without acute mTBI were recruited after being briefed about the study during their in-processing to Fort Bragg, NC. Service members with mTBI were recruited from the Concussion Care Clinic at Womack Army Medical Center (WAMC) within 7 days of a medically diagnosed mTBI. The study was approved by WAMC’s Institutional Review Board, and all SMs provided written informed consent prior to participation.

Procedures

All SMs were randomly assigned two CNTs and completed them consecutively in the same session. The administration order was also randomly assigned; however, we found that administration order had little to no impact on performance (Cole et al., 2016). Ninety-eight SMs were excluded from the analysis for failing to demonstrate adequate effort on embedded effort measures on one or more of the CNTs they completed, using guidelines provided by the CNT developers, or for failing to meet the CARB cutoff (i.e., <94%) (Allen, Green, Cox, & Conder, 2006). All SMs included in the analyses were deemed to have given adequate effort on both CNTs as well as the CARB. The number of SMs completing each pair of CNTs is shown in Table 2. Performance on each CNT was assessed using standardized scores (z-scores) that were automatically calculated from normative data embedded in the respective CNT software. We used the specific domain/test scores provided by each CNT, which are identified in Table 1. The only CNT that provides some flexibility in the choice of scores is ANAM4 TBI-MIL, which provides percent correct, reaction time, and throughput for each of the seven tests used. We chose throughput because it is a composite of percent correct and reaction time and is believed to best reflect the underlying processes at work in ANAM4 (Short, Cernich, Wilken, & Kane, 2007).

Table 2.

Number of SMs with valid data who completed each pair of CNTs (total N = 406)

         | ANAM | CogState | CNSVS
CogState |  74  |          |
CNSVS    |  66  |    70    |
ImPACT   |  67  |    66    |  63

Analyses

None of the CNTs we examined offered any guidance about how to assess performance beyond flagging individual scores that were at or below two performance thresholds, usually 1.5 and 2.0 standard deviations below the normative mean, and, in one case, suggesting cognitive domains that might be pertinent to specific neurologic diseases. This is a problem because the CNTs produce varying numbers of individual test/domain scores, ranging from 4 to 11. Because of this lack of guidance for assessing overall battery performance (for example, the minimum number of low scores necessary to be clinically meaningful), and to account for obvious psychometric differences between the CNTs, particularly the varying numbers of scores, we had to develop a way of assessing overall battery performance before we could attempt to compare agreement between the CNTs at performance levels that were likely to be clinically meaningful. To be as fair as possible to the CNTs, we needed an empirically based way of assessing overall performance that accounted for both how low individual scores were and how many low scores there were at various low score levels. This required a two-phase analysis. In Phase 1, we developed the performance classification levels. In Phase 2, we compared agreement levels between CNTs at each of the performance levels identified in Phase 1. Additionally, we examined intercorrelations between individual scores within and between CNTs to determine whether these associations might be related to agreement levels.

To accomplish Phase 1, we conducted a base rate analysis of data from each CNT to determine the proportions of service members in the control and mTBI groups who had various numbers of scores that were 1.0 or more, 1.5 or more, and 2.0 or more standard deviations (SDs) below the normative mean. We chose the 1.0 SD cutoff because multiple scores between 1.0 and 1.5 SDs below the normative mean may be clinically relevant. Previous research on ANAM4 TBI-MIL found that having three or more scores 1.0+ SDs below the normative mean was both rare in a control group and strongly distinguished soldiers with mTBI from controls (Ivins et al., 2015). We chose 1.5 and 2.0 SDs below the normative mean because they are the cutoffs the CNTs use to identify potentially clinically meaningful individual scores. This allowed us to create a cumulative hierarchy of performance levels for each CNT reflecting a spectrum of poor performance, ranging from 1+ scores 1.0+ SDs below the normative mean to potentially having all scores on the CNT 2.0+ SDs below the normative mean, with the proportions of service members reflecting the prevalence of each performance level in each group for every CNT.
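The base rate tabulation described above can be sketched as follows. This is a minimal illustration with simulated data; the array shapes, variable names, and function are assumptions for demonstration, not the study's actual code:

```python
import numpy as np

# Simulated data: each row is one participant's z-scores on a CNT
# (e.g., 7 ANAM throughput scores). Values are illustrative only.
rng = np.random.default_rng(0)
z_scores = rng.standard_normal((100, 7))

def base_rates(z, cutoffs=(-1.0, -1.5, -2.0)):
    """For each SD cutoff, return the proportion of participants with
    k or more scores at or below that cutoff, for k = 1..n_scores."""
    n, n_scores = z.shape
    rates = {}
    for c in cutoffs:
        n_low = (z <= c).sum(axis=1)  # count of low scores per person
        rates[c] = [(n_low >= k).mean() for k in range(1, n_scores + 1)]
    return rates

rates = base_rates(z_scores)
# rates[-1.0][0] is the proportion with 1+ scores 1.0+ SDs below the
# mean; rates[-1.0][3] the proportion with 4+ such scores, and so on.
```

By construction the proportions fall as k increases and as the cutoff deepens, which is exactly the cumulative hierarchy of performance levels described in the text.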

Since one of the primary uses for the CNTs we examined is assessing the cognitive functioning of patients following an mTBI, we enhanced our base rate analysis by using crude prevalence ratios (PRs) to compare the prevalence of various CNT performance levels in the control and mTBI groups. PRs provided a clear and simple way to measure the strength of association between mTBI and the various low-score performance levels identified in the base rate analysis. We used the following guidelines for assessing the strength of association between mTBI and CNT performance: PR ≥ 2, minimum practical (MP) effect; PR ≥ 3, medium effect; PR ≥ 4, strong effect (Ferguson, 2009). Together, the prevalence of poor performance and the PRs augmented the hierarchical rank ordering of performance levels by allowing us to determine how rare the performance levels were in the control group and how strongly they were associated with mTBI. These could then be used to gauge how well each low-score performance level of each CNT distinguished mTBI cases from controls, and thereby serve as a crude way of assessing how equivalent the low score levels are between CNTs.
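A crude prevalence ratio of this kind is simple arithmetic; the counts below are hypothetical, chosen only to illustrate the Ferguson (2009) guidelines cited above:

```python
# Crude prevalence ratio for one low-score performance level.
# All counts here are hypothetical, not taken from the study's tables.
def prevalence_ratio(cases_low, n_cases, controls_low, n_controls):
    """PR = prevalence of the level in the mTBI group divided by its
    prevalence in the control group."""
    return (cases_low / n_cases) / (controls_low / n_controls)

# Example: 33 of 100 mTBI SMs vs. 7 of 100 controls meet the level.
pr = prevalence_ratio(33, 100, 7, 100)  # approximately 4.71
# Against the guidelines in the text (PR >= 2 minimum practical,
# >= 3 medium, >= 4 strong), this would count as a strong effect.
```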

In Phase 2 of the analysis, we calculated the percent agreement between all of the low score levels for each of the CNT pairs we examined. We did not use intraclass correlation coefficients because they can only be calculated for continuous measures. We did not use the popular kappa statistic as a measure of agreement because it is affected by the prevalence of the phenomenon under investigation (Feinstein & Cicchetti, 1990): low prevalence can result in low kappa values despite high agreement between raters. Furthermore, the correction for chance agreement used in the kappa calculation, which is the main reason given for using kappa over percent agreement, assumes that the raters are humans who can be prone to subjectivity in their ratings (Feinstein & Cicchetti, 1990). In this study, however, the raters are computerized tests, which are not prone to subjectivity when rating a patient. Finally, kappa is a single measure of overall agreement encompassing agreement on both positive and negative findings, which can each have different levels of agreement, similar to a diagnostic test having high sensitivity but lower specificity or vice versa. When these aspects are combined into a single overall measure, they can cancel each other out, rendering the overall measure unrepresentative of both attributes, or the more prevalent attribute will be more heavily reflected. Since we are focused only on low scores, which are ultimately what is of interest in a clinical evaluation, kappa may not accurately reflect agreement on this one attribute.
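Percent agreement on a dichotomized low-score classification can be computed as follows. The participant data and the `positive_agreement` helper are illustrative assumptions, not part of the study's analysis:

```python
# Hypothetical classifications for 8 participants on two CNTs:
# 1 = meets the low score level, 0 = does not.
cnt_a = [1, 0, 0, 1, 0, 0, 1, 0]
cnt_b = [1, 0, 0, 0, 0, 0, 1, 0]

def percent_agreement(a, b):
    """Proportion of participants classified the same way by both CNTs."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def positive_agreement(a, b):
    """Agreement restricted to low-score (positive) classifications:
    both CNTs flag low, out of all cases where either flags low."""
    both = sum(x == 1 and y == 1 for x, y in zip(a, b))
    either = sum(x == 1 or y == 1 for x, y in zip(a, b))
    return both / either if either else float("nan")

percent_agreement(cnt_a, cnt_b)   # 0.875 (7 of 8 classified alike)
positive_agreement(cnt_a, cnt_b)  # 2/3 (2 joint positives of 3 flagged)
```

The second function reflects the text's point that agreement on positives and on negatives can differ; overall percent agreement can look high simply because most participants are negative on both tests.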

We calculated 95% confidence intervals to assess the precision of our prevalence ratio estimates and to determine whether the association between low scores and mTBI was statistically significant. A 95% confidence interval of a prevalence ratio that excludes the value one indicates that the association is statistically significant at the 0.05 level. We used Poisson regression in SPSS Version 22 to estimate the prevalence ratios and confidence intervals (IBM, 2013). All models were checked for over- and under-dispersion and, when necessary, we modified the standard error of the parameter estimate for the effect of mTBI by multiplying it by the square root of the chi-square dispersion parameter to adjust for inadequate model fit (Kianifard & Gallo, 1995). We used one-way ANOVA and chi-square statistics to determine whether the groups of SMs who completed each pair of CNTs differed in terms of demographic and military characteristics. We used Cohen’s f and w to determine the magnitude of any significant group differences on demographic and military characteristics and between selected mTBI characteristics (Cohen, 1992). Pearson correlation coefficients were used to examine the linear relationships between individual scores across the CNTs in each pair and between scores within each CNT.
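For a crude prevalence ratio, a 95% confidence interval can also be computed by hand with the standard Wald formula on the log scale. The counts below are hypothetical, and this shortcut is only a sketch of the idea; the study itself used Poisson regression with a dispersion adjustment:

```python
import math

def pr_with_ci(a, n1, b, n2, z=1.96):
    """Crude prevalence ratio with a 95% Wald CI on the log scale.
    a/n1: low scorers among mTBI SMs; b/n2: among controls."""
    pr = (a / n1) / (b / n2)
    # Standard error of ln(PR) for two independent proportions.
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(pr) - z * se_log)
    hi = math.exp(math.log(pr) + z * se_log)
    return pr, lo, hi

# Hypothetical counts: 33/100 in the mTBI group vs. 7/100 controls.
pr, lo, hi = pr_with_ci(33, 100, 7, 100)
# Here the interval excludes 1, so the association would be
# statistically significant at the 0.05 level.
```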

Results

The SMs completing each pair of CNTs were generally similar in terms of demographic, military, and mTBI characteristics (Table 3). There was a statistically significant difference for age but the effect size was small. There were no other statistically significant differences between the groups and the effect sizes were small to none.

Table 3.

Comparison of demographic, military, and injury characteristics of SMs in each CNT pair

Characteristic | ANAM-CogState | ANAM-CNSVS | ANAM-ImPACT | CogState-CNSVS | CogState-ImPACT | CNSVS-ImPACT | p-value | Effect size
Age, mean (SD) | 30.81 (8.40) | 30.91 (7.91) | 33.72 (8.77) | 29.17 (7.30) | 31.85 (8.33) | 32.03 (7.79) | .037 | Small
% male | 86.5 | 86.4 | 85.1 | 87.1 | 81.8 | 90.5 | .826 | None
% white | 73.0 | 68.2 | 55.2 | 57.1 | 71.2 | 73.0 | .079 | Small
WAIS FSIQ, mean (SD) | 106.85 (11.27) | 107.45 (11.97) | 108.11 (12.74) | 107.30 (13.33) | 111.97 (13.28) | 109.56 (13.63) | .176 | Small
Education | | | | | | | .176 | Small
 % High school graduate | 27.0 | 22.7 | 13.4 | 27.1 | 19.7 | 14.3 | |
 % Some college | 33.9 | 51.5 | 41.8 | 41.4 | 42.4 | 39.7 | |
 % College graduate | 39.2 | 25.8 | 44.8 | 31.4 | 37.9 | 46.0 | |
Years in Army, mean (SD) | 8.43 (7.43) | 8.50 (6.39) | 9.10 (6.73) | 7.74 (7.09) | 9.33 (7.29) | 8.92 (6.29) | .800 | None
Military rank | | | | | | | .411 | Small
 % Junior enlisted | 39.2 | 31.8 | 23.9 | 32.9 | 24.2 | 20.6 | |
 % NCO | 20.3 | 24.2 | 19.4 | 25.7 | 21.2 | 15.9 | |
 % Senior NCO | 10.8 | 16.7 | 20.9 | 11.4 | 21.2 | 22.2 | |
 % Officer | 29.7 | 27.3 | 35.8 | 30.0 | 33.3 | 41.3 | |
% with recent mTBI | 40.5 | 43.9 | 40.3 | 44.3 | 39.4 | 38.1 | .973 | None
Time since mTBI, mean (SD) | 5.58 (1.79) | 4.86 (1.79) | 5.00 (1.57) | 4.76 (1.98) | 4.88 (1.76) | 4.77 (1.82) | .511 | Small

Tables 4–7 show the results of the base rate analyses of data from ANAM, CogState, CNSVS, and ImPACT, respectively. For all CNTs, as performance worsened, the number of controls at each performance level decreased relative to the number of SMs with mTBI. For example, Table 4 shows that 51.2% of SMs in the control group who completed ANAM had one or more scores 1+ SDs below the normative mean, but only 7.0% had four or more such scores. The pattern among SMs in the mTBI group was similar; however, the proportions of SMs with mTBI who met any given performance level were higher than in the control group on all CNTs. For example, Table 4 shows that while only 7.0% of SMs in the control group had four or more ANAM scores 1+ SDs below the normative mean, 32.7% of SMs in the mTBI group performed at this level. Furthermore, as the number of scores meeting each performance cutoff increased, the association between those performance levels and mTBI generally grew stronger. For example, Table 4 shows that the prevalence of one or more ANAM scores 1+ SDs below the normative mean was 1.52 times higher in the mTBI group than in the control group; the prevalence of two or more such scores was 1.94 times higher; the prevalence of three or more was 2.14 times higher; and the prevalence of six or more was 4.83 times higher. This pattern was observed at all SD cutoffs for all CNTs. Additionally, on all of the CNTs, the strongest associations between low scores and mTBI occurred at performance levels whose prevalence in the control group was below 10%; these levels were generally characterized by multiple low scores at each of the three performance cutoffs we examined.
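The cumulative counts that underlie these base rates can be sketched as follows. This is an illustrative reconstruction, not the study's analysis code; the function name and the toy data are our own.

```python
import numpy as np

def cumulative_low_score_rates(z_scores, cutoff_sd=1.0, max_count=7):
    """Percent of the sample with k or more scores at least `cutoff_sd`
    SDs below the normative mean, for k = 1..max_count. Rows of
    `z_scores` are people; columns are a CNT's component z-scores."""
    z = np.asarray(z_scores, dtype=float)
    n_low = (z <= -cutoff_sd).sum(axis=1)   # count of low scores per person
    return {k: 100.0 * float(np.mean(n_low >= k))
            for k in range(1, max_count + 1)}

# Toy data (4 people x 3 standardized scores), not study data
z = [[-1.2,  0.3, -0.5],
     [-2.1, -1.6, -1.1],
     [ 0.4,  0.9,  0.1],
     [-1.0, -0.2,  0.6]]
rates = cumulative_low_score_rates(z, cutoff_sd=1.0, max_count=3)
# rates -> {1: 75.0, 2: 25.0, 3: 25.0}; the percentages are cumulative,
# so a person with three low scores is counted at 1+, 2+, and 3+
```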

Table 4.

Prevalence of service members with low ANAM scores by number of scores at selected cutoffs

| # of scores meeting cutoff | Cumulative % control (n = 129) | Cumulative % mTBI (n = 98) | Crude PR | 95% CI (PR) | Effect size | Low score level |
| --- | --- | --- | --- | --- | --- | --- |
| Cutoff: 1+ SDs below normative mean | | | | | | |
| 1+ | 51.2 | 77.6 | 1.52 | (1.34–1.71) | None | 7 |
| 2+ | 29.5 | 57.1 | 1.94 | (1.52–2.48) | None | 6 |
| 3+ | 18.6 | 39.8 | 2.14 | (1.47–3.10) | MP | 4 |
| 4+ | 7.0 | 32.7 | 4.68 | (2.50–8.77) | Strong | 1 |
| 5+ | 4.7 | 20.4 | 4.39 | (1.92–10.02) | Strong | 1 |
| 6+ | 2.3 | 11.2 | 4.83 | (1.43–16.28) | Strong | 1 |
| 7 | 0.0 | 5.1 | — | — | — | 1 |
| Cutoff: 1.5+ SDs below normative mean | | | | | | |
| 1+ | 26.7 | 60.2 | 2.10 | (1.65–2.67) | MP | 5 |
| 2+ | 13.2 | 37.8 | 2.87 | (1.83–4.49) | MP | 2 |
| 3+ | 5.4 | 25.4 | 4.51 | (2.14–9.52) | Strong | 1 |
| 4+ | 2.3 | 12.2 | 5.27 | (1.58–17.5) | Strong | 1 |
| 5+ | 0.8 | 5.1 | 6.58 | (0.80–54.36) | Strong | 1 |
| 6+ | 0.0 | 2.0 | — | — | — | 1 |
| 7 | 0.0 | 1.0 | — | — | — | 1 |
| Cutoff: 2+ SDs below normative mean | | | | | | |
| 1+ | 16.3 | 38.8 | 2.38 | (1.60–3.56) | MP | 3 |
| 2+ | 5.4 | 19.4 | 3.57 | (1.64–7.79) | Medium | 1 |
| 3+ | 2.3 | 6.1 | 2.63 | (0.69–10.05) | MP | 1 |
| 4+ | 0.8 | 3.1 | 3.95 | (0.42–37.00) | Medium | 1 |
| 5+ | 0.0 | 1.0 | — | — | — | 1 |
| 6+ | 0.0 | 1.0 | — | — | — | 1 |
| 7 | 0.0 | 0.0 | — | — | — | 1 |
Table 5.

Prevalence of service members with low CogState scores by number of scores at selected cutoffs

| # of scores meeting cutoff | Cumulative % control (n = 128) | Cumulative % mTBI (n = 98) | Crude PR | 95% CI (PR) | Effect size | Low score level |
| --- | --- | --- | --- | --- | --- | --- |
| Cutoff: 1+ SDs below normative mean | | | | | | |
| 1+ | 53.9 | 74.5 | 1.38 | (1.22–1.56) | None | 5 |
| 2+ | 25.8 | 53.1 | 2.06 | (1.56–2.71) | MP | 3 |
| 3+ | 8.6 | 28.6 | 3.33 | (1.84–6.00) | Medium | 1 |
| 4 | 0.8 | 1.0 | 1.31 | (0.08–20.71) | None | 1 |
| Cutoff: 1.5+ SDs below normative mean | | | | | | |
| 1+ | 33.6 | 57.1 | 1.70 | (1.36–2.13) | None | 4 |
| 2+ | 6.3 | 34.7 | 5.55 | (2.87–10.65) | Strong | 1 |
| 3+ | 0.8 | 17.3 | 22.2 | (3.23–152.52) | Strong | 1 |
| 4 | 0.0 | 0.0 | — | — | — | 1 |
| Cutoff: 2+ SDs below normative mean | | | | | | |
| 1+ | 16.4 | 38.8 | 2.36 | (1.58–3.53) | MP | 2 |
| 2+ | 2.3 | 20.4 | 8.71 | (2.82–26.89) | Strong | 1 |
| 3+ | 0.0 | 8.2 | — | — | — | 1 |
| 4 | 0.0 | 0.0 | — | — | — | 1 |
Table 6.

Prevalence of service members with low CNSVS scores by number of scores at selected cutoffs

| # of scores meeting cutoff | Cumulative % control (n = 118) | Cumulative % mTBI (n = 86) | Crude PR | 95% CI (PR) | Effect size | Low score level |
| --- | --- | --- | --- | --- | --- | --- |
| Cutoff: 1+ SDs below normative mean | | | | | | |
| 1+ | 52.5 | 87.2 | 1.66 | (1.49–1.85) | None | 8 |
| 2+ | 29.7 | 75.6 | 2.55 | (2.05–3.17) | MP | 6 |
| 3+ | 18.6 | 66.3 | 3.56 | (2.58–4.89) | Medium | 5 |
| 4+ | 15.3 | 50.0 | 3.28 | (2.20–4.89) | Medium | 3 |
| 5+ | 9.3 | 39.5 | 4.24 | (2.45–7.36) | Strong | 1 |
| 6+ | 6.8 | 32.6 | 4.80 | (2.46–9.39) | Strong | 1 |
| 7+ | 5.1 | 20.9 | 4.17 | (1.79–9.46) | Strong | 1 |
| 8+ | 2.5 | 17.4 | 6.86 | (2.15–21.88) | Strong | 1 |
| 9+ | 2.5 | 11.6 | 4.57 | (1.34–15.57) | Strong | 1 |
| 10+ | 0.8 | 5.8 | 6.86 | (0.83–56.46) | Strong | 1 |
| 11 | 0.0 | 1.2 | — | — | — | 1 |
| Cutoff: 1.5+ SDs below normative mean | | | | | | |
| 1+ | 34.7 | 68.6 | 1.97 | (1.61–2.42) | None | 7 |
| 2+ | 15.3 | 57.0 | 3.74 | (2.55–5.47) | Medium | 2 |
| 3+ | 8.5 | 34.9 | 4.12 | (2.27–7.46) | Strong | 1 |
| 4+ | 6.8 | 26.7 | 3.95 | (1.96–7.93) | Medium | 1 |
| 5+ | 2.5 | 18.6 | 7.32 | (2.32–23.1) | Strong | 1 |
| 6+ | 1.7 | 15.1 | 8.92 | (2.17–36.71) | Strong | 1 |
| 7+ | 0.8 | 11.6 | 13.72 | (1.87–100.43) | Strong | 1 |
| 8+ | 0.0 | 8.1 | — | — | — | 1 |
| 9+ | 0.0 | 4.7 | — | — | — | 1 |
| 10+ | 0.0 | 3.5 | — | — | — | 1 |
| 11 | 0.0 | 0.0 | — | — | — | 1 |
| Cutoff: 2+ SDs below normative mean | | | | | | |
| 1+ | 16.9 | 54.7 | 3.22 | (2.24–4.65) | Medium | 4 |
| 2+ | 8.5 | 32.6 | 3.84 | (2.1–7.03) | Medium | 1 |
| 3+ | 5.1 | 20.9 | 4.12 | (1.79–9.46) | Strong | 1 |
| 4+ | 3.4 | 16.3 | 4.81 | (1.71–13.48) | Strong | 1 |
| 5+ | 0.8 | 12.8 | 15.09 | (2.09–108.95) | — | 1 |
| 6+ | 0.0 | 7.0 | — | — | — | 1 |
| 7+ | 0.0 | 4.7 | — | — | — | 1 |
| 8+ | 0.0 | 3.5 | — | — | — | 1 |
| 9+ | 0.0 | 3.5 | — | — | — | 1 |
| 10+ | 0.0 | 1.2 | — | — | — | 1 |
| 11 | 0.0 | 0.0 | — | — | — | 1 |
Table 7.

Prevalence of service members with low ImPACT scores by number of scores at selected cutoffs

| # of scores meeting cutoff | Cumulative % control (n = 124) | Cumulative % mTBI (n = 85) | Crude PR | 95% CI (PR) | Effect size | Low score level |
| --- | --- | --- | --- | --- | --- | --- |
| Cutoff: 1+ SDs below normative mean | | | | | | |
| 1+ | 37.1 | 52.9 | 1.43 | (1.13–1.8) | None | 3 |
| 2+ | 12.9 | 24.7 | 1.92 | (1.12–3.28) | None | 2 |
| 3+ | 2.4 | 11.8 | 4.87 | (1.43–16.58) | Strong | 1 |
| 4 | 0.0 | 2.4 | — | — | — | 1 |
| Cutoff: 1.5+ SDs below normative mean | | | | | | |
| 1+ | 12.9 | 24.7 | 1.92 | (1.12–3.28) | None | 2 |
| 2+ | 4.0 | 9.4 | 2.33 | (0.81–6.7) | MP | 1 |
| 3+ | 0.8 | 2.4 | 2.92 | (0.27–31.49) | MP | 1 |
| 4 | 0.0 | 1.2 | — | — | — | 1 |
| Cutoff: 2+ SDs below normative mean | | | | | | |
| 1+ | 4.8 | 9.4 | 1.95 | (0.72–5.24) | None | 1 |
| 2+ | 0.0 | 0.0 | — | — | — | 1 |
| 3+ | 0.0 | 0.0 | — | — | — | 1 |
| 4 | 0.0 | 0.0 | — | — | — | 1 |

Table 8 shows the low score levels of each CNT, derived from the base rate analysis results in Tables 4 through 7; these levels serve as the basis for the CNT comparison. The data in Table 8 were obtained by aggregating the base rate analysis results for each CNT into a single hierarchy of low scores reflecting both the number of low scores and how low they were. For example, in the case of ANAM (Table 4), several performance levels were rare in the control group (prevalence less than 10%) and distinguished the mTBI group from controls rather well (generally medium and strong effect sizes), for example 4+ scores 1+ SDs below the normative mean, 3+ scores 1.5+ SDs below the normative mean, and 2+ scores 2+ SDs below the normative mean. These reflected the worst ANAM performance. However, these performance levels are cumulative and overlap considerably. For example, the proportions of SMs in the control and mTBI groups who had four or more ANAM scores 2+ SDs below the normative mean (0.8% and 3.1%, respectively) were also reflected in the proportions of SMs who had four or more ANAM scores 1.5+ SDs below the normative mean (2.3% and 12.2%, respectively) and in the proportions of SMs who had four or more ANAM scores 1+ SDs below the normative mean (7.0% and 32.7%, respectively). Therefore, to reduce the number of low score levels, we collapsed these performance levels into a single level (low score level 1) reflecting the worst overall performance. Table 8 shows that for each CNT, only about 10% of the control group performed at the worst low score level, level 1, after aggregating the numerous worst performance levels identified in the base rate analysis.
As expected from the base rate analysis, this aggregated worst performance level was also the most strongly associated with mTBI, with prevalence ratios of 1.90 on ImPACT, 3.72 on CogState, 3.73 on ANAM, and 4.33 on CNSVS. The association between mTBI and low score level 1 was statistically significant at the 0.05 level for ANAM, CogState, and CNSVS, as indicated by 95% confidence intervals that exclude 1. The association was not significant for ImPACT (95% CI = 0.91–3.96); however, this is likely due largely to the small number of SMs in each group who performed at this level, and it might have reached significance with a larger sample.
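As a sketch of how a crude prevalence ratio and its confidence interval can be computed: the paper does not state which CI method it used, so this example uses the common log-scale Wald approximation, and the resulting interval will not necessarily match the published ones.

```python
import math

def crude_pr_ci(cases_mtbi, n_mtbi, cases_ctrl, n_ctrl, z=1.96):
    """Crude prevalence ratio (mTBI vs. control) for meeting a low score
    level, with a log-scale Wald-type 95% CI. The CI method is an
    assumption; the study does not report which method it used."""
    p1 = cases_mtbi / n_mtbi          # prevalence in the mTBI group
    p0 = cases_ctrl / n_ctrl          # prevalence in the control group
    pr = p1 / p0
    # standard error of log(PR) for two independent proportions
    se_log = math.sqrt((1 - p1) / cases_mtbi + (1 - p0) / cases_ctrl)
    return (pr,
            math.exp(math.log(pr) - z * se_log),
            math.exp(math.log(pr) + z * se_log))

# ANAM low score level 1 (Table 8): 34.7% of 98 mTBI SMs (~34 people)
# vs. 9.3% of 129 controls (~12 people) reproduces the crude PR of 3.73
pr, lo, hi = crude_pr_ci(34, 98, 12, 129)
```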

Table 8.

Low score levels by CNT rank ordered by prevalence in the control group

ANAM

| Cumulative % control (n = 129) | Cumulative % mTBI (n = 98) | Crude PR | 95% CI | Effect size | Low score level | Poor performance ranking |
| --- | --- | --- | --- | --- | --- | --- |
| 51.2 | 77.6 | 1.52 | (1.34–1.71) | None | 7 | Least |
| 38.3 | 64.3 | 1.66 | (1.38–2.00) | None | 6 | |
| 31.0 | 61.2 | 1.97 | (1.57–2.48) | None | 5 | |
| 24.8 | 49.0 | 1.97 | (1.47–2.64) | None | 4 | |
| 20.9 | 48.0 | 2.29 | (1.66–3.17) | MP | 3 | |
| 14.0 | 40.8 | 2.93 | (1.91–4.47) | MP | 2 | |
| 9.3 | 34.7 | 3.73 | (2.17–6.41) | Medium | 1 | Most |

CogState

| Cumulative % control (n = 128) | Cumulative % mTBI (n = 98) | Crude PR | 95% CI | Effect size | Low score level | Poor performance ranking |
| --- | --- | --- | --- | --- | --- | --- |
| 53.9 | 74.5 | 1.38 | (1.22–1.56) | None | 5 | Least |
| 37.5 | 63.3 | 1.69 | (1.39–2.05) | None | 4 | |
| 29.7 | 57.1 | 1.93 | (1.51–2.46) | None | 3 | |
| 18.8 | 46.9 | 2.50 | (1.77–3.55) | MP | 2 | |
| 10.2 | 37.8 | 3.72 | (2.23–6.19) | Medium | 1 | Most |

CNSVS

| Cumulative % control (n = 118) | Cumulative % mTBI (n = 86) | Crude PR | 95% CI | Effect size | Low score level | Poor performance ranking |
| --- | --- | --- | --- | --- | --- | --- |
| 52.5 | 87.2 | 1.66 | (1.49–1.85) | None | 8 | Least |
| 39.8 | 75.6 | 1.90 | (1.60–2.25) | None | 7 | |
| 33.1 | 75.6 | 2.29 | (1.87–2.79) | MP | 6 | |
| 22.9 | 69.8 | 3.05 | (2.32–4.01) | Medium | 5 | |
| 21.2 | 67.4 | 3.18 | (2.38–4.26) | Medium | 4 | |
| 17.8 | 61.6 | 3.46 | (2.47–4.86) | Medium | 3 | |
| 15.3 | 59.3 | 3.89 | (2.67–5.66) | Medium | 2 | |
| 11.0 | 47.7 | 4.33 | (2.67–7.00) | Strong | 1 | Most |

ImPACT

| Cumulative % control (n = 124) | Cumulative % mTBI (n = 85) | Crude PR | 95% CI | Effect size | Low score level | Poor performance ranking |
| --- | --- | --- | --- | --- | --- | --- |
| 37.1 | 52.9 | 1.43 | (1.13–1.8) | None | 3 | Least |
| 18.5 | 31.8 | 1.71 | (1.12–2.61) | None | 2 | |
| 8.1 | 15.3 | 1.90 | (0.91–3.96) | None | 1 | Most |

Tables 9–12 show the percent agreement between the various low score levels of each CNT pair. The worst overall performance on one CNT (low score level 1) was reflected somewhere in the low score spectrum of the other CNTs: across all of the CNT pairs, the percentage of SMs whose worst performance was matched somewhere in the other CNT's low score spectrum ranged from 66.7% to 100.0%, and this percentage was above 80% in 10 of the 12 pairwise comparisons. However, performing at the worst level on one CNT was not always matched by performance at the worst level on another CNT. The percentage of SMs who performed at the worst level on one CNT and also performed at the worst level on the other CNT they completed (low score level 1 to low score level 1) ranged from 13.3% to 84.6%, and was below 60% in eight of the 12 pairwise comparisons.
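The percent agreement reported in these tables can be sketched as follows. `percent_matched` is a hypothetical helper, not the study's code; low score levels follow the paper's convention (1 = poorest performance, higher numbers = less poor, `None` = no low scores).

```python
def percent_matched(levels_a, levels_b, a_level, b_max_level):
    """Of SMs at low score level `a_level` on CNT A, the percent whose
    CNT B low score level is `b_max_level` or poorer (numerically <=,
    since level 1 is the poorest). None means no low scores on CNT B."""
    idx = [i for i, a in enumerate(levels_a) if a == a_level]
    if not idx:
        return float("nan")
    hits = sum(1 for i in idx
               if levels_b[i] is not None and levels_b[i] <= b_max_level)
    return 100.0 * hits / len(idx)

# Toy paired low score levels for 4 SMs (not study data)
a = [1, 1, 2, 1]
b = [1, 3, 2, None]
match_at_1 = percent_matched(a, b, a_level=1, b_max_level=1)  # 1 of 3
match_at_3 = percent_matched(a, b, a_level=1, b_max_level=3)  # 2 of 3
```

The percentages are cumulative across `b_max_level`, which is why the matched percentages in the tables rise as the column moves toward the least poor level.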

Table 9.

Percent of service members with low ANAM scores matched by low scores on other CNTs by low score level

Poor ANAM performance matched by CogState (% matching at each CogState low score level; 1 = poorest)

| ANAM low score level | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| 1 (n = 17) | 58.8 | 70.6 | 76.5 | 82.4 | 88.2 |
| 2 (n = 20) | — | 75.0 | 80.0 | 85.0 | 90.0 |
| 3 (n = 28) | — | — | 71.4 | 78.6 | 85.7 |
| 4 (n = 29) | — | — | — | 79.3 | 86.2 |
| 5 (n = 38) | — | — | — | — | 86.2 |
| 6 (n = 41) | — | — | — | — | 87.8 |
| 7 (n = 45) | — | — | — | — | 86.7 |

Poor ANAM performance matched by CNSVS (% matching at each CNSVS low score level; 1 = poorest)

| ANAM low score level | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (n = 13) | 84.6 | 92.3 | 92.3 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| 2 (n = 19) | — | 78.9 | 78.9 | 89.5 | 89.5 | 100.0 | 100.0 | 100.0 |
| 3 (n = 22) | — | — | 81.8 | 90.9 | 90.9 | 100.0 | 100.0 | 100.0 |
| 4 (n = 23) | — | — | — | 87.0 | 87.0 | 95.7 | 95.7 | 95.7 |
| 5 (n = 28) | — | — | — | — | 78.6 | 92.9 | 96.4 | 96.4 |
| 6 (n = 32) | — | — | — | — | — | 87.5 | 90.6 | 93.8 |
| 7 (n = 39) | — | — | — | — | — | — | 79.5 | 89.7 |

Poor ANAM performance matched by ImPACT (% matching at each ImPACT low score level; 1 = poorest)

| ANAM low score level | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 (n = 8) | 25.0 | 37.5 | 87.5 |
| 2 (n = 11) | — | 45.5 | 81.8 |
| 3 (n = 16) | — | — | 81.3 |
| 4 (n = 19) | — | — | 73.7 |
| 5 (n = 23) | — | — | 69.6 |
| 6 (n = 28) | — | — | 64.3 |
| 7 (n = 42) | — | — | 54.8 |

Note: entries are cumulative; each cell is the percent of SMs at the row's low score level whose performance on the other CNT fell at the column's low score level or a poorer one. Dashes mark level combinations not reported.
Table 10.

Percent of service members with low CogState scores matched by low scores on other CNTs by low score level

Poor CogState performance matched by ANAM (% matching at each ANAM low score level; 1 = poorest)

| CogState low score level | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (n = 15) | 66.7 | 73.3 | 80.0 | 86.7 | 86.7 | 86.7 | 93.3 |
| 2 (n = 25) | — | 60.0 | 72.0 | 76.0 | 76.0 | 84.0 | 96.0 |
| 3 (n = 33) | — | — | 60.6 | 63.6 | 75.8 | 81.8 | 90.9 |
| 4 (n = 39) | — | — | — | 59.0 | 71.9 | 76.9 | 84.5 |
| 5 (n = 50) | — | — | — | — | 66.0 | 72.0 | 78.0 |

Poor CogState performance matched by CNSVS (% matching at each CNSVS low score level; 1 = poorest)

| CogState low score level | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (n = 17) | 58.8 | 82.4 | 82.4 | 88.2 | 94.1 | 100.0 | 100.0 | 100.0 |
| 2 (n = 21) | — | 66.7 | 66.7 | 76.2 | 81.0 | 85.7 | 85.7 | 95.2 |
| 3 (n = 27) | — | — | 63.0 | 70.4 | 74.1 | 88.9 | 88.9 | 96.3 |
| 4 (n = 31) | — | — | — | 67.7 | 71.2 | 83.9 | 87.1 | 93.5 |
| 5 (n = 43) | — | — | — | — | 55.8 | 67.4 | 69.8 | 76.7 |

Poor CogState performance matched by ImPACT (% matching at each ImPACT low score level; 1 = poorest)

| CogState low score level | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 (n = 12) | 50.0 | 58.3 | 83.3 |
| 2 (n = 17) | — | 52.9 | 82.4 |
| 3 (n = 23) | — | — | 73.9 |
| 4 (n = 28) | — | — | 67.9 |
| 5 (n = 36) | — | — | 54.5 |

Note: entries are cumulative; each cell is the percent of SMs at the row's low score level whose performance on the other CNT fell at the column's low score level or a poorer one. Dashes mark level combinations not reported.
Table 11.

Percent of service members with low CNSVS scores matched by low scores on other CNTs by low score level

Poor CNSVS performance matched by ANAM (% matching at each ANAM low score level; 1 = poorest)

| CNSVS low score level | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (n = 21) | 52.4 | 66.7 | 76.2 | 76.2 | 85.7 | 85.7 | 85.7 |
| 2 (n = 27) | — | 55.6 | 66.7 | 66.7 | 74.1 | 77.8 | 85.2 |
| 3 (n = 28) | — | — | 64.3 | 64.3 | 71.4 | 75.0 | 82.1 |
| 4 (n = 32) | — | — | — | 62.5 | 68.8 | 71.9 | 78.1 |
| 5 (n = 33) | — | — | — | — | 66.7 | 69.7 | 75.8 |
| 6 (n = 40) | — | — | — | — | — | 70.0 | 75.0 |
| 7 (n = 42) | — | — | — | — | — | — | 73.8 |
| 8 (n = 50) | — | — | — | — | — | — | 70.0 |

Poor CNSVS performance matched by CogState (% matching at each CogState low score level; 1 = poorest)

| CNSVS low score level | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| 1 (n = 17) | 58.8 | 58.8 | 70.6 | 76.5 | 82.4 |
| 2 (n = 23) | — | 60.9 | 73.9 | 79.3 | 82.6 |
| 3 (n = 24) | — | — | 70.8 | 75.0 | 83.3 |
| 4 (n = 28) | — | — | — | 75.0 | 82.1 |
| 5 (n = 29) | — | — | — | — | 82.8 |
| 6 (n = 35) | — | — | — | — | 82.9 |
| 7 (n = 37) | — | — | — | — | 81.1 |
| 8 (n = 44) | — | — | — | — | 75.0 |

Poor CNSVS performance matched by ImPACT (% matching at each ImPACT low score level; 1 = poorest)

| CNSVS low score level | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 (n = 15) | 13.3 | 53.3 | 73.3 |
| 2 (n = 18) | — | 44.4 | 66.7 |
| 3 (n = 21) | — | — | 66.7 |
| 4 (n = 22) | — | — | 63.6 |
| 5 (n = 22) | — | — | 63.6 |
| 6 (n = 26) | — | — | 61.5 |
| 7 (n = 29) | — | — | 58.6 |
| 8 (n = 39) | — | — | 56.4 |

Note: entries are cumulative; each cell is the percent of SMs at the row's low score level whose performance on the other CNT fell at the column's low score level or a poorer one. Dashes mark level combinations not reported.
Table 11.

Percent of service members with low CNSVS scores matched by low scores on other CNTs by low score level

Poor CNSVS performance matched by ANAMPoor CNSVS performance matched by CogStatePoor CNSVS performance matched by ImPACT
ANAM low score levelCogState low score levelImPACT low score level
Poor performance rankingPoor performance rankingPoor performance ranking
MostLeastMostLeastMostLeast
123456712345123
Poor performance rankingCNSVS low score level% matchingCNSVS low score level% matchingCNSVS low score level% matching
Most1 (n = 21)52.466.776.276.285.785.785.71 (n = 17)58.858.870.676.582.41 (n = 15)13.353.373.3
2 (n = 27)55.666.766.774.177.885.22 (n = 23)60.973.979.382.62 (n = 18)44.466.7
3 (n = 28)64.364.371.475.082.13 (n = 24)70.875.083.33 (n = 21)66.7
4 (n = 32)62.568.871.978.14 (n = 28)75.082.14 (n = 22)63.6
5 (n = 33)66.769.775.85 (n = 29)82.85 (n = 22)63.6
6 (n = 40)70.075.06 (n = 35)82.96 (n = 26)61.5
7 (n = 42)73.87 (n = 37)81.17 (n = 29)58.6
Least8 (n = 50)70.08 (n = 44)75.08 (n = 39)56.4
Poor CNSVS performance matched by ANAMPoor CNSVS performance matched by CogStatePoor CNSVS performance matched by ImPACT
ANAM low score levelCogState low score levelImPACT low score level
Poor performance rankingPoor performance rankingPoor performance ranking
MostLeastMostLeastMostLeast
123456712345123
Poor performance rankingCNSVS low score level% matchingCNSVS low score level% matchingCNSVS low score level% matching
Most1 (n = 21)52.466.776.276.285.785.785.71 (n = 17)58.858.870.676.582.41 (n = 15)13.353.373.3
2 (n = 27)55.666.766.774.177.885.22 (n = 23)60.973.979.382.62 (n = 18)44.466.7
3 (n = 28)64.364.371.475.082.13 (n = 24)70.875.083.33 (n = 21)66.7
4 (n = 32)62.568.871.978.14 (n = 28)75.082.14 (n = 22)63.6
5 (n = 33)66.769.775.85 (n = 29)82.85 (n = 22)63.6
6 (n = 40)70.075.06 (n = 35)82.96 (n = 26)61.5
7 (n = 42)73.87 (n = 37)81.17 (n = 29)58.6
Least8 (n = 50)70.08 (n = 44)75.08 (n = 39)56.4
Table 12.

Percent of service members with low ImPACT scores matched by low scores on other CNTs by low score level. Each row is an ImPACT low score level, from most (1) to least (3) poor performance; the values in row k are the cumulative percent of those SMs matched at the other CNT's low score levels k (the same rank) through its least poor level.

Poor ImPACT performance matched by ANAM (ANAM low score levels k–7):
Most 1 (n = 9): 22.2, 33.3, 44.4, 55.6, 66.7, 66.7, 88.9
2 (n = 16): 31.3, 43.8, 50.0, 56.3, 56.3, 81.3
Least 3 (n = 30): 43.3, 46.7, 53.3, 60.0, 76.7

Poor ImPACT performance matched by CNSVS (CNSVS low score levels k–8):
Most 1 (n = 3): 66.7, 66.7, 66.7, 66.7, 66.7, 66.7, 66.7, 66.7
2 (n = 14): 57.1, 57.1, 57.1, 57.1, 64.3, 64.3, 85.7
Least 3 (n = 27): 51.9, 51.9, 51.9, 59.3, 63.0, 81.5

Poor ImPACT performance matched by CogState (CogState low score levels k–5):
Most 1 (n = 8): 75.0, 75.0, 75.0, 75.0, 87.5
2 (n = 15): 60.0, 73.3, 73.3, 80.0
Least 3 (n = 28): 60.7, 67.9, 75.0

Table 13 summarizes the intercorrelations between scores from the two CNTs in each pair. Most of the correlations were small to moderate. For example, the correlations at the 75th percentile ranged from 0.247 to 0.515, while the highest correlations ranged from 0.481 to 0.715. Squaring these coefficients reveals that the most strongly associated scores between any two CNTs have only 51.1% shared variance (0.715² × 100 = 51.1), while at least three quarters of score pairings have 26.5% or less shared variance (0.515² × 100 = 26.5).
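The shared variance figures above follow directly from the coefficient of determination, r²; a minimal check, using the correlation values taken from Table 13:

```python
def shared_variance_pct(r):
    """Percent of variance shared by two measures with correlation r (r^2 x 100)."""
    return r ** 2 * 100

# Strongest between-CNT correlation (CogState-CNSVS maximum)
print(round(shared_variance_pct(0.715), 1))  # -> 51.1
# Largest 75th-percentile between-CNT correlation
print(round(shared_variance_pct(0.515), 1))  # -> 26.5
```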

Table 13.

Summary of intercorrelations between individual CNT scores between each CNT pair

Statistic         ANAM-     ANAM-     ANAM-     CogState-  CogState-  CNSVS-
                  CogState  CNSVS     ImPACT    CNSVS      ImPACT     ImPACT
                  (n = 74)  (n = 66)  (n = 67)  (n = 70)   (n = 66)   (n = 63)
Mean              0.329     0.389     0.301     0.393      0.395      0.208
SD                0.193     0.115     0.149     0.164      0.080      0.107
Minimum           0.012     0.037     0.029     0.011      0.219      0.017
25th percentile   0.171     0.310     0.196     0.311      0.370      0.124
Median            0.359     0.405     0.276     0.400      0.399      0.212
75th percentile   0.470     0.457     0.391     0.515      0.456      0.247
Maximum           0.645     0.680     0.585     0.715      0.538      0.481

Table 14 summarizes the intercorrelations within each of the CNTs. As with the correlations between CNTs, most of those within CNTs were small to moderate, but there were some strong correlations at the upper end of the distribution. For example, the correlations at the 75th percentile within CNTs ranged from 0.414 to 0.616, while the highest correlations ranged from 0.426 to 0.993. Squaring these correlations shows that the most strongly associated scores within one of the CNTs (CNSVS) had 98.6% shared variance, but for each of the other three CNTs the maximum shared variance between scores was no more than 57.0% (0.755² × 100 = 57.0). Moreover, 75% of the correlations within all of the CNTs had at most 37.9% shared variance (0.616² × 100 = 37.9). Even on CNSVS, despite the extremely strong maximum correlation (0.993), 75% of the correlations within the battery were 0.443 or smaller, with no more than 19.6% shared variance between scores.

Table 14.

Summary of intercorrelations between individual CNT scores within each CNT

Statistic         ANAM       CogState   CNSVS      ImPACT
                  (n = 227)  (n = 226)  (n = 204)  (n = 207)
Mean              0.478      0.425      0.438      0.359
SD                0.133      0.227      0.175      0.065
Minimum           0.212      0.183      0.305      0.276
25th percentile   0.403      0.244      0.328      0.304
Median            0.464      0.389      0.388      0.370
75th percentile   0.587      0.616      0.443      0.414
Maximum           0.755      0.704      0.993      0.426

Discussion

We compared four ostensibly similar, commercially available CNTs by examining agreement across a range of empirically determined low score levels. These levels were developed through base rate analyses that identified various numbers of low scores at three different cutoffs, determining which performance levels were rare in the control group and which best distinguished SMs with acute concussion from nonconcussed controls. Because the CNTs purport to measure similar cognitive constructs, we expected them to detect poor cognitive functioning within individuals at similar levels (i.e., high levels of agreement). We found that more service members in the mTBI group had low scores on each of the CNTs than controls and that, at the worst performance levels (those characterized by multiple low scores), the association between poor performance and mTBI was strongest. We also found that most SMs who performed at the worst level (low score level 1) on any given CNT also had low scores somewhere within the low score spectrum on the other CNT they completed. However, performing at the worst level on one CNT was often not reflected by performance at the worst level on the other.

There are a number of possible explanations for these findings. First, the CNTs generate scores in different measurement units. ANAM uses throughput (i.e., a combined speed and accuracy score) as the primary outcome measure for each subtest, although two of the subtests, Simple Reaction Time and Simple Reaction Time (repeated), are purely reaction time measures. CogState uses reaction time alone to calculate standard scores for three of its four subtests and accuracy alone for the fourth (One Card Learning). CNSVS and ImPACT use combinations of speed and accuracy measures from various subtests to calculate index (CNSVS) or domain (ImPACT) scores.
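To illustrate how the measurement units differ, a throughput-style score can be operationalized as correct responses per minute of response time. This is a generic sketch of the idea, not any vendor's exact scoring formula:

```python
def throughput(n_correct, n_trials, mean_rt_ms):
    """Generic throughput sketch: accuracy-weighted responses per minute.
    (Illustrative only; each CNT's actual formula is proprietary.)"""
    accuracy = n_correct / n_trials
    return accuracy * 60_000 / mean_rt_ms  # responses per minute, scaled by accuracy

# Two examinees with identical accuracy but different speeds
# receive different throughput scores:
fast = throughput(n_correct=38, n_trials=40, mean_rt_ms=450)  # ~126.7
slow = throughput(n_correct=38, n_trials=40, mean_rt_ms=900)  # ~63.3
```

A pure reaction time metric, by contrast, would ignore the accuracy term entirely, which is one reason scores from different CNTs are not directly interchangeable.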

Second, each CNT uses different normative data. Some of the reference groups are stratified by age and gender (ANAM), while others are stratified by age only (CogState and CNSVS). ANAM has military-specific norms, while the other CNTs have civilian norms. Furthermore, there are large differences in the sizes of the normative samples. The version of ANAM4 we used in this study has normative data from 107,662 US military service members, whereas the manual for the online edition of ImPACT reports normative data from 3,780 athletes aged 19–59 (ImPACT Applications, 2011; Vincent, Roebuck-Spencer, Gilliland, & Schlegel, 2012). It is not clear from the available published literature or their manuals how many people were used to derive the normative data for CogState and CNSVS.

Third, the CNTs assess the various cognitive domains in different ways: some use different subtests to measure similar cognitive domains, while others use similarly structured subtests to measure different cognitive domains. For example, ANAM, CNSVS, and ImPACT all have a subtest that is very similar to the WAIS-IV Coding subtest, which is used to calculate the WAIS-IV's well established Processing Speed Index (PSI). However, ANAM uses its versions (Code Substitution and Code Substitution Delayed) primarily as measures of attention and memory; CNSVS combines the accuracy score from its version (Symbol Digit Coding) with several other subtests to calculate the Psychomotor Speed and Processing Speed domains; and ImPACT uses speed and accuracy scores from its version (Symbol Match), along with scores from other subtests, to calculate the Word Memory and Reaction Time domains.

Fourth, each CNT has a different number of scores (see Table 1). Research has shown that as the number of scores in a battery increases, the probability of obtaining at least one low score increases (Crawford, Garthwaite, & Gault, 2007; Ingraham & Aiken, 1996; Schretlen, Testa, Winicki, Pearlson, & Gordon, 2008). Using Monte Carlo simulation, Crawford and colleagues estimated that 18.5% of individuals in a population would be expected to have at least one abnormally low score on a hypothetical battery of four uncorrelated tests (Crawford et al., 2007). That percentage increased to 26.5% with six tests and to 40.1% with 10 tests. This suggests that participants were more likely to have some low scores merely by chance on a larger battery like CNSVS than on a smaller one like CogState, although the extent to which tests are correlated will mitigate this somewhat.
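Under the simplifying assumptions of uncorrelated tests and a fixed cutoff (here, scores below the 5th percentile counted as abnormally low), the figures reported by Crawford and colleagues follow directly from the binomial complement:

```python
def p_at_least_one_low(n_tests, cutoff=0.05):
    """Probability of at least one abnormally low score (below the cutoff
    percentile) across n_tests mutually uncorrelated tests."""
    return 1 - (1 - cutoff) ** n_tests

for n in (4, 6, 10):
    print(n, round(p_at_least_one_low(n) * 100, 1))
# -> 18.5%, 26.5%, and 40.1%, matching the simulation-based estimates
```

Correlated tests lower these probabilities, which is why the independence case is an upper bound rather than an exact figure for real batteries.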

Finally, the domains that these CNTs tend to assess, and that are most influenced by mTBI, are themselves measured with limited reliability. Previous research on these CNTs found that the test–retest reliability of each was generally less than desirable (Cole et al., 2013). The intercorrelations between CNTs that we examined in this study align with those earlier findings. Therefore, it is possible that the variability in low score agreement that we observed between CNTs is an extension of these reliability issues. That is, there may be less than desirable reliability between CNTs as well as within CNTs, driven by instability in the domains they measure, in lieu of or in addition to other psychometric differences. Furthermore, the intercorrelations within CNTs are generally small to moderate and not much higher than the intercorrelations between CNTs. This could reflect poor reliability within CNTs, but it could also reflect differences between the domains assessed within each CNT in addition to lower test–retest reliability. However, determining the extent to which the lack of agreement between CNTs is due to poor test–retest reliability versus other psychometric differences (e.g., assessing different domains, or assessing the same domain differently) requires further study that is beyond the scope of this paper.
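One classical way to frame this reliability ceiling is Spearman's correction for attenuation: the observed correlation between two imperfectly reliable measures cannot exceed the geometric mean of their reliabilities. A sketch with hypothetical numbers (the reliability coefficients below are illustrative, not the CNTs' actual values):

```python
import math

def max_observed_r(rel_x, rel_y):
    """Upper bound on the observable correlation between two measures,
    given their test-retest reliabilities: sqrt(rel_x * rel_y)."""
    return math.sqrt(rel_x * rel_y)

def disattenuate(r_obs, rel_x, rel_y):
    """Estimated correlation between the underlying constructs."""
    return r_obs / max_observed_r(rel_x, rel_y)

# Hypothetical: two CNT scores with test-retest reliability of 0.60 each
# can correlate at most ~0.60, even if they tap the same construct.
ceiling = max_observed_r(0.60, 0.60)     # 0.60
true_r = disattenuate(0.40, 0.60, 0.60)  # ~0.67
```

On this framing, a modest observed between-CNT correlation is compatible with a substantially stronger relationship between the underlying domains, which is why separating reliability effects from genuine content differences requires dedicated study.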

A strength of this study is that we compared the CNTs in a way that is closer to how they are actually used clinically; we are not aware of any other study that has compared CNTs in this way. Although the CNTs' developers provide no guidance about how to interpret their results beyond highlighting low scores, we believe that our use of base rate analysis to identify a hierarchy of performance levels, based on the prevalence of low scores and on how strongly each CNT distinguished SMs with acute mTBI from controls, reasonably combined two of the major pieces of information a clinician would use to interpret CNT performance: how low individual scores were relative to the normative distribution and how many low scores there were across the battery. We acknowledge that there are other factors a clinician might also consider, such as the specific cognitive domains the low scores represent, and that we cannot account for all of the possible low score combinations that some clinicians might deem meaningful. Doing so would produce an overwhelming amount of data, beyond the scope of a single article. Furthermore, because of the uniqueness of the CNTs with respect to how they define and measure specific domains, we could not be certain that any nominally similar domains on different CNTs were indeed the same; individual domain-to-domain comparisons might therefore show rather low agreement. We therefore decided it would be more straightforward, and more useful, to begin by comparing overall agreement between CNTs.

This overall battery approach is important because prospective users of CNTs need some way of determining whether different CNTs would lead to similar clinical conclusions; that is, whether they could be substituted for one another with little to no effect on clinical decision making. Once the degree of clinical equivalence is known, prospective users can weigh other factors, such as cost and ease of use, to determine which CNT best serves their needs. The equivalence or noninferiority analytical design that we used as a guide for this study applies within this context. Sometimes new drugs and devices are developed specifically to reduce the number and severity of side effects of older ones, but the new drug or device must be shown to be at least as effective as the older one, or the reduction in side effects becomes a moot point. The same logic applies to monetary costs: if a new drug is effective but extremely expensive, an equivalence study may show that an older competing drug is clinically effective enough to warrant continued use by patients who could not afford the new drug and would otherwise have to forego treatment if the older drug were removed from the market. The major question motivating this study was the US DoD's desire for evidence to help it determine whether it chose the best possible CNT for its post-mTBI and pre-deployment baseline assessments when it decided to use ANAM4 TBI-MIL.

Another strength of this analysis is that it involved a true "head to head" comparison of CNTs: we compared data from multiple CNTs administered to the same participants, thereby eliminating between-subject variability as a confounder. Few such comparisons of CNTs have been performed. The data presented in this paper can serve as a basis for designing future studies with larger numbers of participants with low scores, so that agreement levels can be estimated with more precision and adequate statistical power. Furthermore, the base rates of low scores we used in the first step of our analysis are useful in their own right and could inform the development of decision-making algorithms for clinicians and researchers using any of the CNTs we examined.

An important limitation of this study is that relatively small numbers of SMs in each CNT pair performed at the poorest levels. Because of this, we did not separately compare agreement among SMs in the mTBI and control groups, and it is possible that agreement levels differ between the groups. For now, our results should be treated as preliminary. Another limitation is that we could not explore all of the possible psychometric differences that may have contributed to differences in agreement between the CNTs; there are so many candidate factors that multiple studies would be required, and even describing them in detail is beyond the scope of this paper. A final limitation is that we used data from military service members, so our findings may not generalize to other populations that CNTs are used to assess, especially high school and college athletes.

Conclusions

Our study suggests that four ostensibly similar, commercially available CNTs used for assessing patients with mTBI are sensitive in varying degrees to the effects of mTBI. It also suggests that most individuals who perform at the worst level on one CNT will also have low scores on the other CNTs they complete, although not necessarily at an equally poor level. Our findings represent a starting point for future research rather than a definitive statement about the clinical utility or superiority of any of the CNTs we examined. Future investigation is needed to determine which aspects of these CNTs are responsible for the imperfect agreement, and which CNTs better predict post-mTBI symptoms and recovery trajectory. The psychometric comparability and clinical utility of these CNTs are not well understood, and until such studies are done it will not be possible to judge which CNT, if any, is superior to the others. Until more evidence emerges, these CNTs should be used cautiously and only as one source of information among many other types of clinical assessment; none should be used as a definitive or standalone diagnostic tool.

Funding

The study was internally funded by the Defense and Veterans Brain Injury Center through contract W91-YTZ-13-C-0015 with General Dynamics Health Solutions.

Conflict of Interest

The authors have no financial interests to disclose. The views expressed herein are those of the authors and do not reflect the official policy of the Department of the Army, Department of the Navy, Department of Defense, or the US government.

References

Allen, L. M., Green, P., Cox, D. R., & Conder, R. L. (2006). Computerized assessment of response bias (CARB): A manual for computerized administration, reporting, and interpretation of CARB running under the CogShell assessment environment. Durham, NC: CogniSyst, Inc.

Arrieux, J. P., Cole, W. R., & Ahrens, A. P. (2017). A review of the validity of computerized neurocognitive assessment tools in mild traumatic brain injury assessment. Concussion, 2, CNC31.

Bauer, R. M., Iverson, G. L., Cernich, A. N., Binder, L. M., Ruff, R. M., & Naugle, R. I. (2012). Computerized neuropsychological assessment devices: Joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Archives of Clinical Neuropsychology, 27, 362–373.

Broglio, S. P., Ferrara, M. S., Macciocchi, S. N., Baumgartner, T. A., & Elliott, R. (2007). Test-retest reliability of computerized concussion assessment programs. Journal of Athletic Training, 42, 509.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.

Cole, W. R., Arrieux, J. P., Schwab, K., Ivins, B. J., Qashu, F. M., & Lewis, S. C. (2013). Test–retest reliability of four computerized neurocognitive assessment tools in an active duty military population. Archives of Clinical Neuropsychology, 28, 732–742.

Cole, W. R., Arrieux, J. P., Dennison, E. M., & Ivins, B. J. (2016). The impact of administration order in studies of computerized neurocognitive assessment tools (NCATs). Journal of Clinical and Experimental Neuropsychology, 39, 35–45.

Cole, W. R., Arrieux, J. P., Ivins, B. J., Schwab, K. A., & Qashu, F. M. (2017). A comparison of four computerized neurocognitive assessment tools to a traditional neuropsychological test battery in service members with and without mild traumatic brain injury. Archives of Clinical Neuropsychology, 33, 102–119.

CNS Vital Signs, LLC. (2014). CNS Vital Signs interpretation guide. Morrisville, NC: CNS Vital Signs.

Crawford, J. R., Garthwaite, P. H., & Gault, C. B. (2007). Estimating the percentage of the population with abnormally low scores (or abnormally large score differences) on standardized neuropsychological test batteries: A generic method with applications. Neuropsychology, 21, 419–430.

Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury. (2015). Info memo: First quarter calendar year 2015 report on the Department of Defense traumatic brain injury neurocognitive assessment tool program.

Feinstein, A. R., & Cicchetti, D. V. (1990). High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 43, 543–549.

Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40, 532–538.

Friedman, L. M., Furberg, C. D., & DeMets, D. L. (2010). Fundamentals of clinical trials (4th ed.). New York: Springer.

Gardner, A., Shores, E. A., Batchelor, J., & Honan, C. A. (2012). Diagnostic efficiency of ImPACT and CogSport in concussed rugby union players who have not undergone baseline neurocognitive testing. Applied Neuropsychology: Adult, 19, 90–97.

HeadFirst. (2018, February 9). What is baseline and post injury testing? Retrieved from https://www.myheadfirst.com/what-is-baseline-and-post-injury-testing.

IBM Corp. (2013). IBM SPSS Statistics for Windows, version 22.0. Armonk, NY: IBM Corp.

ImPACT Applications, Inc. (2011). Immediate Post-Concussion Assessment Testing (ImPACT) test: Technical manual, online ImPACT 2007–2012. San Diego, CA: ImPACT Applications.

Ingraham, L. J., & Aiken, C. B. (1996). An empirical approach to determining criteria for abnormality in test batteries with multiple measures. Neuropsychology, 10, 120–124.

Ivins, B. J., Lange, R. T., Cole, W. R., Kane, R., Schwab, K., & Iverson, G. L. (2015). Using base rates of low scores to interpret the ANAM4 TBI-MIL battery following mild traumatic brain injury. Archives of Clinical Neuropsychology, 30, 26–38.

Kianifard, F., & Gallo, P. P. (1995). Poisson regression analysis in clinical research. Journal of Biopharmaceutical Statistics, 5, 115–129.

Meehan, W. P., d'Hemecourt, P., Collins, C. L., Taylor, A. M., & Comstock, R. D. (2012). Computerized neurocognitive testing for the management of sport-related concussions. Pediatrics, 129, 38–44.

Nelson, L. D., LaRoche, A. A., Pfaller, A. Y., Lerner, E. B., Hammeke, T. A., Randolph, C., et al. (2016). Prospective, head-to-head study of three computerized neurocognitive assessment tools (CNTs): Reliability and validity for the assessment of sport-related concussion. Journal of the International Neuropsychological Society, 22, 24–37.

Nelson, L. D., Furger, R. E., Gikas, P., Lerner, E. B., Barr, W. B., Hammeke, T. A., et al. (2017). Prospective, head-to-head study of three computerized neurocognitive assessment tools Part 2: Utility for assessment of mild traumatic brain injury in emergency department patients. Journal of the International Neuropsychological Society, 23, 293–303.

Schatz, P., & Putz, B. O. (2006). Cross-validation of measures used for computer-based assessment of concussion. Applied Neuropsychology, 13, 151–159.

Schretlen, D. J., Testa, S. M., Winicki, J. M., Pearlson, G. D., & Gordon, B. (2008). Frequency and bases of abnormal performance by healthy adults on neuropsychological testing. Journal of the International Neuropsychological Society, 14, 436–445.

Short, P., Cernich, A., Wilken, J. A., & Kane, R. L. (2007). Initial construct validation of frequently employed ANAM measures through structural equation modeling. Archives of Clinical Neuropsychology, 22S, S63–S77.

United States Congress. (2008). Public Law 110-181, National Defense Authorization Act for Fiscal Year 2008. Retrieved from https://www.congress.gov/bill/110th-congress/house-bill/4986.

Vincent, A. S., Roebuck-Spencer, T., Gilliland, K., & Schlegel, R. (2012). Automated neuropsychological assessment metrics (v4) traumatic brain injury: Military normative data. Military Medicine, 177, 256–269.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)