Abstract

The Modified Six Elements Test (MSET) assesses several executive functions. We examined whether an adapted scoring method is appropriate for discriminating between brain-injured persons with and without executive deficits. An MSET was administered to 70 participants with acquired brain injury in the chronic phase. The group was divided into individuals with and without executive impairments on the basis of several other executive tests. The discriminative value of both the conventional raw score and the adapted scoring method was evaluated using receiver operating characteristic analyses. Both scoring methods discriminated significantly between persons with impaired and unimpaired executive functions (raw score: area under the curve, AUC = 0.703, p = .004; adapted score: AUC = 0.780, p < .001). Only the adapted scoring method proved sensitive (81%) and specific (67%) within a clinically useful range, and within this range an acceptable cut-off score could be determined. Altogether, the proposed MSET scoring index is a potentially clinically useful contribution to the measurement of executive functions.

Introduction

Brain-injured persons with executive deficits often have difficulties formulating and achieving goals, due to deficits in planning and strategy application (Damasio, 1995; Duncan, Emslie, Williams, Johnson, & Freer, 1996; Levine et al., 2000; Lezak, 1982; Stuss & Levine, 2002). As a consequence, unusual and unstructured situations become particularly difficult to handle. Most neuropsychological tests, however, consist of structured and closed tasks, making reliable assessment of executive functions challenging.

A widely used instrument for assessing executive functions is the Behavioural Assessment of the Dysexecutive Syndrome (BADS; Wilson, Alderman, Burgess, Emslie, & Evans, 1996). The aim of this test battery is to assess executive problems in a way that simulates aspects of daily living, by using unstructured and open tasks. The BADS consists of six subtests measuring different executive processes. One of these subtests is the Modified Six Elements Test (MSET), a measure of multitasking ability that draws on multiple skills such as planning, working memory, prospective memory, rule learning, strategy application, and response monitoring. The MSET has proved to be a good predictor of problems with planning and goal-directed behavior (Alderman, Burgess, Knight, & Henman, 2003; Burgess, Alderman, Evans, Emslie, & Wilson, 1998) and has consistently been found to be one of the most sensitive subtests of the BADS (Bennett, Ong, & Ponsford, 2005; Burgess et al., 1998). Renison, Ponsford, Testa, Richardson, and Brownfield (2012) found that the MSET could reliably predict everyday executive performance in individuals with traumatic brain injury, suggesting that the MSET is an ecologically valid tool. In that study, everyday executive difficulties were measured with the Dysexecutive Questionnaire (Wilson et al., 1996), a 20-item checklist on which cognitive, behavioral, and emotional aspects of executive difficulties were reported by both participants and independent raters. In a recent study, Emmanouel, Kessels, Mouza, and Fasotti (2014) showed that the MSET was the most sensitive subtest of the BADS in discriminating between individuals with anterior and posterior lesions. The MSET has been used to evaluate executive functions in various groups with different brain disorders, for example, persons with mild cognitive impairment and Alzheimer's dementia (Espinosa et al., 2009), post-concussive symptoms (Chan, Hoosain, Lee, Fan, & Fong, 2003), substance abuse (Fernandez-Serrano, Perez-Garcia, Schmidt Rio-Valle, & Verdejo-Garcia, 2010), schizophrenia (Liu et al., 2011), and traumatic brain injury (Bennett et al., 2005; Chan & Manly, 2002; Manly, Hawkins, Evans, Woldt, & Robertson, 2002).

The MSET (Wilson et al., 1996) consists of three tasks (simple arithmetic, written picture naming, and dictation), each of which comprises two parts (subtasks A and B). Participants are instructed that they have 10 min to work on at least a part of each of these six subtasks. However, they are also told that they are not allowed to switch between the two parts (subtasks) of the same task. Thus, a participant may not switch from subtask A of the arithmetic task directly to subtask B of arithmetic; first, one of the other tasks, written picture naming or dictation, has to be dealt with (Burgess et al., 1998). This rule requires participants to engage in task switching (i.e., multitasking) throughout the test. An advantage of the MSET over other conventional executive function tests is its highly unstructured character, which demands a considerable amount of planning and monitoring behavior.

However, the MSET has some limitations as well. Raw scores have a limited range from 0 to 6, and the standardized profile scores, obtained in accordance with the manual of the BADS, range from 0 to 4. In previous studies using the MSET, brain-injured persons obtained mean profile scores of 3 or more (Gouveia, Brucki, Malheiros, & Bueno, 2007; Manly et al., 2002). Individuals with mild executive deficits easily perform at the maximum level, indicating that mild executive impairments go undetected by the MSET due to ceiling effects. Furthermore, the dictation task of the MSET requires a tape recorder to record a personal story told by the participant. Not only are conventional tape recorders becoming increasingly obsolete, but participants have also reported feeling embarrassed about talking about personal topics during this task (Gouveia et al., 2007).

In a recent study (Bertens, Fasotti, Egger, Boelen, & Kessels, 2014), the reliability of an adapted version of the MSET was examined in 60 healthy participants. In this version, the dictation subtask was replaced by a sorting task. Moreover, a revised scoring method was proposed in which the distribution of time spent on the subtasks was also taken into account. In addition, a parallel version of the MSET was added, allowing the measurement of executive functions over the course of time. Both the test–retest and parallel-form reliability were found to be adequate. However, the validity and diagnostic utility of the adapted scoring method in brain-injured individuals have not yet been investigated.

Here, we examine the diagnostic utility of this recently proposed scoring method (Bertens et al., 2014) for the MSET in persons with brain injury. Moreover, the adapted scoring method is compared with the conventional scoring method described in the BADS manual (Wilson et al., 1996). We also examine to what extent these scoring methods discriminate between individuals with and without executive deficits, classified on the basis of their performance on other well-established executive tests. Although this comparison does not address the test's ecological validity, that is, its relation with everyday executive problems, it allows a direct comparison of the sensitivity and specificity of both scoring methods. Furthermore, with the newly proposed scoring method, we aim to reduce ceiling effects, a highly relevant issue in clinical practice. In addition, the convergent validity of both scoring methods is examined by calculating their correlations with other executive tests.

Methods

Participants

The data of the 70 participants included in the current study were collected as part of the recruitment procedure for a larger treatment study, approved by the Medical Review Ethics Committee region Arnhem-Nijmegen (Bertens, Fasotti, Boelen, & Kessels, 2013). All participants had an acquired brain injury of non-progressive nature, such as traumatic brain injury, stroke, or hypoxia. Minimal time since injury onset was 3 months. Other eligibility criteria were: age between 18 and 70 years, living independently at home, and being fluent in the Dutch language. Participants were recruited from the outpatient department of the Rehabilitation Medical Centre Groot Klimmendaal in Arnhem, the Netherlands, and from the outpatient rehabilitation clinic for brain-injured individuals and the department of Neurorehabilitation of the Sint Maartenskliniek in Nijmegen, the Netherlands. Exclusion criteria were severe psychiatric problems, current or past substance abuse, neurodegenerative disorders, and severe cognitive comorbidity (e.g., aphasia, dementia). Recruitment was based on assessment by clinicians of the participating centers, who excluded all potential participants not meeting the above-mentioned criteria. Executive functions were assessed using six other traditional executive tests (see below).

Table 1 shows the demographic characteristics of the participants. All participants were Caucasian. Level of education was rated using seven categories based on the Dutch educational system, ranging from 1 (less than primary school) to 7 (university degree) (Duits & Kessels, in press). For descriptive purposes, these levels of education were converted to years of education, as is customary in the Anglo-Saxon world (see also Hochstenbach, Mulder, Van Limbeek, Donders, & Schoonderwaldt, 1998).

Table 1.

Characteristics of the participants with and without executive impairments

Characteristic | Executively impaired group | Executively unimpaired group
N | 31 | 39
Age (mean (SD)) | 51.7 (±10.8) | 47.5 (±14.3)
Sex (N male/female (%)) | 17/14 (55%/45%) | 25/14 (64%/36%)
Education level (mode (range))** | 5 (3–7) | 5 (4–7)
Education (years) (mean (SD))* | 11.8 (±3.1) | 14.3 (±3.4)
Time since brain injury (months) (mean/median (SD; range)) | 34.1/15.0 (±67.0; 4–324) | 19.5/11.5 (±23.1; 3–89)
Etiology (N) (TBI/stroke/other (%)) | 8/22/1 (26%/71%/3%) | 13/24/2 (33%/62%/5%)

Notes: *p < .01; **p < .001.

Neuropsychological Tests

The conventional MSET, a subtest of the executive test battery BADS, consists of three open-ended tasks, namely simple arithmetic, dictation, and written picture naming. Each of these tasks consists of two parts (subtasks A and B). Participants are instructed to execute at least a part of each of these six subtasks within 10 min; however, switching between the two subtasks of the same task is not allowed. In the adapted MSET used in the present study, the dictation task was replaced by a sorting task in which plastic wall plugs (part A) and small cable ties (part B) had to be sorted by color (three colors) (Bertens et al., 2014). The test was administered according to the instructions of the Dutch version (Krabbendam & Kalff, 1998) of the BADS test manual. The total administration time was ∼15 min (5 min for the instructions and 10 min for the execution). For each participant, the number of executed subtasks and the number of rule breaks were scored, as well as the amount of time spent on each subtask. By subtracting the rule breaks from the number of executed subtasks, raw scores were obtained (range 0–6), which were also converted into profile scores (range 0–4) in accordance with the BADS manual. In addition, an adapted score, aimed at reducing ceiling effects in mildly impaired individuals (Bertens et al., 2014), was calculated using the following formula:

$$\text{adapted MSET score}=\frac{\text{time}_{\text{longest subtask}}\,(\text{s})-\text{time}_{\text{shortest subtask}}\,(\text{s})}{\text{number of executed subtasks}-\text{rule breaks}}\tag{1}$$

If the number of rule breaks equals or exceeds the number of executed subtasks, the denominator is set to 1 (i.e., the denominator can never be 0 or negative). This scoring index takes the distribution of time over the six subtasks into account by subtracting the total time spent on the shortest subtask from the total time spent on the longest subtask. A more uniform distribution of time across the subtasks indicates better multitasking ability; hence, a lower score indicates better planning performance.
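To make the scoring concrete, the following minimal Python sketch computes both scores from a list of per-subtask times; the function name, the clamping of the raw score at 0, and the handling of unattempted subtasks are illustrative assumptions rather than part of the published procedure:

```python
def mset_scores(subtask_times, rule_breaks):
    """Illustrative computation of the conventional raw and adapted MSET scores.

    subtask_times: seconds spent on each subtask the participant attempted
        (at most six entries; whether unattempted subtasks enter the
        time-distribution term as 0 s is left to the test manual).
    rule_breaks: number of rule violations during the 10-min test.
    """
    executed = len(subtask_times)               # number of executed subtasks
    raw_score = max(0, executed - rule_breaks)  # conventional raw score, range 0-6

    # Denominator of Equation (1), floored at 1 so it is never 0 or negative.
    denominator = max(1, executed - rule_breaks)
    adapted_score = (max(subtask_times) - min(subtask_times)) / denominator
    return raw_score, adapted_score

# Example: five subtasks attempted, uneven time distribution, one rule break.
raw, adapted = mset_scores([200, 150, 120, 80, 50], rule_breaks=1)
print(raw, adapted)  # 4 37.5
```

A lower adapted score thus reflects both fewer rule breaks and a more even spread of time across the subtasks.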

In addition to the MSET, six other neuropsychological tests were administered to assess the main subdomains of executive function (Lezak, Howieson, Bigler, & Tranel, 2012). To assess planning, the Zoo Map test (a BADS subtest) was administered. Response generation was measured with the Category Fluency test and the Letter Fluency test. Response inhibition was assessed with the Go/No-go task from the computerized TAP 2.1 (Zimmermann & Fimm, 2007). The Brixton Spatial Anticipation test (Burgess & Shallice, 1997) was used to measure task switching, and Letter–Number Sequencing (a subtest of the WAIS-III; Wechsler, 1997) was administered to assess working memory. Scores on these tests were expressed as various standardized scores (i.e., percentile scores, T-scores), in accordance with the respective test manuals. All tests were administered by trained test assistants.

Procedure and Analyses

Based on the six executive tests, the participants were divided into an executively impaired group and an executively unimpaired group. The criteria for executive impairment were: (a) a standard score of at least 1.5 SD below the normative mean on two or more of the executive tests; (b) a standard score between 1 and 1.5 SD below the normative mean on at least four of these tests; or (c) a standard score of at least 1.5 SD below the normative mean on one executive test, combined with a standard score between 1 and 1.5 SD below the normative mean on at least two of the remaining executive tests (Bertens et al., 2013).
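Expressed procedurally, and assuming each test yields a standard (z) score on which negative values indicate below-average performance, the grouping rule can be sketched as follows (the handling of scores exactly at the 1.5 SD boundary is our own assumption):

```python
def executively_impaired(z_scores):
    """Apply the grouping criteria (after Bertens et al., 2013) to six z-scores."""
    severe = sum(z <= -1.5 for z in z_scores)       # at least 1.5 SD below the mean
    mild = sum(-1.5 < z <= -1.0 for z in z_scores)  # between 1 and 1.5 SD below
    return severe >= 2 or mild >= 4 or (severe >= 1 and mild >= 2)

# Example: one severe and two mild deficits across the six tests -> impaired.
print(executively_impaired([-1.7, -1.2, -1.1, -0.3, 0.4, 0.8]))  # True
```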

For statistical analyses, IBM SPSS 20.0 was used. Alpha was set at 0.05 for all analyses, and all tests were two-tailed. A multivariate analysis of variance (general linear model) was performed to compare MSET performance between the two groups (executively impaired, executively intact), with both the raw and the adapted MSET scores as dependent variables. Demographic characteristics were compared between the groups using t-tests, or nonparametric tests for nominal and ordinal variables (sex distribution and education level). To control for possible effects of demographic differences, a multivariate analysis of covariance was performed where applicable.

The discriminative value of the MSET for executive deficits was evaluated by calculating receiver operating characteristic (ROC) curves for both the raw score and the adapted score separately. The area under the curve (AUC) indicates the discriminative power of the test, which varies between 0.5 (no discriminative power) and 1.0 (maximum discriminative power). Within this analysis, an optimal cut-off point was determined, fulfilling the criteria of a good sensitivity (>80%) and an acceptable specificity (>60%) rate (see Blake, McKinney, Treece, Lee, & Lincoln, 2002; Kessels, Mimpen, Melis, & Olde Rikkert, 2009; Oosterman, Molenveld, Olde Rikkert, & Kessels, 2010).
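The study analyses were run in SPSS; purely as an illustration, an equivalent ROC analysis with cut-off screening could be carried out in Python with scikit-learn along the following lines (the data shown are hypothetical):

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 0/1 impairment labels and adapted MSET scores. Higher
# adapted scores indicate worse performance, so they can serve directly as
# the decision variable; raw scores would first need their sign flipped.
impaired = [1, 1, 1, 1, 0, 0, 0, 0, 0]
adapted = [60.0, 45.5, 22.0, 19.0, 30.0, 18.0, 12.5, 9.0, 4.0]

fpr, tpr, thresholds = roc_curve(impaired, adapted)
print("AUC =", roc_auc_score(impaired, adapted))

# Screen candidate cut-offs against the study's criteria:
# sensitivity > 80% and specificity (= 1 - false-positive rate) > 60%.
for threshold, sensitivity, fp_rate in zip(thresholds, tpr, fpr):
    if sensitivity > 0.80 and (1 - fp_rate) > 0.60:
        print(f"cut-off {threshold:.2f}: "
              f"sensitivity {sensitivity:.0%}, specificity {1 - fp_rate:.0%}")
```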

To examine the convergent validity of the MSET, Pearson correlation coefficients (r) were calculated between both the raw and the adapted MSET scores and the other executive tests. For this analysis, the scores on the other executive tests were expressed as raw scores, with higher scores indicating better test performance. For this reason, the Brixton scores were expressed as the number of correct items instead of the number of errors, and the Go/No-go scores were multiplied by −1.
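A brief sketch of this direction alignment (all values and variable names hypothetical):

```python
from scipy.stats import pearsonr

n_items = 55                                # assumed number of Brixton items
brixton_errors = [12, 30, 8, 22, 15]
gonogo_scores = [4.1, 7.3, 3.2, 6.0, 5.5]   # higher = worse before flipping
mset_raw = [6, 3, 6, 4, 5]

brixton_correct = [n_items - e for e in brixton_errors]  # correct items, not errors
gonogo_flipped = [-s for s in gonogo_scores]             # multiplied by -1

print(pearsonr(mset_raw, gonogo_flipped))   # r and p, directions now aligned
```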

Results

Based on the six executive tests, the participants were divided into an executively impaired group (N = 31) and an executively unimpaired group (N = 39). Intergroup comparisons showed no significant difference for age [t(68) = −1.36; p = .18], but a significant difference for education level (U = 336.00; p < .001) and years of education [t(68) = 3.26, p = .002]. No differences between the groups with respect to post onset time [t(42) = −0.97, p = .34] and sex distribution were present [χ²(1) = 0.62; p = .43].

Table 2 shows the MSET scores for both the executively impaired and unimpaired groups. Significant differences on the MSET scores between the executively impaired group and the executively unimpaired group were found [F(2, 67) = 6.30, p = .003], with the executively impaired group performing worse than the executively unimpaired group (i.e., as reflected by a lower raw score and a higher adapted score). These group differences were found for both the raw scores [F(1, 68) = 9.72, p = .003] with a moderate-to-large effect size (Cohen's d = 0.71) and the adapted MSET scoring method [F(1, 68) = 11.85, p = .001] with a moderate-to-large effect size (d = 0.79). As educational levels differed significantly between the groups, a multivariate analysis of covariance was conducted with education level as covariate. This analysis showed that education level [F(2, 66) = 1.11, p = .34] did not contribute significantly, and the overall group differences remained statistically significant [F(2, 66) = 7.20, p = .001].
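For transparency, the reported effect size for the raw score can be reproduced from the means and SDs in Table 2, assuming the pooled-SD form of Cohen's d (small rounding differences aside):

$$SD_{\text{pooled}}=\sqrt{\frac{(n_1-1)\,SD_1^2+(n_2-1)\,SD_2^2}{n_1+n_2-2}}=\sqrt{\frac{30(1.6)^2+38(1.2)^2}{68}}\approx 1.39,$$

$$d=\frac{M_{\text{intact}}-M_{\text{impaired}}}{SD_{\text{pooled}}}=\frac{5.4-4.4}{1.39}\approx 0.72.$$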

Table 2.

Group differences for the raw and adapted scores on the Modified Six Elements Test (MSET)

Group | Raw MSET score, M (SD) | Range | Adapted MSET score, M (SD) | Range
Executively impaired | 4.4 (±1.6) | 0–6 | 52.4 (±50.4) | 6–211
Executively intact | 5.4 (±1.2) | 2–6 | 21.6 (±21.7) | 2–110

Notes: The raw score (in accordance with the BADS manual) ranges from 0 (lowest) to 6 (highest). The adapted MSET score is a continuous variable on which a lower score indicates better performance.

The discriminative value of both scoring methods was evaluated using ROC analyses. The ROC curve for the raw MSET score as a predictor of executive deficits showed a significant AUC (AUC = 0.703, p = .004; Fig. 1A). However, no optimal cut-off score with a sensitivity of >80% and a specificity of >60% could be determined (Table 3). Moreover, an ROC curve with an AUC ≤ 0.75 is generally considered not clinically useful (Fan, Upadhye, & Worster, 2006). The ROC analysis of the adapted MSET score as a predictor of executive deficits also showed a significant area under the curve (AUC = 0.780, p < .001; Fig. 1B). Moreover, for this scoring index, it was possible to determine an optimal cut-off score (18.63) with a sensitivity of 81% and a specificity of 67%, indicating that a score of 18.63 or higher is indicative of executive deficits (Table 4). A statistical comparison of the areas under the ROC curves of the two scoring indices (Hanley & McNeil, 1982) showed no significant difference between the AUCs (p = .35).
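For reference, the Hanley and McNeil (1982) approach estimates the standard error of an AUC as

$$SE(A)=\sqrt{\frac{A(1-A)+(n_1-1)(Q_1-A^2)+(n_2-1)(Q_2-A^2)}{n_1 n_2}},\qquad Q_1=\frac{A}{2-A},\quad Q_2=\frac{2A^2}{1+A},$$

with $n_1$ and $n_2$ the numbers of impaired and unimpaired participants; two AUCs derived from the same participants are then compared with a z statistic of the form

$$z=\frac{AUC_1-AUC_2}{\sqrt{SE_1^2+SE_2^2-2r\,SE_1\,SE_2}},$$

where r is the estimated correlation between the two areas. This is a sketch of the general method rather than the study's exact computation.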

Table 3.

Possible cut-off scores for the conventional raw MSET scoring method with the corresponding sensitivity and specificity

Cut-off point | Sensitivity (%) | Specificity (%)
≤2 | 13 | 95
≤3 | 29 | 85
≤4 | 52 | 85
≤5 | 64 | 77
Table 4.

Possible cut-off scores for the MSET adapted scoring method with the corresponding sensitivity and specificity

Cut-off point | Sensitivity (%) | Specificity (%)
17.59 | 81 | 61
18.09 | 81 | 64
18.63a | 81 | 67
20.04 | 77 | 67
21.67 | 77 | 69
22.50 | 74 | 69

Note: aOptimal cut-off score.

Fig. 1.

Receiver operating characteristic curves for the conventional MSET raw scoring method (A) and for the adapted MSET scoring method (B).


With respect to the convergent validity of both scoring methods, the expected high negative correlation was found between the raw and the adapted MSET scores (r = −.745, p < .01), with higher raw scores and lower adapted scores representing better test performance. Correlations between both MSET scoring methods and the other executive tests are reported in Table 5. Both scoring indices showed a similar pattern: the raw score correlated significantly, but moderately (cf. Cohen, 1992), with three of the six executive function tests, and the adapted score correlated significantly with four of the six tests, also in the moderate range.

Table 5.

Pearson correlations between the raw and adapted MSET scores and other executive function tests

Score | Zoo Map | LFT | CFT | LNS | Brixton | Go/No-go
Raw MSET score | 0.246 | 0.410** | 0.325** | 0.463** | 0.151 | −0.225
Adapted MSET score | −0.243 | −0.395** | −0.262* | −0.504*** | −0.146 | 0.238*

Notes: LFT = Letter Fluency Test; CFT = Category Fluency Test; LNS = Letter–Number Sequencing; Brixton = Brixton Spatial Anticipation Test.

*p < .05, **p < .01, ***p < .001 (two tailed).

Finally, we examined the occurrence of ceiling effects by inspecting the frequency of participants who obtained the maximum score. Table 6 shows the distribution of the raw MSET scores, together with the statistics of the corresponding adapted MSET scores. Forty-one of the 70 participants obtained the maximum raw score of 6 (equivalent to the maximum profile score of 4), 11 of whom were in the impaired group (n = 31) and 30 in the unimpaired group (n = 39). In both groups, the most frequent score was the maximum score. The best possible adapted MSET score (reflecting an equal distribution of time over all six subtasks without any rule breaks) is 0, a score that was not achieved by any of the participants. For the 41 participants who obtained the maximum raw score, the adapted MSET scores ranged from 2 to 51 (mean score 15.8, SD 10.2), indicating improved variability and no ceiling effect.

Table 6.

Distribution of the adapted MSET scores compared with the raw MSET scores

Raw MSET score | N | Adapted MSET score, mean (SD) | Range
 |  | 139.0 | 
 |  | 110.1 (±84.7) | 37–211
 |  | 64.2 (±32.7) | 30–110
 |  | 51.3 (±27.8) | 19–93
 |  | 27.5 (±7.1) | 21–39
6 | 41 | 15.8 (±10.2) | 2–51

Discussion

The aim of this study was to examine the diagnostic utility of an updated version of the MSET (Bertens et al., 2014) including an adapted scoring method. The results show that persons with executive function deficits performed significantly worse on the MSET than persons without executive deficits, regardless of scoring method. With respect to the discriminative value, the AUCs for the raw score and the adapted score were both statistically significant. Although the AUCs did not significantly differ, only the AUC for the adapted score was within a clinically useful range (Fan et al., 2006). An acceptable cut-off score, fulfilling the criteria of having a good sensitivity and an adequate specificity for discriminating between impaired and normal executive function, could only be determined for the adapted score.

An altered version of the MSET was used in which the dictation subtask was replaced by a sorting task, making the test more suitable for individuals with language impairments. Shallice and Burgess (1991) described the aim of the original SET as evaluating whether a participant could devise a simple plan, scheduling the subtasks (consisting of "fairly simple" activities) efficiently and keeping a check on the time. The main objective was to evaluate the application of an efficient strategy for alternating between the tasks (switching without rule breaks); performance within the subtasks themselves is not taken into account. Therefore, replacing one subtask with another simple task should have only a marginal effect on the construct measured by the test. Moreover, the dictation task has also been replaced by a sorting task in the BADS-C, the children's version of the BADS (Emslie et al., 2003).

The correspondence between both MSET scoring indices and the other executive tests is acceptable. For the adapted scoring method, significant correlations were found with four of the six executive tests; however, these correlations were weak to moderate. No significant correlations were found between the MSET and the Zoo Map test, which measures planning (see also Oosterman, Wijers, & Kessels, 2013), or the Brixton Spatial Anticipation Test, which measures shifting (Van den Berg et al., 2009). Although planning and shifting may seem important aspects of MSET performance, the aim of the MSET is to measure executive functions in an ecologically valid way, using unstructured and open-ended tasks that better mimic everyday demands. This may explain the low-to-moderate, and sometimes absent, correlations between the MSET and more traditional, structured executive tests.

A limitation of the current study is that the executive tests used to divide the participants into an executively impaired and an executively intact group were traditional executive tests. As a result, we cannot conclude that our revised scoring method also improves the test's ecological validity. One could argue that a comparison with other ecologically valid tasks (e.g., open-ended tasks that resemble real-life duties) is required to determine the convergent validity of the MSET. Examples of such tasks are the Executive Secretarial Task (Lamberts, Evans, & Spikman, 2010), the Multiple Errands Test (Alderman et al., 2003), and the Hotel Task (Manly et al., 2002). However, although these tasks are standardized assessment procedures, their psychometric properties (e.g., sensitivity and specificity) have not been examined thoroughly, and normative data are not available. More importantly, administration of these tasks is very time-consuming and may last several hours. These paradigms are therefore less suitable for implementation in clinical practice (see also Lamberts et al., 2010).

Future studies should further examine which executive aspects the MSET taps. Moreover, the ecological validity of the proposed scoring method should be investigated further, by including questionnaires that assess everyday executive difficulties, such as the Dysexecutive Questionnaire (Wilson et al., 1996) or the Executive Function Index (Spinella, 2005), or by using established everyday executive function tests. The validity of the adapted scoring method should also be investigated in other clinical populations. Another limitation of our study is that other cognitive domains were not assessed; therefore, we cannot rule out that the two groups also differed in memory or attention performance. Any such differences could have influenced the results on the executive function tasks, including the MSET.

To our knowledge, this is the first study of the MSET that takes the distribution of time into account. The conventional scoring method described in the BADS manual only deducts a profile point when individuals spend >271 s on one of the MSET subtasks (Wilson et al., 1996); it lacks a scoring rule for an unequal distribution of time across the subtasks. Distributing the allowed time equally over the six subtasks requires planning behavior and time monitoring (Mantyla, Carelli, & Forman, 2007). In the current study, time distribution was assessed by subtracting the shortest time spent on a subtask from the longest time spent on a subtask. Admittedly, this method does not capture the overall variability of the time spent on the subtasks, but it is a relatively simple estimate of time distribution that is feasible in clinical practice. Adding time distribution to the scoring index prevents ceiling effects in persons with mild executive deficits, as indicated by the better sensitivity of the adapted MSET score (resulting in a clinically useful cut-off point) compared with the poor sensitivity of the traditional score. An unintended outcome of this study was that the two groups differed significantly on some demographic characteristics: the impaired group had a lower education level, which may affect performance on the executive tests. However, adjusting for this potential confounder did not alter the results.

In conclusion, the present study shows that the adapted scoring method of the MSET can be clinically useful for measuring executive deficits in individuals with brain injury, and that it is more sensitive and specific than the conventional score. The adapted scoring method is able to discriminate between persons with and without executive function deficits. The next step is to collect normative data in a sample of healthy participants covering various age groups and the full spectrum of educational levels, for use in clinical assessment. Although the MSET with the adapted scoring index may be clinically useful as a first executive screening or as a treatment outcome measure, using multiple tests for the assessment of executive functions remains advisable, since executive functioning is a multifaceted construct.

Conflicts of Interest

None declared.

Acknowledgements

The authors thank the study participants, as well as Rehabilitation Medical Centre Groot Klimmendaal and the Sint Maartenskliniek, for their contribution to the data collection. This work was supported by the National Initiative Brain and Cognition (NIBC). This quick-result project is embedded in the pillar "The Healthy Brain, Program Cognitive Rehabilitation" (grant number 056-11-011).

References

Alderman, N., Burgess, P. W., Knight, C., & Henman, C. (2003). Ecological validity of a simplified version of the multiple errands shopping test. Journal of the International Neuropsychological Society, 9, 31–44.

Bennett, P. C., Ong, B., & Ponsford, J. (2005). Assessment of executive dysfunction following traumatic brain injury: Comparison of the BADS with other clinical neuropsychological measures. Journal of the International Neuropsychological Society, 11, 606–613.

Bertens, D., Fasotti, L., Boelen, D. H., & Kessels, R. P. C. (2013). A randomized controlled trial on errorless learning in goal management training: Study rationale and protocol. BMC Neurology, 13, 64.

Bertens, D., Fasotti, L., Egger, J. I., Boelen, D. H., & Kessels, R. P. C. (2014). Reliability of an adapted version of the Modified Six Elements Test as a measure of executive function. Manuscript submitted for publication.

Blake, H., McKinney, M., Treece, K., Lee, E., & Lincoln, N. B. (2002). An evaluation of screening measures for cognitive impairment after stroke. Age and Ageing, 31, 451–456.

Burgess, P. W., Alderman, N., Evans, J., Emslie, H., & Wilson, B. A. (1998). The ecological validity of tests of executive function. Journal of the International Neuropsychological Society, 4, 547–558.

Burgess, P. W., & Shallice, T. (1997). The Hayling and Brixton Tests. Thurston, UK: Thames Valley Test Company.

Chan, R. C., Hoosain, R., Lee, T. M., Fan, Y. W., & Fong, D. (2003). Are there sub-types of attentional deficits in patients with persisting post-concussive symptoms? A cluster analytical study. Brain Injury, 17, 131–148.

Chan, R. C., & Manly, T. (2002). The application of "dysexecutive syndrome" measures across cultures: Performance and checklist assessment in neurologically healthy and traumatically brain-injured Hong Kong Chinese volunteers. Journal of the International Neuropsychological Society, 8, 771–780.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.

Damasio, A. R. (1995). On some functions of the human prefrontal cortex. Annals of the New York Academy of Sciences, 769, 241–251.

Duits, A., & Kessels, R. P. C. (in press). Schatten van het premorbide functioneren. In M. Hendriks, R. Kessels, M. Gorissen, A. Duits, & B. Schmand (Eds.), Neuropsychologische diagnostiek: De klinische praktijk. Amsterdam, The Netherlands: Boom.

Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30, 257–303.

Emmanouel, A., Kessels, R. P. C., Mouza, E., & Fasotti, L. (2014). Sensitivity, specificity and predictive value of the BADS to anterior executive dysfunction. Neuropsychological Rehabilitation, 24, 1–25.

Emslie, H., Wilson, F. C., Burden, V., Nimmo-Smith, I., & Wilson, B. A. (2003). Behavioural Assessment of the Dysexecutive Syndrome in Children (BADS-C). London: Harcourt Assessment/The Psychological Corporation.

Espinosa, A., Alegret, M., Boada, M., Vinyes, G., Valero, S., Martinez-Lage, P., et al. (2009). Ecological assessment of executive functions in mild cognitive impairment and mild Alzheimer's disease. Journal of the International Neuropsychological Society, 15, 751–757.

Fan, J., Upadhye, S., & Worster, A. (2006). Understanding receiver operating characteristic (ROC) curves. Canadian Journal of Emergency Medicine, 8, 19–20.

Fernandez-Serrano, M. J., Perez-Garcia, M., Schmidt Rio-Valle, J., & Verdejo-Garcia, A. (2010). Neuropsychological consequences of alcohol and drug abuse on different components of executive functions. Journal of Psychopharmacology, 24, 1317–1332.

Gouveia, P. A., Brucki, S. M., Malheiros, S. M., & Bueno, O. F. (2007). Disorders in planning and strategy application in frontal lobe lesion patients. Brain and Cognition, 63, 240–246.

Hanley, J. A., & McNeil, B. J. (1982). The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143, 29–36.

Hochstenbach, J., Mulder, T., Van Limbeek, J., Donders, R., & Schoonderwaldt, H. (1998). Cognitive decline following stroke: A comprehensive study of cognitive decline following stroke. Journal of Clinical and Experimental Neuropsychology, 20, 503–517.

Kessels, R. P. C., Mimpen, G., Melis, R., & Olde Rikkert, M. G. (2009). Measuring impairments in memory and executive function in older people using the Revised Cambridge Cognitive Examination. American Journal of Geriatric Psychiatry, 17, 793–801.

Krabbendam, L., & Kalff, A. C. (1998). Behavioural Assessment of the Dysexecutive Syndrome (BADS), Dutch version. Lisse, The Netherlands: Swets & Zeitlinger.

Lamberts, K. F., Evans, J. J., & Spikman, J. M. (2010). A real-life, ecologically valid test of executive functioning: The Executive Secretarial Task. Journal of Clinical and Experimental Neuropsychology, 32, 56–65.

Levine, B., Robertson, I. H., Clare, L., Carter, G., Hong, J., Wilson, B. A., et al. (2000). Rehabilitation of executive functioning: An experimental-clinical validation of goal management training. Journal of the International Neuropsychological Society, 6, 299–312.

Lezak, M. D. (1982). The problem of assessing executive functions. International Journal of Psychology, 17, 281–297.

Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment. New York: Oxford University Press.

Liu, K. C., Chan, R. C., Chan, K. K., Tang, J. Y., Chiu, C. P., Lam, M. M., et al. (2011). Executive function in first-episode schizophrenia: A three-year longitudinal study of an ecologically valid test. Schizophrenia Research, 126, 87–92.

Manly, T., Hawkins, K., Evans, J., Woldt, K., & Robertson, I. H. (2002). Rehabilitation of executive function: Facilitation of effective goal management on complex tasks using periodic auditory alerts. Neuropsychologia, 40, 271–281.

Mantyla, T., Carelli, M. G., & Forman, H. (2007). Time monitoring and executive functioning in children and adults. Journal of Experimental Child Psychology, 96, 1–19.

Oosterman, J. M., Molenveld, M., Olde Rikkert, M. G. M., & Kessels, R. P. C. (2010). Diagnostic utility of the Key Search Test as a measure of executive functions. Psychogeriatrics, 10, 173–178.

Oosterman, J. M., Wijers, M., & Kessels, R. P. C. (2013). Planning or something else? Examining neuropsychological predictors of Zoo Map performance. Applied Neuropsychology, 20, 103–109.

Renison, B., Ponsford, J., Testa, R., Richardson, B., & Brownfield, K. (2012). The ecological and construct validity of a newly developed measure of executive function: The Virtual Library Task. Journal of the International Neuropsychological Society, 18, 440–450.

Shallice, T., & Burgess, P. W. (1991). Deficits in strategy application following frontal lobe damage in man. Brain, 114, 727–741.

Spinella, M. (2005). Self-rated executive function: Development of the Executive Function Index. International Journal of Neuroscience, 115(5), 649–667.

Stuss, D. T., & Levine, B. (2002). Adult clinical neuropsychology: Lessons from studies of the frontal lobes. Annual Review of Psychology, 53, 401–433.

van den Berg, E., Nys, G. M., Brands, A. M. A., Ruis, C., van Zandvoort, M. J. E., & Kessels, R. P. C. (2009). The Brixton Spatial Anticipation Test as a test for executive function: Validity in patient groups and norms for older adults. Journal of the International Neuropsychological Society, 15, 695–703.

Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: Psychological Corporation.

Wilson, B. A., Alderman, N., Burgess, P. W., Emslie, H., & Evans, J. J. (1996). Behavioural Assessment of the Dysexecutive Syndrome (BADS). Bury St Edmunds, UK: Thames Valley Test Company.

Zimmermann, P., & Fimm, B. (2007). Test for Attentional Performance (TAP), version 2.1: Operating manual. Herzogenrath, Germany: Psytest.