Abstract

Although there has been some progress in identifying the range of skills that comprise the executive neurocognitive system, their assessment has proved challenging. Operationalization of executive functions (EFs) may be advanced by identifying the cognitive constructs that underlie EF test performance via principal components analysis. The underlying factor structure of 19 EF tests was examined in a non-clinical sample of 200 adults (mean age = 30.8 years, range 18–64; 97 men). Findings revealed only weak correlations between the various measures derived from the EF tests. Exploratory factor analysis revealed a model comprising six independent factors, consistent with previous reports describing the functions of the EF system. The factors comprised: Prospective Working Memory, Set-Shifting and Interference Management, Task Analysis, Response Inhibition, Strategy Generation and Regulation, and Self-Monitoring and Set-Maintenance. Results confirm the diverse and heterogeneous nature of EFs and caution against conceptualizations that underestimate their complexity. Furthermore, variability within the “normal” executive system is evident, and further research is required to understand executive functioning in healthy populations.

Introduction

In the last decade, there has been rapid progress in our understanding and conceptualization of the cognitive skills that comprise the executive system (Anderson, 1998; Chayer & Freedman, 2001). Neuropsychological investigations have contributed significantly by demonstrating the multidimensional and dissociable nature of executive functions (EFs) in both healthy non-clinical and clinical individuals (Burgess, Alderman, Evans, Emslie, & Wilson, 1998; Duncan, Johnson, Swales, & Freer, 1997; Robbins et al., 1998). Despite this, debate has continued as to what is precisely meant by the term “executive function” and what criteria are to be used to assess its component skills. The skills assessed by tasks purported to measure executive functioning are also not well understood. As such, support for a model that is able to guide a comprehensive assessment has not been established (Grafman, 1995; Testa & Pantelis, 2009).

Three main approaches have proved informative for developing our current conceptualization of executive functioning. The first and most common approach involved lesion studies describing evidence from single case (de Oliveira-Souza, Moll, Moll, & de Oliveira, 2001; Eslinger, Flaherty-Craig, & Benton, 2004), and group studies of patients with frontal lobe (FL) damage (Baddeley, Della Sala, Papagno, & Spinnler, 1997; Godbout, Grenier, Braun, & Gagnon, 2005; Hornak et al., 2004; Jacobs & Anderson, 2002; Manes et al., 2002; McDonald, Delis, Norman, Tecoma, & Iragui-Madozi, 2005). This work provided insight into the functional specialization of EFs within frontal subregions, including the dorsolateral prefrontal cortex (DLPFC) and orbitofrontal cortex (OFC) (Baddeley & Della Sala, 1996; Bechara, Damasio, & Damasio, 2000; Duncan & Owen, 2000; Goldman-Rakic, 1987; Levine et al., 1998; Nauta, 1971; Passingham, 1985; Shallice, Burgess, & Frith, 1991; Stuss & Levine, 2002).

The DLPFC has been implicated in skills required for planning, decision-making and problem-solving, including strategy generation; goal formation and selection; monitoring and sequencing of behavior; anticipatory set; task analysis; decision-making; set-shifting; and abstract thinking (Aron, Robbins, & Poldrack, 2004; Burgess, 2000; Denckla, 1996; Fuster, 1991; Goldman-Rakic, 1987; Lezak, 1993; Norman & Shallice, 1986; Stuss & Benson, 1986; Tranel et al. 1994). In contrast, the OFC is thought to mediate social, affective, motivational, and personality aspects of an individual's functioning, including the capacity for emotional and behavioral self-regulation, and integration of subjective experiences required for self-awareness and individuality (Cicerone & Tanenbaum, 1997; Stuss & Levine, 2002). More recently, Stuss (2007) has proposed a model of EF that describes four functional domains of EFs attributed to specialized frontal regions. These include executive cognitive functions; behavioral and emotional self-regulatory functions; activation-regulating functions; and metacognitive processes. In Stuss's (2007) proposal, executive cognitive functions are largely mediated by the DLPFC and relate to skills of planning, monitoring, activating, switching, and inhibition. Behavioral self-regulatory skills involve ventral prefrontal and limbic regions, and are responsible for emotional processing. Activation-regulating functions include the ability to initiate and sustain appropriate behavioral responses to attain goals, and implicate the anterior cingulate and superior frontal regions. Metacognitive processes include skills related to personality, social cognition, consciousness and self-awareness, and are mediated by the frontal poles.

Secondly, evidence from neuroimaging studies (e.g. functional, structural, neurochemical) (Smith & Jonides, 1999; Szameitat, Schubert, Muller, & Von Cramon, 2002; Tamm, Menon, & Reiss, 2002; Taylor et al., 2004; Tekin & Cummings, 2002) has suggested that more detailed conceptualizations of the executive system are required, including evidence that executive dysfunction can occur following damage to non-frontal regions (Fassbender et al., 2004; Himanen et al., 2005; Roth & Saykin, 2004; van der Werf, Prins, Jongen, van der Meer, & Bleijenberg, 2000). Advances in brain imaging technologies have highlighted the inadequacy of a strict localizationist approach and the overly simplistic view of equating FLs and EFs (e.g. Blakemore & Choudhury, 2006; Charlton et al., 2006; Gothelf, Furfaro, Penniman, Glover, & Reiss, 2005; Holmes et al., 2005; Zimmerman et al., 2006). Both clinical and imaging studies have implicated EFs in non-frontal regions, various neural networks and many subordinate cognitive processes (Aron, et al., 2004; Benke, Delazer, Bartha, & Auer, 2003; Carter, Botvinick, & Cohen, 1999; Storey, Forrest, Shaw, Mitchell, & Gardner, 1999; Stuss & Alexander, 2000).

The third approach has incorporated the use of factor analytic techniques to identify constructs that underlie EF test performance in clinical and healthy populations. Despite this more psychometrically directed approach, investigations have reported diverse findings (Amieva, Phillips, & Della Sala, 2003; Bennett, Ong, & Ponsford, 2005; Burgess et al., 1998; Chan, 2001; Robbins et al., 1998). For example, Shute and Huertas (1990), Robbins and colleagues (1998), Pineda and Merchan (2003), and Bennett and colleagues (2005) demonstrated fractionation of the executive system in clinical and unimpaired samples. Robbins and colleagues (1998), using a large healthy sample, reported a four-factor solution using the Cambridge Neuropsychological Test Automated Battery (CANTAB) and the Tower of London (TOL) task. There were several cross-loadings, and derived factors were thought to represent Planning and Spatial Working Memory (SWM) (TOL and SWM), Attentional Set-Shifting (Interdimensional-Extradimensional or IDED task and the TOL), Strategic Aspects of Executive Functions (TOL and spatial working memory task), and Mnemonic Aspects of the Spatial Working Memory task. The solution accounted for 62.2% of the variance. Shute and Huertas (1990) investigated a smaller group of adults (n = 58) using measures purported to be sensitive to FL functioning (Category Test, Wisconsin Card Sort Task [WCST], Trail Making Test [TMT], Digit Symbol–WAIS [Wechsler Adult Intelligence Scale] and four experimental tasks), as well as a measure of operational reasoning (Piagetian Shadows Task). Their factor analysis produced four principal factors that accounted for 70% of the variance. A strong relationship between the operational reasoning task and the Category Test, TMT-B and the WCST was also identified, where these variables all loaded, although not exclusively, on the first principal factor in the solution.
Pineda and Merchan (2003) identified five factors in a normal population using measures derived from four tests, but only listed the measures loading on each factor and did not offer an interpretative analysis of all findings. The factors were represented by the Stroop Errors measure, the Stroop Time measure, the TMT, Verbal Fluency and the WCST (the WCST factor was proposed to represent Organization and Flexibility). The solution also appeared to be stable and explained a high percentage of the shared variance (74.9%).

The identified factors and their clinical interpretations have varied considerably across factor analytic investigations, with results influenced by the limited number and type of executive measures included. A comprehensive portrayal of the executive system likely requires the inclusion of a larger number of measures assessing the full breadth of EFs. Large sample sizes are also required to ensure the validity and stability of the factor solution (Fabrigar et al., 1999). Because it is often difficult to administer a wide range of tests to large clinical populations, factor analytic investigations often fail to meet these statistical requirements, limiting the generalizability of findings.

One challenge to the development of an EF model guiding assessment has been the wide range of EF measures used in clinical practice. The majority of these have not been developed in direct relation to any of the EF theories or localization studies mentioned earlier, leaving clinicians with no clear guidelines as to what they are measuring when administering a given executive measure. The ability to administer well-designed and validated EF measures is necessary for progress in operationalizing EFs. A structured evaluation of EF tests would help to clarify the construct validity of each measure, and assist in understanding the cognitive skills underlying each measurement. Also of relevance is the type of population studied. Predominantly, clinical samples are examined to infer a model of “healthy” executive functioning. However, relying on dysfunction as an indirect measure of unimpaired function may be inappropriate, given that a linear relationship between a region's function and dysfunction may not exist (Miller, 1990; Pantelis et al., 2003). Accordingly, research characterizing EF variability within healthy, unimpaired populations may be necessary to develop a “gold standard” model of EF with which clinical findings can be compared.

In summary, the adoption of a range of investigative techniques has enabled a better understanding of EFs and the complex neurocognitive system that they comprise. Despite this large body of research, it is still difficult to identify which skills comprise EFs and how specific executive skills relate to or can be differentiated from one another in neuropsychological assessment. Whereas some previous factor analytic studies have been conducted, these have employed a limited number of tests and modest sample sizes, limiting the validity and generalizability of their findings.

The present study therefore aimed to examine the cognitive skills underlying a broad range of commonly used EF measures used in clinical and research settings; and explore the relationship between latent factors derived from these measures using an exploratory factor analytic approach in a large healthy population, in order to inform clinical assessment (Ardila, 2008; Denckla, 1994, 1996; Eslinger, 1996; Lezak, 1995).

Method

Participants

Two hundred adults were recruited from the general community via word of mouth and advertising within work organizations. Exclusion criteria included age <18 or >65 years, and any past history of developmental or acquired brain injury. The participants comprised 97 men and 103 women, with a mean age of 30.8 years (range = 18–64 years, SD = 9.14). The percentage of participants in each age bracket was: 18–19 years (2.5%), 20–29 years (50%), 30–39 years (36%), 40–49 years (5.5%), 50–59 years (3.5%), and 60–65 years (2.5%). The mean estimated intelligence, as measured by the National Adult Reading Test (NART; Nelson, 1982), was 107.97 (range = 99.93–116.01), indicating that the sample fell well within the average range. Fifty-four participants had completed or undertaken some secondary level education, 143 had completed or commenced tertiary courses, and three had completed or commenced TAFE (technical or trade) courses. Information obtained from the Australian Bureau of Statistics (ABS, 2001) indicated that the sample was representative of the general Australian population in terms of participants' age and gender. Direct comparisons regarding education level were not possible, as detailed information (i.e. specific number of years and degrees achieved) was not recorded.

Materials

Nineteen neuropsychological tests were administered to all participants. The EF tests were drawn from an exhaustive literature review of 31 executive measures cited as being sensitive to executive deficits. All tests were assessed on four criteria: (i) how commonly the tests were used, (ii) whether the tests were favorably reviewed in the literature, (iii) the psychometric properties of the measures, and (iv) the skills that the tests were purported to assess, in order to ensure that a wide range of executive skills was included. On each criterion, every test was allocated a score of −2 (fails to meet the criterion), 0 (neutral view or lack of data in the literature), or 2 (clearly meets the criterion), resulting in total scores ranging from −8 to 8. Tests assigned a score of 4 or above were included, with the aim of being more inclusive than exclusive. Two clinical neuropsychologists (R.T. and P.B.) independently evaluated each test according to the designated criteria. There was agreement between the raters as to which tests should be included within the study.
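The selection rule above (four criterion ratings of −2, 0, or 2, summed, with a cut-off of 4) can be expressed as a short computation. This is an illustrative sketch only: the ratings shown are hypothetical, not the authors' actual evaluations.

```python
# Hypothetical ratings only -- not the authors' published evaluations.
CUTOFF = 4  # tests scoring 4 or more (of a possible -8 to 8) were retained

def total_score(ratings):
    """Sum four criterion ratings, each -2 (fails), 0 (neutral), or 2 (meets)."""
    assert len(ratings) == 4 and all(r in (-2, 0, 2) for r in ratings)
    return sum(ratings)

candidates = {
    "Stroop Test": (2, 2, 0, 2),        # illustrative ratings
    "Hypothetical Task": (-2, 0, 0, 2),
}
selected = [name for name, r in candidates.items() if total_score(r) >= CUTOFF]
```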

The selected measures included: Porteus Mazes (Porteus, 1965), Random Number Generation (RNG; Brugger, 1997), Six Elements Test (SET; Wilson et al., 1998), Key Search Task (Wilson et al., 1998), Zoo Map Test (Wilson et al., 1998), Stroop Test (Golden, 1978), Hayling Sentence Completion Test (Burgess & Shallice, 1996a), Brixton Spatial Anticipation Test (Burgess & Shallice, 1996b), Similarities (WAIS-III; Wechsler, 1997), Twenty Questions Test (Mosher & Hornsby, 1966), Cognitive Estimates Test (CET; Shallice & Evans, 1978), Tower of London–Revised (TOL-R; Schnirman et al., 1998), TMT (Reitan, 1992), Contingency Naming Test (CNT; Anderson et al., 2000), Animal Fluency (Benton, 1968), Verbal Fluency (Benton, 1968), Concept Generation Test (Levine, Stuss & Milberg, 1995) and the Wisconsin Card Sorting Test (WCST; Heaton, 1981). In addition, the National Adult Reading Test (NART; Nelson, 1982) was used to provide an estimate of participants' intelligence. Tests that were reviewed and failed to meet the criteria included: Action Program Test (BADS; Wilson et al., 1998), Multiple Errands Test (Shallice & Burgess, 1991), Party Planning Task (Chalmers & Lawrence, 1993), Tinker Toy Test (Lezak, 1982), Design fluency (Ruff, 1998), Rule Shift Cards Test (BADS; Wilson et al., 1998), Gesture Fluency (Jason, 1985), Proverb Test (Lezak, 1993), Temporal Judgement Test (BADS; Wilson et al. 1998), and the Halstead Category Test (Halstead, 1947).

All tests were administered and scored according to standard instructions, with the following exceptions. The Porteus Maze test: given that the current study was interested in examining planning skills, as opposed to fine motor control, participants were not penalized for touching the lines with their pen or lifting their pen off the page. The Twenty Questions Test: if participants were unable to guess either item after 20 questions, the trial continued until they responded correctly or requested the game to cease, as it was considered important to analyze the approach taken to performing the task.

The Cognitive Estimates Test: normative data provided by Shallice and Evans (1978) were not appropriate given that the test was designed for a British population. Considering the lack of Australian normative data, and the large number of participants in the current investigation, normative data from the present investigation were used to evaluate each participant's response. As in the original investigation (Shallice & Evans, 1978), an error score was derived by determining the difference between the participant's response on each question and a “correct answer,” based on the average response provided by the sample. Scores were then converted to a standardized or Z score, and summed to provide a total error score for analysis. It was necessary to modify one culturally biased test item that required estimation of the height of the Tower of London. A comparable question asking the height of the Sydney Opera House was substituted. The TOL–R: due to time constraints, each participant was administered only 15 of the 30 problems (five 4-move problems, five 5-move problems, and five 6-move problems) available in the Schnirman et al. (1998) version of the TOL test. Trial order was counterbalanced (the first 100 individuals completed the even-numbered problems, and the second 100 the odd-numbered problems). All 15 problems were administered to each participant, regardless of their task performance.

Procedures

All participants gave informed consent prior to their inclusion in the study. Testing was generally conducted within one session (2–3 h), depending on the availability and stamina of the participant at a time and location of their convenience. All participants completed all executive tasks, which were administered by one examiner as described, and no measures were computerized. The tests were administered in a random order except for the WCST, which was always administered prior to the Brixton Test because of the similarity of the tasks, and the potential for the instructions of the Brixton Test to alert participants to the possibility of unexpected rule changes.
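The administration-order constraint described above (random order, but the WCST always preceding the Brixton Test) can be sketched as follows; the function name and test list are illustrative, not part of the study's materials.

```python
import random

def administration_order(tests, before=("WCST", "Brixton"), seed=None):
    """Shuffle the test order, then swap if needed so that the first test in
    `before` precedes the second (here, WCST before the Brixton Test)."""
    rng = random.Random(seed)
    order = list(tests)
    rng.shuffle(order)
    i, j = order.index(before[0]), order.index(before[1])
    if i > j:
        order[i], order[j] = order[j], order[i]
    return order
```

Swapping the two constrained tests after an unconstrained shuffle keeps every other test in a fully random position while guaranteeing the required ordering.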

Statistical Analysis

Previous EF studies have adopted both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Although both models aim to identify the underlying structure of a set of variables, EFA imposes few restrictions on the relationship between the variables and the number of factors identified (Fabrigar et al., 1999), whereas CFA is underpinned by theoretical or empirical hypotheses. EFA was deemed most appropriate for the present study, as there was insufficient information available regarding the executive measures and their potential inter-relationships in a healthy population from which a defensible and testable model could be generated.

All variables were examined for outliers and violations of normality. Outliers greater than 3.29 SD from the mean were replaced with a score corresponding to that boundary value (Tabachnick & Fidell, 1996). Although some variables were found not to conform to normal distributions, they were not transformed, due to the large sample size employed in this study and because EFA techniques are robust to violations of normality (Floyd & Widaman, 1995; Gorsuch, 1983; Tabachnick & Fidell, 1996; de Vaus, 2002).
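A minimal sketch of this outlier rule, assuming each extreme value is replaced with the boundary score computed from the original sample mean and SD:

```python
import numpy as np

def truncate_outliers(x, z_max=3.29):
    """Replace values beyond z_max SDs from the mean with the boundary value."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)
    lo, hi = mu - z_max * sd, mu + z_max * sd
    return np.clip(x, lo, hi)  # values inside the bounds are unchanged
```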

In order to examine the cognitive components underlying performance on the selected executive tasks, a principal components analysis (PCA) with varimax rotation was used (Floyd & Widaman, 1995). Dependent variables were selected with careful consideration (Fabrigar et al., 1999; Nunnally & Bernstein, 1994) in order to provide a solution that best represented the EF constructs assessed by the measures. Specific guidelines to ensure uniformity and inclusion of the most informative and worthwhile variables were developed, as outlined subsequently.
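As an illustrative sketch (on synthetic data, not the study's dataset), a PCA of the correlation matrix followed by Kaiser's varimax rotation can be implemented as:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser's varimax: orthogonally rotate loadings toward simple structure."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (1.0 / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))       # 200 cases x 10 test variables
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # ascending eigenvalue order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = int((eigvals > 1).sum())             # Kaiser's criterion: eigenvalues > 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)
```

Because varimax is an orthogonal rotation, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loadings across factors shifts toward simple structure.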

Guidelines for Variable Selection and Data Reduction

Given the large number of potential variables, a systematic approach to variable selection was devised (Table 1). All profile scores or scaled scores were eliminated, as such “overlapping” variables essentially reflect an identical measure (Kline, 1994). They also have a limited range and are therefore poor discriminators, which can obscure real differences within a sample (Floyd & Widaman, 1995; Kline, 1998; de Vaus, 2002). In addition, three other statistical methods were adopted. The first technique examined correlations between each of the test variables. When variables were highly correlated (correlations >0.7), all but one of the scores were eliminated, retaining the variable most commonly used in previous research (de Vaus, 2002). The second data reduction technique involved employing a PCA; this occurred when a particular test (the WCST and the CNT) provided values that represented different facets of EF, and deriving one overall score was not possible or would have resulted in the loss of important information (Tabachnick & Fidell, 1996). The third data reduction technique included examination of the correlations between the variables and other tests proposed to assess similar executive skills, given that it is easier to identify patterns of correlations when the degree of inter-correlation between a set of variables is maximized (Nunnally & Bernstein, 1994). The variable found to have the highest correlation with the other executive measures was selected for inclusion. This technique was applied only if the two previously described methods were not able to sufficiently reduce the data.
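The first reduction technique (eliminating all but one of any set of variables correlating above 0.7, keeping the highest-priority variable) can be sketched as follows; the data and priority list below are hypothetical.

```python
import pandas as pd

def drop_redundant(df, keep_priority, r_max=0.7):
    """For variables correlating above r_max, keep only the one ranked
    highest in keep_priority (e.g. most used in prior research)."""
    corr = df.corr().abs()
    kept = []
    for col in sorted(df.columns, key=keep_priority.index):
        # keep a variable only if it is not too correlated with any kept one
        if all(corr.loc[col, k] <= r_max for k in kept):
            kept.append(col)
    return df[kept]
```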

Table 1.

Variable label for each executive function measure selected for the statistical analyses.

| Instrument | Measure used | Range | Mean | SD |
| --- | --- | --- | --- | --- |
| Porteus Maze | Total Number of Type One and Two Errors (PMTOTERR) | 4.00 | 0.40 | 0.81 |
| Hayling Test | Part B minus Part A Total Time (HAYB-AT) | 119.00 | 15.15 | 2.61 |
| Hayling Test | Part B: Total Category A Errors plus Total Category B Errors (HAYBAERR) | 12.00 | 2.11 | 2.61 |
| 20 Questions Test | Percentage of Constraint Seeking Questions (TWPERCS) | 77.37 | 66.54 | 15.19 |
| 20 Questions Test | Percentage of Pseudo-Constraint Seeking Questions (TWPERPCS) | 37.98 | 6.21 | 9.00 |
| Wechsler Adult Intelligence Scale–Similarities | Raw Score (SIM) | 23.00 | 22.45 | 4.55 |
| Stroop Test | Color Word Score (STPCWS) | 53.00 | 46.33 | 9.92 |
| Brixton Test | Total Number of Errors (BRIXERR) | 24.00 | 12.40 | 4.48 |
| Cognitive Estimates Test | Total Z Score Average from the Mean Score (CETZAVMC) | 2.42 | −0.0019 | 0.35 |
| Key Search Test | Total Raw Score (KS) | 14.00 | 12.48 | 3.53 |
| Trail Making Test | Time for Part B minus Time for Part A (TMTBADIF) | 97.00 | 32.88 | 18.62 |
| Six Elements Test | Total Number of Subtasks where Rules were Broken (SETRUTOT) | 5.00 | 0.31 | 0.90 |
| Six Elements Test | Total Number of Switches (SETSWITC) | 46.00 | 8.28 | 7.12 |
| Zoo Map Test | Zoo Map One Raw Score (ZM1RS) | 6.00 | 0.64 | 1.22 |
| Verbal Fluency | Total Number of Words (VFTOTWDS) | 68.00 | 44.14 | 11.24 |
| Verbal Fluency | Total Number of Errors (VFTOTERR) | 7.00 | 1.45 | 1.60 |
| Contingency Naming Test | Factor A Score (CNTFA) | 8.85 | 0.00 | 1.00 |
| Contingency Naming Test | Factor B Score (CNTFB) | 5.37 | 0.00 | 1.00 |
| Contingency Naming Test | Factor C Score (CNTFC) | 8.61 | 0.00 | 1.00 |
| Animal Fluency | Number of Words in Size List (AFWRDS1) | 17.00 | 13.04 | 3.04 |
| Animal Fluency | Number of Size Errors (AFWERRS) | 5.00 | 1.07 | 1.20 |
| Concept Generation Test | Number of Categories Achieved (CGTNO.CT) | 5.00 | 4.23 | 1.27 |
| TOL–Revised | Total Time Taken for all Trials (TOLTTC) | 1254.00 | 394.58 | 252.12 |
| TOL–Revised | Total Trials Correct (TOLTC) | 11.00 | 11.31 | 2.56 |
| WCST | Factor A Score (WCSTFA) | 4.55 | 0.14 | 0.88 |
| WCST | Factor B Score (WCSTFB) | 6.20 | 0.12 | 0.85 |
| Random Number Generation | Repetition (RNGR) | 35.00 | 17.94 | 3.07 |
| Random Number Generation | Seriation (RNGS) | 13.63 | 9.00 | 2.62 |
| Random Number Generation | Cycling (RNGC) | 15.92 | 6.47 | 3.14 |
| e-Dysexecutive Questionnaire | Total Score (e-DEX) | 143.00 | 63.92 | 27.54 |

Results

Data Reduction of the WCST and CNT

Results from the PCA of the WCST scores indicated that a two-factor solution accounted for 78.19% of the total variance. Factor scores from this analysis were subsequently used (variable names WCSTFA and WCSTFB). Regarding interpretation, the majority of measures from the task, including the number of correct responses, the number of categories achieved, and the number of conceptual level responses, loaded on the first factor, which likely represents the reasoning and problem-solving element of the task. In contrast, the second factor included the trials to complete the first category, failure to maintain set, and total correct, and reflected abilities required for set maintenance or “staying on task.”

The CNT produced a five-factor solution that accounted for 64.64% of the variance. Examination of the scree plot indicated, however, that a three-factor solution was likely a better fit. This accounted for 46.30% of the variance. Factor-derived scores from this analysis were subsequently used in the final factor analysis (i.e. variable names CNTFA, CNTFB and CNTFC). In terms of factor score interpretation, the first factor comprised scores from the baseline Trials 1 and 2. The second factor included the number of self-corrections made by participants on Trials 3 and 4. The last factor comprised the number of correct responses and time for completion on Trial 4, as well as the number of correct responses on Trial 1.

Analysis of Correlations between the EF Measures

Pearson's product–moment correlations were conducted using the selected test variables. For ease of interpretation, test variables were grouped according to each of the commonly cited executive domains (as reported earlier) that they purportedly assess. Bonferroni corrections were also applied (p < .0001) because of the large number of correlations performed. In the original (uncorrected) matrix, there were 55 intra-task and four inter-task correlations that were significant at the <.01 level. As predicted, application of the Bonferroni correction reduced the overall number of significant correlations. A considerable number of correlations remained significant, however, although they were generally weak. Within the corrected matrix, 31 inter-task correlations were significant at the p < .0001 level, as well as 11 intra-task correlations. Given the multiple comparisons conducted within the dataset, there is an increased likelihood of experiment-wise Type I error. However, given the strict p value set when the Bonferroni adjustments were conducted, it was not considered possible to further reduce the significance level to adjust for multiple comparisons. Further, an omnibus null test (Cohen, 2000) was conducted on the correlations that did not share method variance. Despite the large number of correlations conducted, the chi-square value was significant at p < .001 (χ² = 1264.74, df = 406, critical value = 553.455).
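The Bonferroni adjustment is simply the family-wise alpha divided by the number of comparisons. As an illustrative check (not figures stated by the authors), 30 variables yield C(30, 2) = 435 pairwise correlations, giving a per-comparison threshold close to the .0001 criterion used here, and the reported df of 406 for the omnibus test matches C(29, 2) pairs.

```python
import math

def bonferroni_alpha(alpha, n_tests):
    """Per-comparison threshold controlling the family-wise Type I error rate."""
    return alpha / n_tests

n_pairs = math.comb(30, 2)                   # 435 pairwise correlations among 30 variables
threshold = bonferroni_alpha(0.05, n_pairs)  # ~.000115, near the .0001 cut-off used
```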

Although the number of correlations conducted is too extensive to report in this paper, in summary, the majority of the correlations identified occurred between variables that were derived from the same task. Correlations between different executive tasks were weak to moderate and limited in number, demonstrating the independence of the tasks. If these tests are valid and reliable measures of the executive system, and the system comprises dissociable components, then higher correlations between measures that are said to assess similar executive constructs would have been expected.

PCA of EF Measures

Guidelines for selecting the best solution were obtained from Fabrigar et al. (1999). All selected test variables were subjected to a PCA with varimax rotation (Fabrigar et al., 1999). Using Kaiser's criterion of eigenvalues greater than one, the initial solution produced 11 factors and accounted for 62.99% of the variance. Evidence for the dissociability of the solution was supported by the unrotated solution, in which the first unrotated principal component was comparable in size with the other components. However, the scree plot indicated that either a five- or a six-factor solution might be more appropriate; the analysis was therefore re-run with the number of factors restricted accordingly. To obtain the optimum solution, measures that loaded below 0.4 were removed and the analysis was conducted again following extraction of the specified number of factors (Floyd & Widaman, 1995; Tabachnick & Fidell, 1996).

Examination of the results from both analyses demonstrated that the six-factor solution produced the best result, in that it was well defined and offered a simple structure, with only one measure cross-loading across factors. It also accounted for a greater percentage of the variance (46.54%, compared with 42.23% for the five-factor solution). Table 2 shows the six-factor solution with loadings less than 0.4 excluded. The percentage of variance accounted for by each of the individual factors (with eigenvalues in parentheses) was 8.43% (2.10), 8.19% (2.05), 8.09% (2.02), 7.57% (1.88), 7.28% (1.82) and 7.02% (1.76), respectively.
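Because each standardized variable contributes one unit of variance to a PCA, each factor's percentage of variance is its eigenvalue divided by the number of variables. The sketch below assumes 25 variables entered the final analysis, an inference from the reported percentages rather than a figure stated in the text.

```python
import numpy as np

def percent_variance(eigenvalues, n_variables):
    """Percentage of total variance explained by each factor in a PCA of
    standardized variables: 100 * eigenvalue / number of variables."""
    return 100 * np.asarray(eigenvalues, dtype=float) / n_variables

# Reported eigenvalues for the six retained factors; 25 variables is an
# assumption inferred from the reported percentages, not stated in the paper.
pcts = percent_variance([2.10, 2.05, 2.02, 1.88, 1.82, 1.76], 25)
```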

Table 2.

Rotated component matrix loadings of orthogonally rotated factors extracted by principal components factor analysis

| Neuropsychological measure | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 | Factor 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Tower of London (Total Time) | 0.81 | 0.17 | 0.05 | −0.14 | −0.09 | 0.21 |
| Random Number Generation (Seriation) | 0.66 | 0.11 | 0.25 | −0.09 | −0.18 | 0.22 |
| Tower of London (Total Correct) | 0.65 | −0.14 | 0.35 | −0.09 | −0.07 | 0.06 |
| Porteus Mazes | 0.50 | 0.22 | −0.27 | −0.01 | 0.13 | −0.02 |
| Contingency Naming Test (FA) | −0.03 | 0.78 | −0.07 | 0.09 | 0.14 | −0.01 |
| Stroop Color Word Test | 0.10 | 0.72 | −0.10 | −0.09 | 0.01 | −0.13 |
| Trail Making Test | −0.01 | 0.65 | −0.12 | 0.09 | −0.07 | 0.09 |
| Zoo Map Test | 0.06 | −0.09 | 0.59 | −0.19 | 0.01 | −0.06 |
| Wisconsin Card Sort Test (FA) | 0.09 | −0.00 | 0.50 | 0.01 | 0.02 | 0.28 |
| Brixton Test | −0.25 | 0.21 | 0.48 | −0.05 | 0.14 | 0.14 |
| Contingency Naming Test (FB) | 0.03 | −0.03 | 0.46 | 0.41 | −0.02 | −0.13 |
| Cognitive Estimates Test | −0.09 | −0.01 | 0.40 | −0.07 | 0.22 | −0.00 |
| Hayling Test (Total Time) | −0.01 | 0.16 | 0.15 | 0.78 | −0.04 | −0.09 |
| Hayling Test (Error Score) | −0.12 | 0.01 | −0.00 | 0.74 | 0.11 | 0.05 |
| Twenty Questions Test (% of PCS Qns) | −0.01 | 0.08 | −0.14 | 0.48 | 0.03 | 0.03 |
| Six Elements Test (Switches) | 0.01 | −0.03 | 0.07 | −0.04 | 0.75 | −0.13 |
| Six Elements Test (Total Score) | −0.08 | −0.10 | −0.21 | −0.02 | 0.71 | 0.06 |
| Verbal Fluency Test (Number of Words) | 0.16 | −0.34 | −0.07 | −0.23 | 0.58 | −0.07 |
| Animal Fluency Test (Number of Words) | −0.06 | −0.35 | 0.16 | −0.11 | 0.41 | 0.01 |
| Random Number Generation (Repetition) | −0.02 | 0.13 | 0.19 | −0.08 | −0.01 | 0.65 |
| Wisconsin Card Sort Test (FB) | 0.13 | −0.09 | −0.13 | 0.22 | −0.12 | 0.64 |
| Contingency Naming Test (FC) | 0.09 | −0.08 | 0.26 | −0.06 | 0.01 | 0.54 |
| Random Number Generation (Cycling) | −0.02 | 0.09 | 0.22 | −0.17 | 0.10 | 0.50 |

Notes: FA = Factor Score A; FB = Factor Score B; FC = Factor Score C. Bold numbers indicate factor loadings ≥ 0.40.

Factor One comprised four measures: two from the same test (the TOL-R Total Time and Total Correct scores), the Porteus Mazes, and the RNG Seriation measure. These variables loaded strongly on the factor, with loadings ranging from 0.50 to 0.81. Executive Factor One was thought to represent Prospective Working Memory. Executive Factor Two comprised three measures: the Stroop, the TMT and the CNT (FA) factor score. All three variables demonstrated comparable and strong loadings on this factor, thought to represent Set-Shifting and Interference Management. Executive Factor Three comprised five measures. Three of the measures loaded moderately on this factor: the Zoo Map test, the Brixton test, and the WCST (FA) measure. This factor was thus thought to assess Task Analysis. The fourth and fifth measures, which loaded relatively more weakly, were from the CET and the CNT (FB). Executive Factor Four involved three measures. Two loaded strongly on this factor and were derived from the Hayling test, while the third, which loaded more moderately, was the Twenty Questions Test percentage of PCS questions. This factor was thought to reflect Response Inhibition. The CNT (FB) factor score was also found to load on this factor, although it contributed slightly more to the third factor in the solution. Executive Factor Five comprised four measures, three of which loaded strongly: two derived from the SET and one from the Verbal Fluency task. A weaker loading was observed for the last measure, the Animal Fluency (number of words) measure. This factor represented Strategy Generation and Regulation. Executive Factor Six also comprised four measures: the WCST (FB) and CNT (FC) scores, in addition to two of the RNG measures, Repetition and Cycling. This factor was thought to reflect Self-Monitoring and Set-Maintenance.

PCA of the EF measures was repeated with an oblique (oblimax) rotation, given that identifiable differences between the techniques may be relevant to the stability and reliability of the solution. Whereas a varimax rotation constrains factors to be orthogonal and maximizes the variance of the squared loadings to achieve the simplest solution, an oblique rotation places no such restriction on the final solution, allowing factors to correlate. The analysis produced a solution comparable in structure to that obtained with the varimax rotation. The solution offered a simple structure, with only one item (the CET) loading across two of the factors.

To further validate the findings, the sample was divided into two groups of 100 participants and the factor analysis was conducted separately for each; results comparable with the original solution were identified, adding further support to the stability of the solution reported.
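The split-half check can be sketched as follows. Tucker's congruence coefficient is used here as the similarity index between factor solutions; this is an assumption for illustration (the text does not state how the two solutions were compared), and the data are again random placeholders:

```python
import numpy as np

def congruence(a, b):
    """Tucker's coefficient of congruence between two loading vectors."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def pca_loadings(X, k):
    """Top-k unrotated principal-component loadings from a correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:k]       # k largest eigenvalues
    return vecs[:, order] * np.sqrt(vals[order])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 25))       # placeholder scores, not the study's data
half_a, half_b = X[:100], X[100:]    # two subgroups of 100 participants each

La, Lb = pca_loadings(half_a, 6), pca_loadings(half_b, 6)
# Factor-by-factor similarity; |phi| >= .85 is a common informal benchmark
phis = [abs(congruence(La[:, j], Lb[:, j])) for j in range(6)]
```

In practice the factors from the two halves would first be matched (their order and sign are arbitrary) before the coefficients are read off.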

Discussion

The aim of the present investigation was to characterize the relationship between current neuropsychological tests and executive cognitive skills. This is informative for developing a conceptualization of EFs, as measured by the assessment tasks used to assess this cognitive system. Delineating the inter-relationships between executive measures can also assist in the interpretation of neuropsychological findings.

Examination of the correlations between the test variables generally demonstrated very weak relationships. The independence of the executive skills assessed by each of the measures, and the multifactorial nature of the measures, likely contribute to this finding. Successful EF test completion is believed to require a number of executive and non-executive skills (Burgess, 1997; Miyake, Friedman, Emerson, Witzki, & Howerter, 2000; Phillips, 1997), which remain to be adequately defined. Underlying relationships between executive measures may be obscured because the non-executive abilities required for task performance are not comparable. Conversely, correlations between tests may reflect commonality in the non-executive skills required to complete the tasks, rather than the executive components of the tests.
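To see why weak pairwise correlations are expected when tests tap largely independent skills, the pattern is easy to reproduce on placeholder data: with 25 genuinely independent variables and n = 200, essentially every off-diagonal correlation falls well below conventional cut-offs:

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder scores: 200 participants x 25 mutually independent test variables
scores = rng.normal(size=(200, 25))

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[np.triu_indices_from(corr, k=1)]   # 300 unique variable pairs
weak = np.mean(np.abs(off_diag) < 0.3)             # share below |r| = .3
print(f"{weak:.0%} of pairwise correlations have |r| < .3")
```

Observed correlations in real test batteries sit between this null pattern and the strong correlations that a unitary executive construct would predict.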

For this reason, the adoption of a factor analytic approach was considered most informative. Factor analysis of the executive measures in this study demonstrated the existence of six dissociable components. Low correlations between the identified factors demonstrated their independence, supporting previous notions of a fractionated executive system (Duncan et al., 1997; Miyake et al., 2000). This study is one of the first investigations to establish a relationship between executive function constructs and a large group of specific neuropsychological measures.

Although it is acknowledged that the interpretation of factors is difficult, the common executive skills that may represent each of the identified constructs are discussed below. This interpretation was based on the previous literature examining each of the EF tasks, which was initially reviewed in the test selection process. Additional investigation would be required to validate the factors identified.

Executive Factor One was considered representative of Prospective Working Memory. The TOL-R, Porteus Mazes and RNG are all tasks that require the ability to hold possible responses on-line, and to think or look ahead in the short term using working memory capacity, so that the optimum behavioral response or solution can be selected.

Executive Factor Two was thought to represent Set-Shifting and Interference Management. Tasks in this construct require a good capacity to process, manage and shift between more than one element of a stimulus concurrently. The TMT requires participants to mentally shift between two sets of information (letters and numbers), and stay on task while coping with more than one task demand simultaneously (dual task). This is also a component skill needed in the CNT. Further, the Stroop Task requires the ability to efficiently shift attention from the most salient aspect of the stimuli (i.e. reading the word) to the more important aspect for successful task performance (i.e. the color in which the word is printed).

Executive Factor Three included measures from the Zoo Map test, the Brixton test, and the first derived factor score from the WCST (FA), in addition to more moderate contributions from the Cognitive Estimates Test (CET) and the CNT (FB). Overall, the measures included in this factor appear to represent Task Analysis. Successful performance requires good problem-solving skills, including identifying and analyzing the task's requirements. For example, the WCST, Zoo Map Test and Brixton Test require participants to understand the cognitive reasoning that underpins these tasks, and to identify and implement the most appropriate behavioral approach. This includes a component of conceptual or abstract reasoning, in addition to analysis of the task requirements and rules. Successful performance of the CET is also dependent on these skills, requiring evaluation, judgment and reasoning.

Executive Factor Four was thought to represent Response Inhibition. The Hayling Test is a measure of response inhibition, including the ability to stop or inhibit pre-potent responding. Similarly, the Twenty Questions Test (PCS) variable, included in this construct, is considered a measure of inhibiting inappropriate responding. The CNT (FB) factor score additionally loaded on this factor, although it contributed more strongly to Executive Factor Three. This cross-loading is not unexpected given that CNT (FB) represents the number of self-corrections exhibited, which is related to the participant's capacity to inhibit incorrect responses. It might have been expected that the Stroop task would load on this factor, given that it is said to assess verbal response inhibition. However, given that it loaded strongly and exclusively on Factor Two, it appears primarily to assess an individual's capacity to deal with more than one aspect of task stimuli concurrently, rather than taxing the ability to inhibit a response. The Stroop also differs from the tasks in Factor Four in that self-generation of a response is not required.

Executive Factor Five encompasses Strategy Generation and Regulation. This is a multidimensional construct, which is proposed to comprise the ability to select and implement strategies to produce the optimum task response. The adoption of a strategy facilitates successful performance for tasks within this construct. For example, on the Animal Fluency task, use of a strategy such as listing all African animals will ensure that the largest number of items can be generated. Underpinning this is the capacity to be cognitively flexible and self-regulate responses. This would ensure that the strategic approach is modified if it is not efficacious or in accordance with task demands.

Executive Factor Six comprised Self-Monitoring and Set-Maintenance. The WCST, RNG and CNT variables included within this factor may represent the ability to identify incorrect responding or inappropriate behavior, and to modify behavior in response to feedback. This can be achieved by self-monitoring performance, as required in the RNG to ensure random production of responses, or in the CNT to ensure that the participant is adhering to task rules. Feedback can also be provided by environmental input, such as examiner response feedback in the WCST.

It is acknowledged that each factor represents an integrated and multi-dimensional set of skills. Dysfunction of the executive system could therefore be attributable to any of the identified skills of which it is comprised. Further dissociation of each construct is likely possible, and although beyond the scope of this paper, will be important for better understanding the skills that contribute to these constructs and the tests used to assess EF. It is pertinent that both clinicians and researchers acknowledge the multifactorial nature of these tasks when they are administered.

Previous factor analytic studies have not been sufficiently comprehensive to provide an overview of EF tests and the skills they assess. The strength of the current investigation lies in the uniquely large number of measures included. In addition, the current findings are supported by the degree of congruence between the constructs identified and the literature reports of EF skills discussed earlier. Executive skills documented in previous lesion studies, including strategy generation, inhibition, goal formation and selection, monitoring of behavior, and problem-solving, are all directly comparable with the constructs in this investigation (Aron et al., 2004; Burgess, 2000; Denckla, 1996; Fuster, 1991; Goldman-Rakic, 1987; Lezak, 1993; Norman & Shallice, 1986; Stuss & Benson, 1986; Tranel et al., 1994). Notably, these skills are largely attributable to the DLPFC rather than the OFC; orbitofrontal functions lack standardized assessment in neuropsychological practice and were therefore not identified in this investigation.

Findings of the current study also support the EF model of Stuss (2007), which describes four components of executive functioning. The identified factors are directly comparable with two of Stuss' theoretical constructs. Executive cognitive skills (planning, monitoring, activating, switching, and inhibition) can be directly related to this study's constructs of Set-Shifting and Interference Management, Task Analysis, Self-Monitoring and Set-Maintenance, and Response Inhibition, whereas activation-regulating functions (the ability to initiate and sustain appropriate behavioral responses to attain goals) are comparable with the Strategy Generation and Regulation and Prospective Working Memory factors in this study. Although these executive factors are thought to be mediated by the DLPFC, the remaining two components of Stuss' model, involved in social and emotional processing, personality, and self-awareness (i.e. behavioral self-regulatory skills and metacognitive processes), implicate OFC regions. It is therefore not unexpected that these factors would not be comparable with the current findings.

Comparisons with previous exploratory factor analytic studies in healthy groups are also beneficial. Robbins et al.'s (1998) four-factor solution demonstrated similarities with the current findings: direct links were identified between Robbins et al.'s Attentional Set-Shifting and Strategic Aspects of Executive Functions constructs and this study's Set-Shifting and Interference Management and Strategy Generation and Regulation constructs. Robbins et al.'s Planning and Spatial Working Memory factor can be partially related to the Task Analysis construct identified in this study (with the main component being the problem-solving/planning of tasks). Significant differences are evident between Robbins et al.'s Mnemonic Aspects of the Spatial Working Memory task and the current study's Prospective Working Memory, Response Inhibition, and Self-Monitoring and Set-Maintenance factors.

Direct comparisons with the other investigations are limited. Shute and Huertas' (1990) analysis produced four factors; an operational reasoning task, the Category Test, the TMT-B and the WCST all loaded, although not exclusively, on the first principal factor in the solution. The TMT and WCST loaded on separate factors in the current study. Pineda and Merchan (2003) identified five factors derived from four tests, but listed only the measures loading on each factor and did not offer an interpretative analysis of all findings. The factors were represented by the Stroop Errors measure, the Stroop Time measure, the TMT, Verbal Fluency, and the WCST (the WCST factor was proposed to represent Organization and Flexibility). In the current study the Stroop task and TMT fell within the same factor, but the WCST and Verbal Fluency loaded on different constructs. The construct on which the WCST loaded was defined as Task Analysis, indicative of reasoning and problem-solving skills.

Given the significant methodological differences between the earlier mentioned and current investigations, it is evident that a comprehensive description of skills underlying EF performance is unlikely to emerge from investigations using a relatively small number of tests. Comparisons between this and future investigations adopting a similarly large number of tasks will be useful to determine whether executive tests are sensitive to comparable executive components in other clinical and healthy groups.

The results of this study have significant implications for the clinical assessment of executive functioning. They confirm not only that the EF system is multifactorial, but also that an expanded number of independent executive skills may exist. A larger and more comprehensive battery may therefore be needed to provide a complete picture of executive difficulties. The failure of tests to identify executive dysfunction may not only be a result of the poor psychometrics of a test, but may also reflect the possibility that the administered test is not sensitive to the specific executive difficulty that the patient is experiencing.

The results are also informative for the clinician, as they provide a greater understanding of how commonly used EF measures are inter-related; that is, the extent to which individual tests assess unique or overlapping aspects of EFs. Clinical neuropsychologists often rely on profile analysis of various EF test scores to understand the cognitive strengths and weaknesses of their clients; this is often based on their clinical experience and knowledge of their tests, cognition and neuroanatomy. Any literature that helps clinicians understand which cognitive processes their clinical measures actually assess is therefore valuable, and the current investigation addresses this need to a degree. Further, understanding how non-clinical individuals perform on these tasks can assist the interpretation of the performance of clinical populations.

With regards to theoretical models of the EF system, the current findings highlight that the inconsistency of research findings may reflect methodological differences in the choice of tests used, often combined with small sample sizes. Studies utilizing only a few tests are unlikely to capture the full spectrum of executive skills, resulting in an oversimplification of the concept of EFs.

It is acknowledged that there are several limitations to this study. Recruitment did not specifically control for age, education, intelligence or employment history; and while using a normal population was considered a strength of this study, it would be informative to compare and validate the factor structure of EF measures in a neurologically impaired sample. If analogous executive factors are identified, research could be directed at investigating each of the independent constructs and at developing "purer" tasks that specifically measure the skill(s) comprising each. Clearly, this would facilitate test interpretation for both the clinician and the researcher. Given that previous research has demonstrated that the stability of factor solutions depends on the characteristics of the population being investigated (Chan, 2001; Pineda & Merchan, 2003; Robbins et al., 1998), understanding the profile of each EF factor structure within different clinical samples may be necessary.

In summary, this study offered the opportunity to examine the performance of a large healthy adult sample on an extensive range of EF tests. Results indicated the existence of six constructs (Prospective Working Memory, Set-Shifting and Interference Management, Task Analysis, Response Inhibition, Strategy Generation and Regulation, and Self-Monitoring and Set-Maintenance) and identified the contribution of each test to the measurement of these constructs. This finding is consistent with current conceptualizations of EF in the literature and confirms the notion that EF tests assess a diverse range of skills. The current study has thereby progressed our understanding of the relationship between specific EFs and the neuropsychological measures used in clinical and research practice. It provides an important benchmark against which other investigations, attempting to obtain a more comprehensive portrayal of the executive system in healthy and clinical populations, can be compared.

Conflict of interest: none declared.

References

Amieva
H.
Phillips
L.
Della Sala
S.
Behavioural dyexecutive symptoms in normal aging
Brain and Cognition
 , 
2003
, vol. 
53
 (pg. 
129
-
132
)
Anderson
V.
Assessing executive functions in children: Biological, psychological and developmental considerations
Neuropsychological Rehabilitation
 , 
1998
, vol. 
8
 
3
(pg. 
319
-
349
)
Anderson
V.
Anderson
P.
Northam
E.
Taylor
H. G.
Standardisation of the Contingency Naming Test for school-aged children: A measure of reactive flexibility
Clinical Neuropsychological Assessment
 , 
2000
, vol. 
1
 (pg. 
247
-
273
)
Ardlia
A.
On the evolutionary origins of executive functions
Brain and Cognition
 , 
2008
, vol. 
68
 
1
(pg. 
92
-
99
)
Aron
A. R.
Robbins
T. W.
Poldrack
R. A.
Inhibition and the right inferior frontal cortex
Trends in Cognitive Sciences
 , 
2004
, vol. 
8
 
4
(pg. 
170
-
177
)
Australian Bureau of Statistics
2001 National Consensus of Population and Housing. (Data File)
2001
 
Available from the Australian Bureau of Statistics website, http://www.abs.gov.au
Baddeley
A.
Della Sala
S.
Working memory and executive control
Philosophical Transactions of Royal Society London B Biological Sciences
 , 
1996
, vol. 
351
 
1346
(pg. 
1397
-
1403
discussion 1403–1394
Baddeley
A.
Della Sala
S.
Papagno
C.
Spinnler
H.
Dual-task performance in dysexecutive and nondysexecutive patients with a frontal lesion
Neuropsychology
 , 
1997
, vol. 
11
 
2
(pg. 
187
-
194
)
Bechara
A.
Damasio
H.
Damasio
A. R.
Emotion, decision making and the orbitofrontal cortex
Cerebral Cortex
 , 
2000
, vol. 
10
 
3
(pg. 
295
-
307
)
Benke
T.
Delazer
M.
Bartha
L.
Auer
A.
Basal ganglia lesions and the theory of fronto-subcortical loops: Neuropsychological findings in two patients with left caudate lesions
Neurocase
 , 
2003
, vol. 
9
 
1
(pg. 
70
-
85
)
Bennett
P. C.
Ong
B.
Ponsford
J.
Assessment of executive dysfunction following traumatic brain injury: Comparison of the “BADS” with other clinical neuropsychological measures
Journal of the International Neuropsychological Society
 , 
2005
, vol. 
11
 (pg. 
606
-
613
)
Benton
A.
Differential behavioral effects on frontal lobe disease
Neuropsychologia
 , 
1968
, vol. 
6
 (pg. 
53
-
60
)
Blakemore
S. J.
Choudhury
S.
Development of the adolescent brain: Implications for executive function and social cognition
Journal of Child Psychology and Psychiatry
 , 
2006
, vol. 
47
 
3–4
(pg. 
296
-
312
)
Brugger
P.
Variables that influence the generation of random sequences: An update
Perceptual and Motor Skills
 , 
1997
, vol. 
84
 (pg. 
627
-
661
)
Brugger
P.
Rabbitt
P.
Theory and methodology in executive function research
Methodology of frontal and executive function
 , 
1997
Hove, Sussex
Psychology Press
(pg. 
81
-
116
)
Brugger
P. W.
Strategy application disorder: The role of the frontal lobes in human multitasking
Psychological Research
 , 
2000
, vol. 
63
 
3–4
(pg. 
279
-
288
)
Brugger
P. W.
Alderman
N.
Evans
J.
Emslie
H.
Wilson
B. A.
The ecological validity of tests of executive function
Journal of the International Neuropsychological Society
 , 
1998
, vol. 
4
 (pg. 
547
-
558
)
Burgess
P.W.
Shallice
T.
Response suppression, initiation and strategy use following frontal lobe lesions
Neuropsychologia
 , 
1996
, vol. 
34
 
4
(pg. 
263
-
272
)
Burgess
P.W.
Shallice
T.
Bizarre responses, rule detection and frontal lobe lesions
Cortex
 , 
1996
, vol. 
32
 
2
(pg. 
241
-
259
)
Carter
C. S.
Botvinick
M. M.
Cohen
J. D.
The contribution of the anterior cingulate cortex to executive processes in cognition
Reviews in the Neurosciences
 , 
1999
, vol. 
10
 
1
(pg. 
49
-
57
)
Chalmers
D.
Lawrence
J. A.
Investigating the effects of planning aids on adults' and adolescents' organisation of a complex task
International Journal of Behavioral Development
 , 
1993
, vol. 
16
 (pg. 
191
-
214
)
Chan
R.
Dysexecutive symptoms among a non-clinical sample: A study with the use of the dysexecutive questionnaire
British Journal of Psychology
 , 
2001
, vol. 
92
 (pg. 
551
-
565
)
Charlton
R. A.
Barrick
T. R.
McIntyre
D. J.
Shen
Y.
O'Sullivan
M.
Howe
F. A.
, et al.  . 
White matter damage on diffusion tensor imaging correlates with age-related cognitive decline
Neurology
 , 
2006
, vol. 
66
 
2
(pg. 
217
-
222
)
Chayer
C.
Freedman
M.
Frontal lobe functions
Current Neurology and Neuroscience Reports
 , 
2001
, vol. 
1
 
6
(pg. 
547
-
552
)
Cicerone
K. D.
Tanenbaum
L. N.
Disturbance of social cognition after traumatic orbitofrontal brain injury
Archives of Clinical Neuropsychology
 , 
1997
, vol. 
12
 
2
(pg. 
173
-
188
)
Cohen
J. D.
Botvinick
M.
Carter
C. S.
Anterior cingulate and prefrontal cortex: who's in control?
Nature Neuroscience
 , 
2000
, vol. 
3
 
5
(pg. 
421
-
423
)
de Oliveira-Souza
R.
Moll
J.
Moll
F. T.
de Oliveira
D. L.
Executive amnesia in a patient with pre-frontal damage due to a gunshot wound
Neurocase
 , 
2001
, vol. 
7
 
5
(pg. 
383
-
389
)
De Vaus
D.
Analysing Social Sciences Data: 50 Key problems in data analysis.
 , 
2002
London
Sage Publications
Denckla
M. B.
Lyon
G. R.
The measurement of executive function
Frames of Reference for Assessing Learning Disabilities: New views on measurement issues
 , 
1994
Baltimore, MD
Brooks Publishing Co
(pg. 
117
-
142
)
Denckla
M.
Lyon
G. R.
Krasnegor
N. A.
A theory and model of executive function: A neuropsychological perspective
Attention, Memory and Executive Function
 , 
1996
Baltimore
Paul Brookes Publishing
(pg. 
263
-
277
)
Duncan
J.
Johnson
R.
Swales
M.
Freer
C.
Frontal lobe deficits after head injury: The unity and diversity of function
Cognitive Neuropsychology
 , 
1997
, vol. 
14
 (pg. 
713
-
741
)
Duncan
J.
Owen
A. M.
Common regions of the human frontal lobe recruited by diverse cognitive demands
Trends in Neuroscience
 , 
2000
, vol. 
23
 
10
(pg. 
475
-
483
)
Eslinger
P. J.
Lyon
G. R.
Krasnegor
N. A.
Conceptualizing, Describing, and Measuring Components of Executive Function: A Summary
Attention, Memory and Executive Function
 , 
1996
Baltimore
Paul Brookes Publishing
(pg. 
367
-
395
)
Eslinger
P. J.
Flaherty-Craig
C. V.
Benton
A. L.
Developmental outcomes after early prefrontal cortex damage
Brain and Cognnition
 , 
2004
, vol. 
55
 
1
(pg. 
84
-
103
)
Fabrigar
L.
Wegener
D.
MacCallum
R.
Strahan
E.
Evaluating the use of exploratory factor analysis in psychological research
Psychological Methods
 , 
1999
, vol. 
4
 
3
(pg. 
272
-
299
)
Fassbender
C.
Murphy
K.
Foxe
J. J.
Wylie
G. R.
Javitt
D. C.
Robertson
I. H.
, et al.  . 
A topography of executive functions and their interactions revealed by functional magnetic resonance imaging
Brain Research: Cognitive Brain Research
 , 
2004
, vol. 
20
 
2
(pg. 
132
-
143
)
Floyd
F.
Widaman
K.
Factor analysis in the development and refinement of clinical assessment instruments
Psychological Assessment
 , 
1995
, vol. 
7
 
3
(pg. 
286
-
299
)
Fuster
J.
The prefrontal cortex and its relation to behavior
Progressive Brain Research
 , 
1991
, vol. 
87
 (pg. 
201
-
211
)
Godbout
L.
Grenier
M. C.
Braun
C. M.
Gagnon
S.
Cognitive structure of executive deficits in patients with frontal lesions performing activities of daily living
Brain Injury
 , 
2005
, vol. 
19
 
5
(pg. 
337
-
348
)
Golden
C. J.
Stroop Colour and Word Test.
 , 
1978
Wood Dale, IL
Stoelting Company
Goldman-Rakic
P. S.
Development of cortical circuitry and cognitive function
Child Development
 , 
1987
, vol. 
58
 
3
(pg. 
601
-
622
)
Gorsuch
R.
Factor Analysis
 , 
1983
2nd Ed.)
Hillsdale, NJ
Erlbaum
Gothelf
D.
Furfaro
J. A.
Penniman
L. C.
Glover
G. H.
Reiss
A. L.
The contribution of novel brain imaging techniques to understanding the neurobiology of mental retardation and developmental disabilities
Mental Retardation and Developmental Disabilities Research Reviews
 , 
2005
, vol. 
11
 
4
(pg. 
331
-
339
)
Grafman
J.
Grafman
J.
, et al.  . 
Similarities and distinctions among current models of prefrontal cortical functions
Structure and Functions of the Human Prefrontal Cortex. Annals of the New York Academy of Sciences
 , 
1995
, vol. 
769
 
New York
New York Academy of Sciences
(pg. 
337
-
368
)
Halstead
W.
Brain and Intelligence
 , 
1947
Chicago
University of Chicago Press
Heaton
R. K.
Wisconsin Card Sorting Test manual
1981
Odessa, FL
Psychological Assessment Resources
Himanen
L.
Portin
R.
Isoniemi
H.
Helenius
H.
Kurki
T.
Tenovuo
O.
Cognitive functions in relation to MRI findings 30 years after traumatic brain injury
Brain Injury
 , 
2005
, vol. 
19
 
2
(pg. 
93
-
100
)
Holmes
A. J.
MacDonald
A.
3rd
Carter
C. S.
Barch
D. M.
Andrew Stenger
V.
Cohen
J. D.
Prefrontal functioning during context processing in schizophrenia and major depression: An event-related fMRI study
Schizophrenia Research
 , 
2005
, vol. 
76
 
2–3
(pg. 
199
-
206
)
Hornak
J.
O'Doherty
J.
Bramham
J.
Rolls
E. T.
Morris
R. G.
Bullock
P. R.
, et al.  . 
Reward-related reversal learning after surgical excisions in orbito-frontal or dorsolateral prefrontal cortex in humans
Journal of Cognitive Neuroscience
 , 
2004
, vol. 
16
 
3
(pg. 
463
-
478
)
Jacobs
R.
Anderson
V.
Planning and problem solving skills following focal frontal brain lesions in childhood: Analysis using the Tower of London
Child Neuropsychology
 , 
2002
, vol. 
8
 
2
(pg. 
93
-
106
)
Jason
G.
Gesture fluency after focal cortical lesions
Neuropsychologia
 , 
1985
, vol. 
23
 (pg. 
463
-
481
)
Kline
P.
An Easy Guide to Factor Analysis
 , 
1994
New York
Routledge
Kline
P.
The New Psychometrics: Science, psychology and measurement
1998
New York
Routledge
Lezak
M. D.
The problem of assessing executive functions
International Journal of Psychology
 , 
1982
, vol. 
17
 (pg. 
281
-
297
)
Lezak
M. D.
Newer contributions to the neuropsychological assessment of executive functions
Journal of Head Trauma Rehabilitation
 , 
1993
, vol. 
8
 (pg. 
24
-
31
)
Lezak
M. D.
Neuropsychological Assessment.
 , 
1995
3rd Ed.)
New York
Oxford University Press
Levine
B.
Stuss
D. T.
Milberg
W. P.
Alexander
M. P.
Schwartz
M.
Macdonald
R.
The effects of focal and diffuse brain damage on strategy application: Evidence from focal lesions, traumatic brain injury and normal aging
Journal International Neuropsychological Society
 , 
1998
, vol. 
4
 
3
(pg. 
247
-
264
)
Levine
B.
Stuss
D.T.
Milberg
W.P.
Concept generation: validation of a test of executive functioning in a normal aging population
Journal of Clinical and Experimental Neuropsychology
 , 
1995
, vol. 
17
 
5
(pg. 
740
-
758
)
Manes
F.
Sahakian
B.
Clark
L.
Rogers
R.
Antoun
N.
Aitken
M.
, et al.  . 
Decision-making processes following damage to the prefrontal cortex
Brain
 , 
2002
, vol. 
125
 
Pt 3
(pg. 
624
-
639
)
McDonald, C. R., Delis, D. C., Norman, M. A., Tecoma, E. S., & Iragui-Madozi, V. I. (2005). Is impairment in set-shifting specific to frontal-lobe dysfunction? Evidence from patients with frontal-lobe or temporal-lobe epilepsy. Journal of the International Neuropsychological Society, 11(4), 477–481.
Miller, L. (1990). Major syndromes of aggressive behaviour following head injury: An introduction to evaluation and treatment. Cognitive Rehabilitation, 8(6), 14–19.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., & Howerter, A. (2000). The unity and diversity of executive functions and their contribution to complex "frontal lobe" tasks: A latent variable analysis. Cognitive Psychology, 41, 49–100.
Miyake, A., Friedman, N. P., Rettinger, D. A., Shah, P., & Hegarty, M. (2001). How are visuospatial working memory, executive functioning, and spatial abilities related? A latent-variable analysis. Journal of Experimental Psychology: General, 130(4), 621–640.
Mosher, F., & Hornsby, J. (1966). On asking questions. In J. Bruner, et al. (Eds.), Studies in Cognitive Growth (pp. 86–102). New York: Wiley & Sons.
Nauta, W. J. (1971). The problem of the frontal lobe: A reinterpretation. Journal of Psychiatric Research, 8(3), 167–187.
Nelson, H. E. (1982). National Adult Reading Test (NART): Test Manual. Windsor: NFER-Nelson.
Norman, D., & Shallice, T. (1986). Attention to action. In R. Davidson, G. Schwartz, & D. Shapiro (Eds.), Consciousness and Self-Regulation (pp. 1–18). New York: Plenum Press.
Nunnally, J., & Bernstein, I. (1994). Psychometric Theory. New York: McGraw-Hill Inc.
Pantelis, C., Yucel, M., Wood, S. J., McGorry, P. D., & Velakoulis, D. (2003). Early and late neurodevelopmental disturbances in schizophrenia and their functional consequences. Australian and New Zealand Journal of Psychiatry, 37(4), 399–406.
Passingham, R. E. (1985). Cortical mechanisms and cues for action. Philosophical Transactions of the Royal Society of London B, 308(1135), 101–111.
Phillips, L. H. (1997). Do frontal tests measure executive function? Issues of assessment and evidence from fluency tests. In P. Rabbitt (Ed.), Methodology of Frontal and Executive Function (pp. 191–213). Hove, Sussex: Psychology Press.
Pineda, D., & Merchan, V. (2003). Executive function in young Colombian adults. International Journal of Neuroscience, 113, 397–410.
Porteus, S. D. (1965). Porteus Maze Test: Fifty Years' Application. Palo Alto, CA: Pacific Books.
Reitan, R. M. (1992). Trail Making Test: Manual for Administration and Scoring. South Tucson, AZ: Reitan Neuropsychological Laboratory.
Robbins, T. W., James, M., Owen, A. M., Sahakian, B. J., Lawrence, A. D., McInnes, L., et al. (1998). A study of performance on tests from the CANTAB battery sensitive to frontal lobe dysfunction in a large sample of normal volunteers: Implications for theories of executive functioning and cognitive aging. Journal of the International Neuropsychological Society, 4, 474–490.
Roth, R. M., & Saykin, A. J. (2004). Executive dysfunction in attention-deficit/hyperactivity disorder: Cognitive and neuroimaging findings. Psychiatric Clinics of North America, 27(1), 83–96, ix.
Ruff, R. M. (1988). Ruff Figural Fluency Test Professional Manual. Odessa, FL: Psychological Assessment Resources Inc.
Schnirman, G. M., Welsh, M. C., & Retzlaff, P. D. (1998). Development of the Tower of London-Revised. Assessment, 5(4), 355–360.
Shallice, T., Burgess, P. W., & Frith, C. D. (1991). Can the neuropsychological case-study approach be applied to schizophrenia? Psychological Medicine, 21(3), 661–673.
Shallice, T., & Evans, M. E. (1978). The involvement of the frontal lobes in cognitive estimation. Cortex, 14, 294–303.
Shallice, T., & Burgess, P. W. (1991). Deficits in strategy application following frontal lobe damage in man. Brain, 114, 727–741.
Shute, G. E., & Huertas, V. (1990). Developmental variability in frontal lobe function. Developmental Neuropsychology, 6, 1–11.
Smith, E. E., & Jonides, J. (1999). Storage and executive processes in the frontal lobes. Science, 283(5408), 1657–1661.
Storey, E., Forrest, S., Shaw, J., Mitchell, P., & Gardner, R. M. (1999). Spinocerebellar ataxia type 2: Clinical features of a pedigree displaying prominent frontal-executive dysfunction. Archives of Neurology, 56(1), 43–50.
Stuss, D. T. (2007). New approaches to prefrontal lobe testing. In B. L. Miller & J. Cummings (Eds.), The Human Frontal Lobes: Functions and Disorders (2nd ed., pp. 292–305). New York, NY: Guilford Press.
Stuss, D. T., & Benson, D. F. (1986). The Frontal Lobes. New York: Raven Press.
Stuss, D. T., & Alexander, M. P. (2000). Executive functions and the frontal lobes: A conceptual view. Psychological Research, 63(3–4), 289–298.
Stuss, D. T., & Levine, B. (2002). Adult clinical neuropsychology: Lessons from studies of the frontal lobes. Annual Review of Psychology, 53, 401–433.
Szameitat, A. J., Schubert, T., Muller, K., & von Cramon, D. Y. (2002). Localization of executive functions in dual-task performance with fMRI. Journal of Cognitive Neuroscience, 14(8), 1184–1199.
Tabachnick, B. G., & Fidell, L. S. (1996). Using Multivariate Statistics. California: Harper Collins.
Tamm, L., Menon, V., & Reiss, A. L. (2002). Maturation of brain function associated with response inhibition. Journal of the American Academy of Child and Adolescent Psychiatry, 41(10), 1231–1238.
Taylor, S. F., Welsh, R. C., Wager, T. D., Phan, K. L., Fitzgerald, K. D., & Gehring, W. J. (2004). A functional neuroimaging study of motivation and executive function. NeuroImage, 21(3), 1045–1054.
Tekin, S., & Cummings, J. L. (2002). Frontal-subcortical neuronal circuits and clinical neuropsychiatry: An update. Journal of Psychosomatic Research, 53(2), 647–654.
Testa, R., & Pantelis, C. (2009). The role of executive functions in psychiatric disorders. In S. J. Wood, N. Allen, & C. Pantelis (Eds.), The Neuropsychology of Mental Disorders. Cambridge, UK: Cambridge University Press.
Tranel, D., Anderson, S. W., & Benton, A. L. (1994). Development of the concept of 'executive function' and its relationship to the frontal lobes. In F. Boller & J. Grafman (Eds.), Handbook of Neuropsychology (Vol. 9, pp. 125–148). Amsterdam: Elsevier.
van der Werf, S. P., Prins, J. B., Jongen, P. J., van der Meer, J. W., & Bleijenberg, G. (2000). Abnormal neuropsychological findings are not necessarily a sign of cerebral impairment: A matched comparison between chronic fatigue syndrome and multiple sclerosis. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 13(3), 199–203.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale – Third Edition. San Antonio, TX: Psychological Corporation.
Wilson, B. A., Evans, J. J., Emslie, H., Alderman, N., & Burgess, P. (1998). The development of an ecologically valid test for assessing patients with dysexecutive syndrome. Neuropsychological Rehabilitation, 8, 213–228.
Zimmerman, M. E., Brickman, A. M., Paul, R. H., Grieve, S. M., Tate, D. F., Gunstad, J., et al. (2006). The relationship between frontal gray matter volume and cognition varies across the healthy adult lifespan. American Journal of Geriatric Psychiatry, 14(10), 823–833.

Author notes

This investigation was conducted at the Department of Psychology, Psychiatry and Psychological Medicine, Monash University, Melbourne, Australia.