Abstract

In spite of the widespread use of the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) scales, there is an overwhelming lack of evidence regarding their power to describe empirical learner language (Fulcher 2004; Hulstijn 2007). This article presents results of a study that focused on the empirical robustness (i.e. the power of level descriptions to capture what learners actually do in a language test) of the CEFR vocabulary and fluency scales (A2–B2). Data stem from an Italian and German oral proficiency test (Abel et al. 2012). Results show that the empirical robustness was flawed: some scale contents were hardly observable or so evenly distributed that they could not distinguish between learners. Contradictory or weak correlations among scale features and heterogeneous cluster solutions suggest that the scales did not consistently capture typical learner behaviour. Often, learner language could not be objectively described by any level description. Moreover, it was only partially possible to link scale contents to research-based measures of fluency and vocabulary. Given the importance of CEFR levels in many high-stakes contexts, the results suggest the need for a large empirical validation project.
