Abstract

Over the last two decades, research has suggested that candidates' test performances and scores are collaboratively achieved through interviewing/scoring processes, and that inter-interviewer variation can create unfair situations. To obtain a more precise picture of the impact of inter-interviewer variation, this research examines the variability of interviewer behaviour, its influence on a candidate's performance, and raters' consequent perceptions of the candidate's ability on analytical rating scales (for example, pronunciation, grammar, fluency). The data are drawn from two interview sessions involving the same candidate with two different interviewers, and the video-taped interviews are rated by 22 raters on five marking categories. The results show that significantly different scores were awarded for 'pronunciation' and 'fluency' across the two interviews. The reasons for these differences are discussed in light of conversation analysis findings. The paper concludes with suggestions as to how the potential unfairness caused by interviewer variability could be addressed.