Extract

Around this time each year, stakeholders from across the publishing landscape receive the most recent data on the journal impact factor (IF). This is often followed by celebratory and congratulatory messages from journal editors and board members, who proudly announce how much their journal’s IF increased and where it now ranks relative to its disciplinary peers. These updated figures are then prominently displayed on the homepages of most journals. What’s more, these increases are often viewed as an indicator of increased quality, rigor, and merit. The discipline of social work is no exception. This is problematic, given the numerous limitations of the IF.

Although the IF has its vocal proponents (e.g., Garfield, 2006) and arguably some basic utility as an “approximation of journal quality” (Hodge & Lacasse, 2011), its flaws raise the question of why we should continue to use it at all. As a journal-level metric, the IF is unable to assess the value of any given article or author. To make this inference, one would need to read the article and assess its claims, scientific rigor, methodological soundness, and broader implications. What’s more, the IF (which represents the average number of citations across a finite set of eligible articles) is vulnerable to the skewness of citation rates among articles (Nature, 2005) and to the manipulation, negotiation, and gaming of its calculation among stakeholders (Ioannidis & Thombs, 2019). At a more fundamental level, the IF does not capture journal functioning, such as improvements to (or worsening of) internal evaluative processes (e.g., the effectiveness of peer review, changes to submission instructions and policies, and use of and adherence to reporting guidelines; Dunleavy, 2022). These and other issues are explored in more depth by Seglen (1997).
