Jelena Brankovic, Leopold Ringel, Tobias Werron, Spreading the gospel: Legitimating university rankings as boundary work, Research Evaluation, Volume 31, Issue 4, October 2022, Pages 463–474, https://doi.org/10.1093/reseval/rvac035
Abstract
The dramatic salience of university rankings is usually attributed to a number of macro-level trends, such as neoliberal ideology, the spread of audit culture, and globalization in the broadest sense. We propose that the institutionalization of university rankings cannot be fully accounted for without a better understanding of the meso-level processes that enable it. To explore these, we zoom in on an organization called IREG Observatory (whereby IREG stands for ‘International Ranking Expert Group’). Since it first emerged, in 2002, IREG has acted as a carrier of a kind of rationalized ‘faith in rankings’—a faith it has laboured to justify, diffuse, and solidify through boundary work at the intersection of technocratic, managerial, academic, and commercial spheres. Drawing on the insights gained from this particular case, the article argues that the institutionalization of university rankings is not solely a matter of universities being impelled by them but also a matter of how actors in and around the university sector collectively partake in the legitimation of the practice of ranking universities. At a more general level, our analysis potentially provides a blueprint for understanding boundary work as a meso-level process that plays an important role in the institutionalization of rankings, and other devices of evaluation.
1. Introduction
One of the most dramatic developments in the recent history of universities has been an ‘explosion’ of various practices and devices of evaluation (Power 1994; Strathern 2000; Bogue and Hall 2003; Hamann and Beljean 2017). Among them, rankings seem to occupy a somewhat unique position, not least when we consider the attention they receive in both expert and popular discourses (Amsler 2014; Hazelkorn 2015; Paradeise and Filliatreau 2016). In addition to being regularly covered by major national newspapers, rankings are the only device that has been repeatedly used to evaluate all universities in the world within a single framework. The idea of placing universities in rank-ordered tables is so compelling that, since the turn of the 21st century, the number of rankings has grown steadily—both in specific national contexts and internationally.
This development is both remarkable and unremarkable. It is unremarkable when we take broader social and historical developments into consideration. Seen against the larger family of rankings that have gained traction in recent decades, the ones that we see in the university sector emerge as neither the oldest nor the most popular—many people know them, but arguably even more people know which football club has moved up or down in the Premier League (Ringel and Werron 2020; Brankovic, Ringel and Werron 2021). On the other hand, their salience is quite remarkable because, of all the devices used to compare and evaluate university performances, rankings have continuously been among the most contested ones: routinely critiqued, on occasion boycotted, and frequently seen as biased, flawed, and doing harm rather than good (Amsler 2014; Ringel, Hamann and Brankovic 2021; Kaidesoja 2022). And yet, we still lack a good explanation of how, amidst the contestation and controversy, university rankings not only persevere but also thrive.
To remedy this, we propose moving away from attributing the salience of university rankings to a number of macro-level trends, such as neoliberalism, the spread of audit culture, geopolitics, and globalization more generally (e.g., Marginson and van der Wende 2007; Shore and Wright 2015; Hazelkorn 2016). Extending the analytical scope of these explanations, we argue that this problem cannot be fully addressed without accounting for the meso-level dynamics that also play a role in the institutionalization of university rankings. We suggest that, to understand why university rankings are a ubiquitous phenomenon, we need to have a better grasp of the processes within and at the margins of the university sector. As rankings become a more common point of reference in higher education and science policy agendas, university strategies, and expert discourse, their ‘life’ as devices increasingly depends on situated organizational practices, which requires us to consider the global, the local, and the entanglements between the two.
To explore these dynamics, we zoom in on an organization called IREG Observatory (IREG for short, which stands for ‘International Ranking Expert Group’). IREG profiles itself as a prime site of debate and expertise on rankings and is entirely dedicated to the promotion of ranking as a method of evaluating and comparing universities. Initially established as an informal group in 2002, IREG has regularly convened major and minor producers of university rankings, policymakers, university administrators, international organizations, data companies, consultants, as well as scholars of rankings, higher education, and science. As an organization, IREG is native to what Eyal (2012) refers to as a ‘space between fields’—in which diffuse actors seamlessly come together and interact. We see this space as a kind of meso-level social order in which collective attention is directed to rankings and meaning is assigned to them. Crucially, as we shall argue in the remainder of this article, it is also a space in which university rankings are collectively legitimated. We start by elaborating on the research problem that motivates this study.
2. From talking about to organizing around university rankings
The dramatic salience of rankings in the university sector internationally, which gained historical momentum with the first publication of the Shanghai rankings in the early 2000s, has been largely attributed to neoliberalism, audit culture, and other aforementioned broad trends. While we do not call these interpretations into question, we suggest that they are incomplete as long as we do not consider meso-level dynamics, that is, the structures and processes involving organizations, communities, and interactions (Turner 2012), which both enable and emerge around rankings as a specific device for evaluating and comparing universities. This invariably includes context-specific interpretations of these macro-level trends, as well as more material aspects, such as the patterns of interaction and entanglements between actors.
When considering the historical institutionalization of university rankings, it is important to note that rankings have been an object of debate among scholars and administrators at least since the early 20th century (Ringel and Werron 2020; Wilbers and Brankovic 2021). However, it was not until the beginning of the 21st century that they would become an object of sustained interest also beyond specific national contexts. Soon, they would attain the status of a global phenomenon, attracting ever larger audiences. Sarah Amsler was very much on point when she observed, about a decade after the first global rankings were published, that ‘[w]riting about rankings has become a global business’ (2014: 155). Writing and—we would add—talking about rankings has since then become even more common across scholarly, administrative, and policy circles. In parallel with the expansion of university rankings, we have also witnessed a proliferation of organized occasions dedicated, in part or entirely, to these devices: conferences, workshops, round tables, projects, edited volumes, reports, and books. These have all emerged as sites where rankings are discussed among professional peers, particularly those considered to possess expertise on the subject in some sense, scholarly or otherwise. However, these discussions are rarely an object of interest for social scientists who grapple with the question of why and how university rankings become institutionalized.
The theoretical literature on institutionalization has repeatedly asserted the importance of the so-called ‘carriers’, that is, the actors who ‘labor to reproduce, promote, and diffuse the ideas and thus establish their legitimacy’ (Sahlin-Andersson and Engwall 2002; Drori 2006: 101; Jepperson and Meyer 2011). As agents of globalization and rationalization, these actors are key in diffusing and sustaining rationalized technologies of evaluation, such as rankings. The role of carriers is typically assumed by international organizations and other transnationally positioned actors, which among other activities may sponsor an ongoing exchange between government officials, policy experts, civil society, and various other parties (Boli and Thomas 1997; Kentikelenis and Seabrooke 2017). This type of interaction is of course not a new phenomenon in higher education and science, yet we are now witnessing it also specifically in relation to rankings. Organizations such as UNESCO, the OECD, the World Bank, and the European University Association have repeatedly contributed to the discussion on rankings over the past two decades (examples include Hazelkorn 2007; Altbach and Salmi 2011; Rauhvargers 2011; Marope, Wells and Hazelkorn 2013). Organizations producing rankings, too, have become regular convenors of the many ‘voices’ (see also Lim 2018).
As a device of evaluation based on quantified comparisons of performances, rankings are well suited for such multilateral settings. By conceptualizing quantification as ‘a technology of distance’, Porter (1996: ix) explicitly stresses the suitability of number-based devices for communication beyond the boundaries of locality and community, but also of culture and ideology. Scholars have already noted that university rankings—in part because they are a method of complexity reduction—act as a link between the academic and other fields (Hamann and Schmidt-Wellenburg 2020). Referring to such ‘world-bridging’ phenomena, Star and Griesemer (1989) proposed the concept of ‘boundary objects’, which may denote an abstract or material object such as a tool, a device, a classification system, or an idea. The key characteristic of boundary objects is that they are both flexible enough to be adapted to the needs and purposes of different actors, and robust enough to maintain a common identity across sites (Bowker and Star 1999). We note that rankings display such a combination of flexibility and robustness, which we see in their ability to mobilize the attention and interest of multiple parties, among them universities, students, parents, governments, donors and sponsors, social scientists (and academics in general, across the disciplinary spectrum), and the public at large.
Rankings’ world-bridging character is particularly manifest in the multiplicity of purposes they serve—a feature they share with other quantitative ‘indicators’ (cf. Merry 2016). By way of illustration, university administrators may see rankings as useful for comparing their own institution’s performance with that of others, finding suitable partners, setting targets in their strategic planning, or even monitoring how individual departments and academic staff ‘perform’. For the commercial organizations producing them, rankings can be part of a strategy to sell advertising space, subscriptions, and consulting, and to increase the visibility of products. Data companies and private consultants may see rankings as a means to shore up demand for their own services. For government agencies, rankings can be a technocratic solution for making decisions on funding (usually argued in reference to the alleged scarcity of resources). In general, similar to university administrators, government officials may take an interest in rankings because they allow them to make sense of institutions’ performances. Students and academics may see rankings as helpful when deciding where to study or apply for a job, and employers may value them because they believe rankings signal the potential worth of prospective employees. Scholars of higher education and science can use rankings as a source of empirical insight and may especially be interested in their scientific soundness and validity as measurement and evaluation tools. They can also see them as an opportunity to engage in social critique. Last but not least, an interest in rankings and other metrics may arise also because, as Brighenti (2021: 325) ponders, ‘pleasure can be derived from performing, being recognized, and standing out’.
On the producers’ side too, rankings’ capacity to span social worlds is equally notable. By way of example, the U.S. News & World Report, a for-profit news magazine, belongs both to the journalistic and the corporate spheres. As a university-based research centre, CWTS Leiden is clearly part of the academic world, but given that the centre is also attached to a private consultancy company with limited liability, CWTS Leiden BV, it may also be influenced by a business logic. Attesting to this phenomenon, scholars have recently drawn attention to the conflict of interest in organizations that both rank universities and sell consulting or advertising services to them (Jacqmin 2021; Chirikov 2022). Nor are academics exempt from wearing the proverbial multiple hats. A scholar may simultaneously publish in academic journals, consult governments on their higher education policy, sell a similar kind of service to universities, in their own capacity or through a private consulting firm, or deliver a (financially compensated) keynote at an event sponsored by a ranking producer. What these examples show is that the individuals and organizations partaking in the collective conversation, and not least in the production of some of the rankings, belong to more than one institutional domain, each with its own logic of operation, role expectations, and understanding of what rankings are or what purposes they (are supposed to) serve.
The growing importance of the technologies of evaluation emerging at the intersection of the academic and other social fields is well exemplified by Williams (2020) in her study on the measurement of research impact. Williams shows how ‘research impact’ operates within a ‘space between fields’ (Eyal 2012), namely the fields of politics, application, media, and the economic field. Eyal’s notion of the ‘spaces between fields’ (2012), which he proposes as a bridge between Bourdieu’s field analysis and Latour’s actor–network theory, is particularly instructive for studying interstitial spaces. Eyal proposes to ‘cease to think of the boundary in Euclidean terms, as a fine line with no width to it, and begin to grasp it as a real social entity with its own volume’. He then elaborates:
As such, the boundary does not simply separate what’s inside and outside the field, for example, what is economic and what is not, but is also a zone of essential connections and transactions between them. On the one hand, the volume of the boundary is where struggles take place to apportion actors and practices this way and that; on the other hand, it is also where networks provide for a seamless connection between fields. (Eyal 2012: 175)
The distinction between the ‘zone’ and the ‘fine line’ is helpful because it allows us to think of boundaries as conditions for both ‘separation and exclusion’ and ‘communication, exchange, bridging, and inclusion’ (Lamont and Molnár 2002: 181). It is, however, especially helpful because it treats boundaries as dynamic sites, which can evolve and in which interaction and meaning-making can take place. In this sense, we see boundary work as the work being done in the boundary zone. This work may or may not be consequential for different kinds of boundaries, regardless of whether it represents a conscious collective effort to influence specific boundaries (cf. Langley et al. 2019).
The collective legitimation of university rankings, we thus propose, takes place (also) in the meso-level boundary zone described above. That is, it takes place when different parties, such as the academics studying rankings, university administrators, consultants, policymakers, and ranking organizations themselves, are repeatedly brought to the same table, where they are expected to take each other into account. Clearly, not all ranking-related encounters between policymakers and social scientists or between ranking organizations and university administrators are equally relevant or consequential, even though it can be argued that all of these (by no means rare) occasions are of the boundary kind in some way. In fact, it is the sum of all of these ‘boundary encounters’ that makes boundary work (effective). And when such repeated encounters become progressively routinized, organized, and even to some extent institutionalized as a distinct social (boundary) space—with its own volume, struggles, actors, and practices—they merit particularly close consideration.
3. IREG and boundary work around rankings
To date, IREG has attracted only minor interest in the scholarship on university rankings. Earlier empirical works documenting some of IREG’s activities pointed to its efforts to position itself as an important actor in the international rankings scene (Paradeise and Filliatreau 2016; Barron 2017). In general, IREG tends to appear in research almost exclusively in relation to the ‘Berlin Principles on Ranking of Higher Education Institutions’,1 which it adopted in 2006, and the related ranking audits. IREG would typically be recognized as a body put in place in order to hold rankings to account and then criticized for not living up to that ambition (Hägg and Wedlin 2013; Ordorika and Lloyd 2015; Stack 2016; Barron 2017; Hauptman Komotar 2020). Closer to our interest, in a recent study of how data analytics companies use rankings to sell their products to universities, Chen and Chan (2021) mention IREG’s events as one venue where companies like Elsevier promote their offerings.
What is IREG? According to its own description, ‘IREG Observatory on Academic Ranking and Excellence’ (‘IREG’ for short) is ‘an international institutional non-profit association of ranking organizations, universities and other bodies interested in university rankings and academic excellence’.2 Formally speaking, IREG is not a classic organization, but a ‘meta-organization’ (Ahrne and Brunsson 2005), which means that its members are other organizations. As is usual for organizations of this type, the highest statutory body is the General Assembly, made up of members’ delegates, which elects the Executive Committee and performs other statutory tasks. IREG’s members support the organization by paying membership fees, participating in its events (again, for a fee), and occasionally hosting and/or sponsoring IREG’s events. IREG also has a website, where it posts news and information on rankings. Auditing university rankings is another service IREG offers to various interested parties, for which it relies on a network of experts. Its operations are supported by a small Secretariat, based in Warsaw (Poland).
With its overall loose structure and geographically diffuse membership, IREG ‘comes to life’ especially on the occasions when its members meet. This happens at IREG’s own conferences, forums, and occasional smaller events, organized at least once a year (often with the General Assembly scheduled in the programme). We are particularly interested in these events, given that it is on those occasions that IREG’s collective focus is on the object that is also the reason for its existence—rankings. It is also at these events that IREG regularly brings together members and non-members. Among these are ranking organizations (major and minor), management and policy consultants, university presidents, government officials, social scientists, bibliometricians, data analytics companies, accreditors, officials from international organizations, and others, sometimes labelled as ‘experts’ by the organization.3 Having its events widely recognized is clearly a marker of success for IREG, as illustrated by the following excerpt from the invitation to the ‘Jubilee IREG 2022 Conference’:
The IREG conferences have traditionally become a unique and neutral international platform, where university rankings are discussed in the presence of those who do the rankings, and those who are ranked: authors of the main global rankings, university managers, and experts on higher education.4
Because it is positioned in a boundary and, at first glance, relatively undefined social space, IREG’s legitimacy cannot be taken for granted. Rather, we see it as inseparably tied both to (1) the legitimacy of rankings as a specific type of evaluation device and (2) the social worlds, or institutional spheres, that have an interest in rankings. The social worlds, and the individual actors within them, that we see as the most relevant for our study are the following:
Technocratic: individuals affiliated with international organizations, governments, their agencies, and other national and international bodies working on higher education and/or science policy.
Managerial: members of university administrations, heads of various university offices and units, and similar.
Academic: social scientists, mainly those researching higher education and science.
Commercial: individuals representing data analytics companies, media outlets, private consultants and their firms, and other profit-oriented businesses.
As noted earlier, the boundaries between these social worlds are not always clear-cut; they are fuzzy in the sense that some actors can switch between roles or combine multiple roles simultaneously. Also, while we see these spheres as the most important ones for studying the case at hand, this is not an exhaustive list or the only possible way of conceiving the worlds. Further divisions within and across spheres are possible and may as well play a role (e.g., national/international, disciplinary affiliation, for-profit/not-for-profit, and so on).
In view of its mission and organizational structure, IREG can be viewed as a ‘boundary organization’ (Guston 1999; Medvetz 2014), located in the boundary space between these social worlds, both leaning on them while having some degree of independence. Boundary organizations provide an object of social action, in our case rankings, and ‘stable but flexible sets of rules for how to go about engaging with that object’ (Moore 1996: 1598). Their activities are often ‘an elaborate symbolic balancing act’, which is necessary to secure resources and legitimacy from the institutional spheres they lean on (Medvetz 2014: 16). These conceptual considerations direct us to the kind of rules and practices IREG promotes for engaging with rankings, how they are promulgated, and how IREG balances its role at the nexus between diffuse actors who have an interest or stakes in the practice of ranking universities.
4. Data and method
To explore the roles and activities of IREG, we have combined data from multiple publicly available sources. Our interest is both in the present and the past, in the sense that we are not interested only in understanding IREG as it is today but also in how it came to be (in fact, we suspect that the two are connected: to understand the present, we need to take the past into consideration). We observe IREG over a period of about 20 years, starting with the first meeting in 2002.
Our analysis is based on a corpus of qualitative data compiled from the following sources: (1) four issues of the UNESCO-CEPES’ journal Higher Education in Europe (in the analysis referred to as HEiE),5 containing reports on and follow-up articles to IREG’s first four meetings, specifically, vol. 27, issue 4 (2002), vol. 30, issue 2 (2005), vol. 32, issue 1 (2007), and vol. 33, issue 2/3 (2008); (2) text, documents, and photos from IREG’s website, with special attention to the pages dedicated to events and the news section6; (3) the Internet Archive (Wayback Machine), used to retrieve information from past versions of the website7; (4) video recordings of the 2020 and 2021 IREG conferences (Beijing8 and Jeddah9, respectively), as well as video material from other IREG events or events in which IREG’s representatives participated; (5) all the articles from the University World News mentioning ‘IREG’,10 published between the magazine’s launch (2007) and 28 October 2021 (40 articles in total); and (6) secondary sources for the purposes of contextualizing IREG’s work over the period. We also contacted IREG’s Secretariat in order to obtain additional material but did not receive a reply.
Our method involved thorough, in-depth reading and watching of the material, followed by multiple rounds of interpretation and discussion by the three authors. We used the qualitative analysis software MAXQDA to code the data on IREG’s events (sources 2 and 3 above), which allowed us to compare the descriptions, programmes, topics, and speakers. The Internet Archive (source 3) allowed us to retrieve pages on past events, as well as the lists of IREG’s Executive Committee members (since 2009) and its member organizations (since 2011). Although information was not equally available for every event or year, which limits comparability, the data we retrieved were sufficiently rich to support reliable observations about IREG’s evolution and activities.
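To illustrate the kind of retrieval involved in source (3), the following is a minimal sketch, not the authors’ actual procedure, of how archived snapshots of IREG’s website could be listed programmatically via the Internet Archive’s public CDX API; the helper name and the one-capture-per-year sampling are our own assumptions for illustration.

```python
# Hypothetical sketch: list one archived snapshot of IREG's website per year
# via the Internet Archive's public CDX API, for later manual retrieval and
# coding. Not the authors' documented workflow.
import requests

CDX_API = "http://web.archive.org/cdx/search/cdx"


def list_snapshots(site: str, year_from: int, year_to: int) -> list[str]:
    """Return at most one archived snapshot URL per year for the given site."""
    params = {
        "url": site,
        "output": "json",
        "from": str(year_from),
        "to": str(year_to),
        "filter": "statuscode:200",   # successful captures only
        "collapse": "timestamp:4",    # collapse on the year (first 4 digits)
    }
    rows = requests.get(CDX_API, params=params, timeout=30).json()
    # The first row is a header (urlkey, timestamp, original, ...); skip it.
    return [
        f"https://web.archive.org/web/{timestamp}/{original}"
        for _, timestamp, original, *_ in rows[1:]
    ]


if __name__ == "__main__":
    for snapshot in list_snapshots("ireg-observatory.org", 2009, 2021):
        print(snapshot)
```

Each returned URL points to a replayable capture of the site as it appeared in that year, from which pages on past events and membership listings can be read off as described above.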
5. Legitimating rankings through boundary work
According to its official account, IREG’s history began in 2002, at a meeting convened by the now defunct UNESCO European Centre for Higher Education (henceforth UNESCO-CEPES).11 Entitled ‘The Invitational Roundtable on Statistical Indicators for Quality Assessment of Higher/Tertiary Education Institutions—Ranking and League Tables Methodologies’, the meeting was convened by Jan Sadlak, the director of UNESCO-CEPES, and Jamie P. Merisotis, the president of the Washington-based Institute for Higher Education Policy (IHEP).12 The meeting was organized within the framework of a UNESCO-CEPES project on ‘strategic indicators’ in higher education13—a direct response to UNESCO’s earlier call for developing a system which would be ‘able to quantify the intangibles of a set of complex teaching, learning, and research phenomena, and the administration, functioning, and financing of higher education’ (HEiE, 27(4), 2002: 359). IREG’s origin, therefore, had more to do with international trends in higher education policy at the turn of the 21st century than with any developments related specifically to international rankings of universities.
One of the conclusions of the first meeting was that ‘more research and continued dialogue’ on rankings was needed (Merisotis, HEiE, 27(4), 2002: 480). Over the next several years, Merisotis, Sadlak, and others would build a small yet gradually expanding community of professionals gathered around university rankings, eventually making it independent of UNESCO-CEPES and the other organizations it had relied on in the beginning. In this section, we analyse IREG’s progression, orienting ourselves around three boundary processes we identify in the data: (1) the demarcation of discourse on university rankings, (2) the elaboration of IREG’s agency, and (3) the consolidation of the boundary between multiple institutional spheres. We weave into our analysis critical episodes from IREG’s history in order to present a fuller picture of the case and its place in the wider institutional context. Overall, we divide IREG’s history into two phases: the informal phase (2002–9) and the organizational phase (since 2009).
5.1 Discursive demarcation
The status of IREG as an organization existentially invested in the legitimacy of rankings has often made it act as their apologist. Notably, however, when engaging in the rhetorical justification of rankings, IREG would rarely do so in order to highlight the advantages of rankings compared to other devices of evaluation, but rather as a way of undermining unspecified critical and sceptical voices. This rhetoric, therefore, signifies the construction of a discursive boundary that serves to discipline the actors partaking in the discourse on rankings. We identify three emblematic rhetorical moves, which we interpret as appeals to determinism, realism, and instrumentalism.
5.1.1 Determinism: ‘rankings are inevitable’
Across IREG’s documents and statements by its representatives, rankings are typically portrayed as a consequence of a number of interrelated broader trends, among them ‘massification’, the expansion of student markets, ‘globalization’, and ‘global competition’. The frequent use of the phrase ‘rankings are here to stay’, which Amsler calls a ‘hypnotic mantra’ (2014: 157), also speaks to the understanding of rankings as an inevitability of our contemporary times. After IREG’s first meeting, Merisotis would write: ‘Whether or not colleges and universities agree with the various ranking systems and league table findings is irrelevant; ranking systems clearly are here to stay’ (HEiE, 27(4), 2002: 361). We are also told that there is ‘increasing evidence that ranking systems are here to stay’ (Sadlak, Merisotis and Liu, HEiE, 33(2–3), 2008: 195). During his tenure as president of IREG (2009–18), Sadlak would use this rhetoric on numerous occasions, and he was not alone. We identified the same assertion being made on multiple occasions by other members of IREG’s Executive Committee, in official invitations to IREG’s events, and in articles written by IREG core members across both decades. Clearly then, actors talking as if rankings were inevitable is part of the effort to create the impression that rankings indeed are inevitable.
5.1.2 Realism: ‘rankings reflect reality’
The second rhetorical move is an appeal to realism, whereby rankings are portrayed as little more than earnest efforts to mirror reality. This move bears resemblance to what Desrosières (2001: 340), in his typology of attitudes towards quantification, refers to as ‘metrological realism’. Here, rankings are tied to the normative ideal of unveiling the status dynamics already in place in higher education. We can see this line of reasoning in the following text authored by a producer of rankings who was part of IREG’s network in the first decade:
Just as each nation has its notional pecking order, based originally on senior common room gossip but underpinned by intelligence on the flow of big research money, there is an informal international ranking. Go to any university in the world, or to the boardroom of a multinational company, and there will be a consensus on the leading 10 or 20 institutions. (Jobbins, HEiE, 30(2), 2005: 141)
Statements by well-known scholars would occasionally be used to attest to the credibility of these appeals, in support of rankings’ overall ability to capture or at least approximate the ‘truth’. Philip Altbach, for example, referred to as an ‘internationally renowned scholar of higher education’, is quoted in the 2010 conference invitation as having said that ‘rankings need to be taken seriously’ and that they ‘do not push for a competition among higher education institutions (as its critics like to argue) but they reflect growing global competition in higher education’. In practice, no ranking is considered to be a perfect representation of reality, which makes organizations such as IREG all the more meaningful as they provide opportunities for a dialogue about rankings and reality.
5.1.3 Instrumentalism: ‘rankings are needed’
Another narrative promoted assiduously by IREG is that rankings proliferate because they are ‘needed’, ‘widely used’, and ‘useful’. Rankings are framed as a tool that helps universities and governments elevate the quality of institutions, and references to ‘stakeholders’ and their ‘demand’ or ‘need’ for information and transparency about universities are abundant in the corpus. The invitation to the 2010 IREG conference is a good example: ‘The proliferation of rankings is one aspect of the growing demand for broadly understood information about higher education and its institutions’. Similarly, the invitation to the 2014 IREG conference describes rankings ‘as one of information tools for variety of stakeholders, including those directly and indirectly concerned with them’. More recently, when Isidro Aguillo, an Honorary Member of IREG, was asked by a user on Twitter why he thought the rankings were needed, he replied: ‘It is not my opinion. Millions of students, researchers, scholars and managers use them. There is a need’.14
Weaving these threads together (5.1.1, 5.1.2, and 5.1.3), we see how IREG maintains a discourse on rankings that might generally be considered ‘spirited’ and ‘constructive’—which also applies to criticism. This effectively aligns the scope of discussion with questions such as: How can rankings be made better at capturing ‘reality’ or at meeting the needs of various ‘users’? We can see this clearly in IREG’s events, where, for example, sessions dedicated to ranking methodologies and updates from ranking organizations feature as a standard part of the programme. Against this backdrop, challenging rankings’ inevitability, utility, or realism, or pointing to a future in which rankings do not play an important role, would easily be dismissed as unreasonable and futile: ‘Although a positive view of rankings is not unanimously shared, it is likely that the naysayers are fighting a losing battle’ (Sadlak, Merisotis and Liu, HEiE, 33(2–3), 2008: 195). Notably, the identity of these ‘naysayers’ remains unspecified. Yet, it is quite clear that they are ‘others’—those who roam outside the bounds of what IREG considers legitimate.
5.2 Agency elaboration
On the occasion of its second meeting, in 2004, Sadlak, Merisotis, and colleagues decided to establish the ‘International Ranking Expert Group’, or IREG, as an informal group, which would thereafter meet regularly in order to discuss rankings. From this point on, IREG would exploit the growing popularity of rankings while striving to become an independent and authoritative voice on the subject, eventually tying its agency and purpose to the ‘strengthening of public awareness and understanding of range of issues related to university rankings and academic excellence’ (emphasis added).15 Over the years, IREG would elaborate its own agency in the name of this purpose and along the following roles: facilitator, curator, guide, and watchdog.
5.2.1 Facilitator
Since the early days, IREG has advertised itself as a group that brings together different actors interested in rankings. By acting as a facilitator of ‘stakeholder interaction’, IREG seeks to assert itself as a point of convergence of multiple interests. As sites where direct and personal interaction takes place, IREG’s events are especially important. On these occasions, data analytics companies and ranking organizations would advertise their services to government officials and university administrators, usually as part of the official programme of the event. Policy and management consultants, often lined up as speakers, would meet potential clients or simply use the forum to promote themselves as experts. Scholars play an essential part as well: they ‘review’ or ‘assess’ rankings and make suggestions for improvement. The ‘rankers’, in turn, listen to and engage with criticism. The photos on IREG’s website suggest to the viewer that these events are not just formal gatherings but also opportunities for informal socializing, personal introductions, exchanging business cards, discussing potential collaborations, or strengthening existing ties between the participants.
5.2.2 Curator
As a curator, IREG prepares and publishes news and other types of information on university rankings. The home page currently features a section entitled ‘Ranking News’, which, among other things, typically includes announcements of a new ranking being published, ranking-related developments, updates on the organization’s activities or activities of its members, and occasionally the latest research published. Judging from the (admittedly partial) information we have from the Internet Archive, news updates have become more frequent and regular over the last several years. In addition, IREG runs a newsletter and publishes an occasionally updated inventory of national and international university rankings. From time to time, IREG publishes updates on university rankings and its activities in the widely read online magazine University World News. IREG’s approach to its role as a curator is very indicative of its aspiration to be ‘the place’ where interested parties, and the public at large, can acquaint themselves with university rankings and receive regular updates.
5.2.3 Guide
IREG also fashions itself as a trusted guide providing free-to-access reliable information on ‘good’ ranking practices to various ‘users’ and ‘stakeholders’. To this end, in 2015, it published a document called ‘IREG Guidelines for Stakeholders of Academic Rankings’, which addresses students and parents, higher education institutions, policymakers, government agencies, employers, and the media. The guidelines draw on the aforementioned Berlin Principles, which are primarily intended for the producers of rankings as a way of guiding them in developing a ‘good ranking practice’. The belief that ‘when properly used and interpreted, rankings can be an important tool in assessing higher education programs’ (Guidelines: 5) points to a distinction made between ‘proper’ and ‘improper’ use of rankings. By implication, part of the responsibility for any adverse effects rankings may have is thereby shifted to—‘users’.
5.2.4 Watchdog
One of IREG’s earliest ambitions was to establish itself as a ‘watchdog’ over the expanding ranking industry. The Berlin Principles, adopted in 2006, were seen as ‘a logical and quality self-assuring measure on behalf of IREG’ to regulate the field of rankings (HEiE, 32(1), 2007: 1). Later on, IREG introduced the ‘IREG Seal of Approval Process’—an auditing procedure for ranking organizations that wished to be benchmarked against the Berlin Principles in an independent review. In the first several years, the audit was advertised at events and would have dedicated space in the programme, only to gradually disappear in later years. To date, no more than a handful of university rankings have undergone an IREG audit, while IREG itself has received a great deal of criticism for not living up to its ambition of actually holding ranking organizations—including some of its members, such as U.S. News & World Report and Shanghai Ranking Consultancy—to account.
As IREG was moving away from UNESCO-CEPES, and in particular once it became a formal organization, its future depended increasingly on being able to claim the position of an independent actor in the international rankings scene. Acting simultaneously as a facilitator of exchange, a curator of information, a guide for ‘stakeholders’, and an (albeit not particularly successful) watchdog of rankings has allowed IREG to carve out a legitimate space at the intersection of multiple institutional spheres, in which rankings could be regularly discussed and practically engaged with—primarily among the ‘yaysayers’.16 And while the demarcation of discourse is mostly about signalling the ‘outer’ boundary of the zone within which rankings are considered a legitimate device of evaluation in their own right, the agency elaboration is about IREG positioning itself as a bridge across diverse actors and interests.
5.3 Boundary stabilization
Since the outset, IREG’s boundary work has been characterized by three ambitions: to secure its position in the broader ‘ecosystem’ of organizations; to co-opt high-level actors from different spheres—particularly ranking producers, policy experts, and social scientists—into its activities and structures; and to broaden its sphere of influence. IREG has pursued these ambitions by deploying four interlocked strategies: formal organizing, co-opting new individuals and organizations, spotlighting ‘rankers’ as a specific actor category, and orchestrating the collective conversation on rankings.
5.3.1 Organizing
In 2009, IREG—up to this point an informal group—would create a formal organization: IREG Observatory. We see this as a milestone in its history for the following reasons. First, formalizing allowed IREG to strengthen its position as an independent actor. Becoming a legal entity allowed it to forge a distinct organizational identity, own and accumulate resources, and work towards further elaborating its agency. Second, the associational form also redrew relations of accountability. IREG would now be primarily accountable to its member organizations, which, at least in principle, should be the ones steering its direction. Third, IREG could also formally grow, through a gradual expansion of its membership and network, as well as through scaling up its operations. Fourth, formal organization would introduce a rhythm in IREG’s functioning, paced by the regular statutory meetings of its members’ representatives and governors as well as its soon-to-become regular conferences.
5.3.2 Co-opting
In particular during its informal phase, IREG would regularly invite ranking ‘newcomers’ to its events. For example, the launch of the Shanghai ranking in 2003 prompted Merisotis and Sadlak to invite its editor, Nian Cai Liu, to join the next IREG meeting. Liu would then become a member of IREG’s core group and a regular speaker at its events. He also became a member of the IREG Observatory’s Executive Committee and was eventually awarded Honorary Membership. A similar path was followed by Robert (usually referred to as ‘Bob’) Morse of the U.S. News & World Report, Isidro F. Aguillo (whom we already mentioned) of the Webometrics ranking, Ben Sowter of the QS ranking, and Dmitry Grishankov of the RAEX rating agency. In addition, numerous scholars and policy experts were first invited as speakers to one of IREG’s events and then, through subsequent repeated appearances, became included in IREG’s extended network of experts. By and large, IREG has always strived to maintain an image of being the place where ‘top-level’, ‘distinguished’, and ‘renowned’ ‘experts’ meet.
In 2009, in addition to individuals, IREG (now as IREG Observatory) would start to formally include organizations as well. Since then, its organizational membership base has continuously grown. By 2021, IREG had some 70 member organizations, of which higher education institutions were a sizeable majority (57, by our count). This ratio stands in contrast to the one we saw in the first 2 years of the organization’s existence, when ranking organizations made up about half of IREG’s (indeed much smaller) membership base. IREG’s geographical expansion is also interesting to observe (Figure 1). Over the years, IREG seems to have expanded almost exclusively by including higher education institutions from Eastern Europe, especially Russia, and, to a lesser extent, Western Asia.17

Figure 1. Number of IREG Observatory’s member organizations, by region and year (approximate). The numbers for 2011–21 are based on the membership listings retrieved from the archived versions of IREG’s website (see Section 4, paragraph 2). Countries are assigned to regions following the United Nations geoscheme, devised by the United Nations Statistics Division.
Source: Internet Archive, https://web.archive.org/web and http://www.ireg-observatory.org/, data retrieved on 10 February 2022.
5.3.3 Spotlighting ‘rankers’
Since the early years, IREG has used the term ‘ranker’ as a way of distinguishing between the organizations that publish rankings (including their representatives) and everyone else.18 And while a ‘ranker’ can be a commercial business, a government agency, a social science institute or university department, and in principle come from any of the social worlds that IREG leans on, this distinction is not made explicit at IREG’s events. Even as some of the ‘rankers’ are direct competitors—especially, the commercial ones which, in addition to publishing rankings, also sell products and services to universities and governments—they are all staged as peers caring about providing good rankings. Here is how Morse of the U.S. News commented on it: ‘The conference is for an exchange of views. People get to hear what another ranker here is doing, or if they have a new idea, or if they’re using data differently, or how they’ve handled a ranking’.19 In addition to ‘rankers’ being identified as a category, IREG also shines the spotlight on them by, most notably, regularly giving them a prominent place at its events.
Over time, we note that a distinction between kinds of rankings and ‘rankers’ gradually emerged and became consolidated in the programme of IREG’s annual conferences. In recent years, the global ‘ranking elite’20 (in the first instance, ARWU, QS, THE, and U.S. News) is distinguished from ‘other global rankings’ (e.g., U-Multirank, Webometrics, sometimes also Leiden Ranking) and ‘national rankings’ (Perspektywy and CHE being two prominent examples). Their representatives are regularly lined up as speakers at IREG’s events, while some of the ranking organizations are also its members. Being able to bring the ‘ranking elite’ together seems a matter of pride for IREG:
It does not happen often that authors of practically all prominent international and national academic rankings, with some like Times Higher Education (THE) and QS competing against each other, meet at the same time in one (though virtual) place. However, it indeed happened at the IREG 2020 conference entitled ‘University rankings in the time of uncertainty’ in late October.21
Lesser-known ‘rankers’, especially national ones, are often invited, although, by and large, no individual national ranking gets nearly as much regular stage time as the professed ‘elite’.
5.3.4 Orchestrating repeated encounters
To date, IREG has organized more than 20 events, most of them in Europe. Since 2009, the events have become more frequent and their scheduling more stable. To put an event together, IREG sets the agenda, selects speakers, organizes sessions, and occasionally even prepares suggestions for speakers on what issues they might address. Mirroring the periodic publication of rankings (Brankovic, Ringel and Werron 2018; Ringel and Werron 2021), this overall format is then repeated year after year. When we compare the programmes of these events, we note that, over time, the format becomes progressively more standardized and, in some respects, largely repetitive. If we put aside the opening and closing sessions, the most standardized segments of the programme are the sessions in which ‘rankers’ are invited to update the audience on their rankings. This usually comes down to presenting and discussing new editions of rankings, changes in methodology, or introducing new rankings. Even the speakers themselves, that is, the representatives of ranking organizations, are largely the same from one year to the next.
Particularly interesting among the regularly organized thematic sessions are those that bring together a diverse group of speakers. Policy experts, rankers, consultants, scholars, university and government officials, managers, bibliometricians, among others, are assembled in varying configurations in a single panel. These actors would, for example, discuss global trends in higher education and science, present their own work, comment on new ranking developments, offer advice or criticism (to ‘rankers’ but also other actors), or speculate about the future. The following excerpt from the 2010 Berlin conference programme illustrates this format:
First Session
New Developments in National and International Rankings
Chair: Frank Ziegele, Director of CHE
Alex Usher, President of Higher Education Strategies Associates, Toronto, Canada: Let the Sun Shine In: The Use of Academic Rankings in Developing Countries
Peter Okebukola, Chairman of Council, Osun State University, and former Executive Secretary, National Universities Commission, Nigeria: Trends in Academic Rankings in the Nigerian University System
Luis Piscoya, Professor at San Marcos University of Lima, Peru: Rankings in Peru in Context of Recent Developments in Higher Education in the Latin America
Jamil Salmi, Tertiary Education Coordinator, the World Bank, Washington, DC, USA: A New Approach for Measuring the Performance of Tertiary Education Systems
Waldemar Siwiński, Vice-President of IREG Observatory, President of Perspektywy Education Foundation, Warsaw, Poland: Building a Bridge between the National and International Rankings.
As is usual for larger conferences, there is a tendency in IREG’s events towards generic and broad overarching themes: rankings and accreditation (or excellence, quality, employability, etc.), national rankings, subject rankings, and ranking methodologies, to name a few examples. Such framing is a tent broad enough to accommodate individuals from diverse backgrounds and with different interests, but also sufficiently important-sounding to make the event attractive for members, non-members, as well as potential future members and speakers.
In a sustained effort to secure a stable position, IREG co-opts a range of actors—including policy experts, administrators, and academics—into its structures and activities. In addition to acting as a point of convergence of their respective interests, as we have seen in the case of facilitating (5.2.1), IREG also claims to fulfil a cohesive function by bringing diverse actors together into a sustained conversation on university rankings, now spanning two decades. Here, ‘rankers’ are treated as a bounded category in its own right, which further blurs the boundaries between their multiple institutional spheres. Ranking methodologies and other topics lying within the discursive boundary created by IREG, then, become a ‘leveller’ that allows individuals coming from different social worlds to engage in a productive exchange. The formal structure, gradual expansion thereof, repeated encounters of members and non-members, and continuous prominence given to ‘rankers’—all contribute to the stabilization of the boundary zone occupied by IREG.
6. Conclusion
This study was motivated by the ambition to improve our understanding of how contested evaluation devices, such as university rankings, become progressively institutionalized. We hypothesized that meso-level processes play an important role in this, which we explored by empirically focusing on the organization called IREG Observatory. We traced three processes by which IREG navigated and gradually stabilized a relatively uncharted interstitial space—located at the intersection of technocratic, managerial, academic, and commercial spheres. The first process refers to IREG’s efforts to limit the discourse on rankings to discussions that aim to advance the practice, while marginalizing those that fundamentally challenge it. We see this as rhetorical work aiming to profess a kind of ‘faith in rankings’. In the second process, IREG rationalizes this faith by elaborating its own agency along a range of boundary-spanning roles, which are used to assert the organization as an authority on the subject. Finally, as a rationalized agent of the faith in rankings, IREG works to stabilize the boundary (zone) by instituting formal links and regular opportunities for interaction with and between the actors from the social worlds it leans on for legitimacy and other resources.
Overall, this article argues that the ubiquity of university rankings is not solely a matter of macro-level trends directly ‘strong-arming’ universities into adjustment but also a matter of how actors in and around the university sector collectively partake in the legitimation of rankings as a practice. This study, therefore, adds to the growing body of work that teases out the meso- and micro-level processes that emerge due to rankings and which in turn help sustain them as a legitimate device for evaluating universities (e.g., Lim 2018; Ringel, Brankovic and Werron 2020; Ringel, Hamann and Brankovic 2021; Chun and Sauder 2022). The study also contributes to the literature on the infrastructures of evaluation more generally (e.g., Lamont 2012; Mennicken and Sjögren 2015; Krüger 2020; Waibel, Peetz and Meier 2021), by pointing to the organizational and interorganizational dynamics—within and across boundaries—that can be set in motion when new devices of evaluation are introduced.
So, what is IREG? One could certainly argue that it is just another ‘mercenary’ marching to the drumbeat of profit-making, neoliberalism, and squeezing the life out of academia. This position is understandable, or at least not difficult to sympathize with. After all, university rankings do affect ‘us’ who work in academia, and often in negative ways (Stack 2016; Hallonsten 2021; Kaidesoja 2022). Yet, we would be cautious about jumping to conclusions as to the motives of ranking organizations and auxiliary actors such as IREG. Not because we harbour sympathies for them, but because these kinds of speculations can deflect our attention away from some of the underlying, and perhaps more subtle, dynamics of the institutionalization of university rankings. One such dynamic is that many actors, and here we principally refer to those who do not hold particularly high stakes in the practice, share IREG’s ‘faith in rankings’, including scholars (Brighenti 2021). They may not share the same level of enthusiasm, but surely some degree of belief that there is value in discussing rankings on IREG’s demarcated and highly scripted terms. However, it is precisely these actors—scientists, university administrators, policy experts, and other ‘non-rankers’—who are indispensable for IREG’s legitimacy as a carrier of the ‘gospel’ of rankings.
As we have seen, it seems that IREG has tried to establish itself as a key actor in global rankings governance—one that would discipline ranking organizations and hold them to account. However, nothing of the sort appears to have happened. Rather than being a body of any such consequence, especially for major ranking organizations, IREG has become their platform, presented as a ‘neutral’ forum of ‘experts’. Given its history, membership structure, and not least the discourse on rankings its leaders have promoted since the outset, this is hardly a surprise. But even if these had been entirely different, the question remains of whether anything like ‘hard’ governance is even remotely possible in the field of university rankings. This especially concerns IREG’s ‘elite’ members, who have vested interests in sustaining their rankings-related commercial activities—activities that may go against some of the principles IREG was originally created to protect. Instead, what we may have been witnessing is a gradual weakening of the autonomy of IREG to the advantage of ‘big’ ranking organizations and data analytics companies. Future research could thus look into the effects of the entanglements between ranking organizations and other actors—including international organizations, governments, various businesses present in the domain of higher education and science, as well as universities themselves.
How, then, does IREG matter in the grand scheme of things? We note here that this study was not designed to assess the actual effect of IREG on rankings’ taken-for-grantedness. Rather, it was designed to illuminate a corner of the larger ‘machine’ of their institutionalization. And, all things considered, IREG is hardly a vital ‘cog’ in its wheels, although it may be a unique kind of organization in some respects. But by studying this particular organization, we have illuminated the workings behind some of the critical elements that keep the machine of rankings’ legitimation going. One of the most critical of these elements is a sustained conversation about how to advance the practice of ranking universities—a conversation that takes place among legitimate parties held together by what seems to be a symbiotic arrangement. This conversation is a much-needed ‘grease’ in the wheels of the machine, not least because it helps reinforce the belief that ‘rankings are here to stay’. Together with its network of organizations and individuals, IREG contributes to sustaining this conversation.
Future research could dig deeper into the boundary processes we have identified in this article, such as demarcation, co-opting, or organizing. As noted earlier, university rankings have for some time been an object of sustained interest across sites, some of which could be characterized as interstitial in the sense of this article. The events organized by ranking organizations, to which (among others) vice-chancellors, scholars, and government officials are invited, are a good example. The Center for World-Class Universities at Shanghai Jiao Tong University has organized a biennial international conference on ‘world-class universities’ since 2005—an event of a format similar to IREG’s conferences.22 Times Higher Education also organizes a range of such events on a regular basis.23 The events organized by higher education institutions where ‘rankers’ are ‘invited to the table’ (as recently described by Diep 2022) are another site where boundary processes take place. The organization called ‘World 100 Reputation Network’,24 which collaborates closely with some of the ranking organizations, is another potentially interesting case. The continued existence of such dedicated occasions is, we would argue, part and parcel of the meso-level social order that helps university rankings thrive.
Rankings themselves, finally, are not specific to the university sector, and observing them through their boundary-spanning properties may help identify more subtle similarities and connections between the university sector and other ‘worlds of rankings’ (Ringel et al. 2021). Based on these insights, we suspect that a similar kind of boundary work could be going on in other domains, such as nation-state, corporate, or artist rankings. In all of these cases, rankings face criticism, which gives their producers cause to worry about the legitimacy of their products and drives them to engage directly with their various critics (Ringel 2021). We therefore believe that our analysis provides a useful blueprint for understanding boundary work as an underrated condition for the recent success of rankings, and possibly of other contested devices of evaluation, not just in the university sector but also more generally.
Endnotes
1. https://ireg-observatory.org/en/initiatives/ranking-seal-of-approval/, last retrieved on 31 July 2022.
2. https://ireg-observatory.org/en/about-us/, last retrieved on 5 May 2022.
3. When we place the word ‘expert’ in quotation marks, we do not do so to question or undermine anyone’s actual expertise. Quite the opposite: we are not interested in the nature or legitimacy of the expertise the actors ascribe to themselves and others. We find it important, however, to highlight that this very word is used frequently by IREG.
4. https://ireg-observatory.org/en/events/ireg-2022-warsaw-conference/, quote retrieved on 14 May 2022.
5. https://www.tandfonline.com/loi/chee20, articles retrieved on 5 November 2021.
6. https://ireg-observatory.org/en/, data retrieved on 10 December 2021.
7. https://web.archive.org/web/*/http://www.ireg-observatory.org/, data last retrieved on 10 February 2022.
8. IREG Observatory channel on YouTube, IREG 2020 Beijing Conference, https://www.youtube.com/channel/UC1hrcc0RZEQwcu18SiJNadw/videos, last visited on 10 December 2021.
9. Perspektywy channel on YouTube, IREG 2021 Jeddah Conference, https://www.youtube.com/c/PerspektywyEdukacja/videos, last visited on 10 December 2021.
10. https://www.universityworldnews.com/fullsearch.php?query=IREG&mode=search, data retrieved on 28 October 2021.
11. CEPES was established in 1972 as a decentralized office of UNESCO, located in Bucharest. Its purpose was to promote international cooperation in higher education among UNESCO’s Member States in Central, Eastern, and South-East Europe; it also cooperated with Canada, Israel, and the USA. The centre was closed in 2011. Source: https://en.wikipedia.org/wiki/UNESCO-CEPES, information retrieved on 5 November 2021.
12. IHEP was, and still is, a non-profit research and advocacy organization based in Washington, DC.
13. The project was a follow-up to the 1998 UNESCO World Conference on Higher Education.
14. https://twitter.com/isidroaguillo/status/1318919624264736768?s=20&t=cy4Ojb1qyxLAR3bgTIO-VA, retrieved on 12 May 2022.
15. https://ireg-observatory.org/en/about-us/, retrieved on 12 May 2022.
16. The word is not a quote from the corpus. We coined it to contrast with the earlier-cited noun ‘naysayers’.
17. This could be, at least to some degree, a result of IREG’s secretariat being located in Poland, which might have played a role in steering IREG’s expansion strategy eastwards. The eastwards expansion may also reflect (rankings-friendly) national policy dynamics in some of these countries, which steer universities towards organizations such as IREG and their events. Organizational status dynamics may also serve as an explanation, to a certain extent at least (see e.g. Brankovic 2018a,b). All of these are, however, speculations; it would nonetheless be useful to know what drives universities and other actors towards IREG and similar networks.
18. The term ‘ranker’ is fairly common, also beyond IREG.
19. https://youtu.be/2F7ZRo_wfIc?t=330, last accessed on 12 May 2022.
20. The phrase ‘ranking elite’ is also used by IREG. See, for example, the article linked in the following endnote, published in University World News (7 November 2020) and authored by IREG’s then Vice President Waldemar Siwiński and Managing Director Kazimierz Bilanow.
21. https://www.universityworldnews.com/post.php?story=20201105133824428, last retrieved on 5 November 2021.
22. https://cwcu.sjtu.edu.cn/En/Content/35, last accessed on 1 July 2022. The last (eighth) biennial conference took place in 2019; likely due to the COVID-19 pandemic, no conference was organized in 2021.
23. https://www.timeshighereducation.com/events/summits, last accessed on 2 May 2022.
24. https://www.theworld100.com/, last accessed on 25 August 2022.
Acknowledgements
We are grateful to the participants of the session ‘Numbers, rankings, evaluations’ at the 17th New Institutionalism Workshop in Madrid, where an earlier version of the paper was presented. We are also indebted to the colleagues at the Robert K. Merton Centre for Science Studies (RMZ) at the Humboldt University in Berlin for a lively discussion and helpful comments on an almost-final version of the article. Our analysis has greatly benefitted from exchanges with Dominik Antonowicz, Brendan Cantwell, André Martins, and Krystian Szadkowski. We are especially thankful to the anonymous reviewers for their thoughtful criticism and suggestions, as well as to the special issue editors, Julian Hamann, Frerk Blome, and Anna Kosmützky, for their help and guidance. The participants of the special issue workshop, and Alexander Mitterle and Roland Bloch in particular, provided generous feedback on a manuscript that we ended up abandoning. We wish to thank them for challenging it and, in doing so, perhaps inadvertently helping us to redirect our empirical focus and eventually write (hopefully!) a better paper. Last but not least, we wish to thank Dr Kazimierz Bilanow, Managing Director of the IREG Observatory, who travelled to Berlin on very short notice to hear the paper presented at the RMZ. Dr Bilanow’s comments made the session all the more interesting and memorable, and we see his decision to come as a great compliment to our study. All errors are our own.
Conflict of interest statement. None declared.