Abstract

After the fall of the Iron Curtain and a subsequent period of restructuring its research and innovation system, the Czech Republic introduced a performance-based research funding system, commonly known as the Evaluation Methodology. The Evaluation Methodology is purely quantitative and focuses solely on research outputs (publications, patents, prototypes, etc.) to determine the amount of institutional funding for research organizations. Although it aimed to depersonalize and depoliticize the allocation of institutional funding in the research system, improve research productivity, and safeguard accountability, we argue that the Evaluation Methodology has in fact become a negative example of a performance-based research funding system. Our analysis shows that it has introduced considerable instability and unpredictability into the Czech research system, making strategic planning for research organizations difficult. The article contributes to a growing body of literature on research evaluation and performance-based research funding systems, which discusses the impacts of introducing such systems in countries including the UK, Spain, Slovakia, Hong Kong, Australia, Poland, Italy, New Zealand, Flanders, Norway, Denmark, and Finland. The Czech case provides new insights into the interactions between politico-economic regimes and research policy, while also directing the attention of research policy scholars to significant developments in Central and Eastern European countries.

1. Introduction

Like other countries in Eastern Europe, the Czech Republic has a particular history of development of its science and innovation system that differs from the often-analysed Western and Northern European systems. Because the former centrally planned approach to doing science was condemned by many researchers as politicized science, there was a strong drive on the part of the scientific community to depoliticize science after the Velvet Revolution of 1989 ( Arnold 2011 ; Linková and Stöckelová 2012 ). To modernize the governance of science, a science policy council—the Research, Development, and Innovation Council (RDI Council)—was set up, together with two independent funding agencies that award competitive funding, one for fundamental research and the other for applied research and innovation. In an effort to simplify and modernize the system and to align it with international developments, the evaluation of research and the allocation of funding were changed as well. In order to depoliticize and depersonalize decision-making, the Czech Republic developed a method to evaluate research and to allocate funding based on productivity. However, the Evaluation Methodology brought unintended consequences. By threatening the stability and continuity of research organizations in the Czech Republic, it became the subject of heated debates and even public demonstrations.

The Evaluation Methodology—in Czech Metodika hodnocení, colloquially referred to as the ‘coffee mill’—counts research outputs and assigns a certain number of points to each, grinding different research outputs through the same mill. The points are then translated into institutional research funding, with each point representing a certain number of Czech crowns (CZK). Hence, the Evaluation Methodology combines two functions: it is a mechanism both for evaluating research and for allocating institutional funding for R&D, with a direct, automatic link between the two. As such, the Evaluation Methodology is a performance-based research funding system (PRFS). Other PRFSs, or elements thereof, have been implemented in the UK, Spain, Slovakia, Hong Kong, Australia, Poland, Italy, New Zealand, Flanders, Norway, Denmark, Finland, and elsewhere ( Butler 2010 ; Hicks 2012 ).

In this article, we analyse the development, performance, and effects of the Czech Evaluation Methodology in the period 2004–11. Based on current debates on research evaluation and performance-based research funding systems ( Butler 2003 , 2010 ; Geuna and Martin 2003 ; Gläser and Laudel 2007 ; Rodríguez-Navarro 2009 ; Martin and Whitley 2010 ; Martin 2011 ; Hicks 2012 ; Molas-Gallart 2012 ), we will show how the Czech Evaluation Methodology stands out. As it focuses only on research outputs and is purely metrics based, it is uniquely radical among the performance-based research funding systems in operation worldwide. We will argue that a purely quantitative assessment can introduce significant threats and cause discontinuities in a research system. This is particularly problematic in a research and innovation system still in development, such as in transition countries, as it fails to provide policymakers and research organizations with the information needed to develop and progress.

After the methodology section, we describe the introduction and the evolution of the Evaluation Methodology against the background of Czech research and research policy. We go on to explain the workings of the 2010 version of the Evaluation Methodology, which links the evaluation of research with the allocation of funding. We then show how the Czech research community reacted to it and assess its impacts. Finally, we evaluate the Evaluation Methodology against the literature on performance-based research funding systems and research evaluation more generally.

1.1 Methodology

The article and its conclusions are based on research conducted in 2010 and 2011 in the context of the ‘International Audit of R&D&I in the Czech Republic’ commissioned by the Czech Ministry of Education, Youth, and Sports ( Arnold 2011 ). One task of the International Audit was the analysis and evaluation of the Evaluation Methodology. Consequently, we analysed the Evaluation Methodology based on an extensive analysis of Czech research policy documents (including the different versions of the Evaluation Methodology from 2004 to 2011) and a review of the international scientific literature on institutional research funding and performance-based research funding systems. In addition, we surveyed 689 researchers and 74 research managers (rectors, deans, research institute directors) on the use and usefulness of the Evaluation Methodology. The surveys were conducted online in mid-2010 and were open for 6 weeks. Questionnaires were piloted using a small sample of respondents. We contacted 2,636 individual researchers (response rate 26%) and 343 research managers (response rate 22%). Tables 1 and 2 below describe some features of the respondents, who came from universities, the Academy of Sciences, and other research institutes. As can be seen in the tables, a large share of both research managers and researchers started their employment in the organization 20 or more years ago, showing the continuity in staffing in Czech research organizations.

Table 1.

Characterization of research managers surveyed ( n = 74)

Characteristic | n | %
Organizational affiliation | |
    Universities | 33 | 45
    Academy of Sciences | 23 | 31
    Other research institutes | 18 | 24
    Total | 74 | 100
Start of employment in organization | |
    20+ years | 43 | 58
    10+ years | 21 | 28
    Less than 10 years | | 12
    Total | 74 | 100
In current function | |
    10+ years | 15 | 20
    5+ years | 18 | 24
    Less than 5 years | 29 | 39
    Recently | 11 | 15
    N/A | |
    Total | 74 | 100

Source: International Audit Survey 2010.

Table 2.

Characterization of researchers surveyed ( n = 689)

Characteristic | n | %
Age (years) | |
    20–30 | 13 |
    31–40 | 127 | 18
    41–50 | 156 | 23
    51+ | 348 | 51
    N/A | 45 |
    Total | 689 | 100
Organizational affiliation (primary organization) | |
    University | 308 | 45
    Academy of Sciences | 285 | 41
    Other research institutes | 87 | 13
    Industry | | 1
    Total | 689 | 100
Start of employment in primary organization | |
    Before 1991 | 299 | 43
    1991–2000 | 183 | 27
    After 2000 | 137 | 20
    N/A | 70 | 10
    Total | 689 | 100

Source: International Audit Survey 2010.

Further, we conducted over 35 semi-structured interviews with researchers, research managers, policymakers, and politicians, to understand the context of the Evaluation Methodology. We selected interviewees from different organizations, such as the Academy of Sciences, universities, research institutes, ministries, and agencies, taking care to ensure disciplinary and regional spread. Interviews were conducted face-to-face at the interviewees’ workplaces. Finally, we conducted an international peer review (panel-based assessment) of 16 research institutes from different fields and of different organization types.

2. The Czech Evaluation Methodology explained

2.1 The development of research in the Czech Republic

The institutionalization of research in the Czech Republic goes back a long way. Founded in 1348, Charles University was the first university established in Central Europe, and the tradition of non-university research institutes goes back to the Royal Czech Society of Sciences (1784–1952) and the Czech Academy of Science and Arts (1890–1952). However, within the centrally planned system, a specific research constellation was set up, characterized by a close entanglement of science and Communist party politics. In the former Czechoslovakia, research and teaching were separated, with research being conducted predominantly within the Academy of Sciences, established in 1953. The right to award PhD degrees belonged exclusively to the Academy of Sciences, while teaching was performed by the university system ( Lepori et al., 2009 ). Research and industry were structurally linked, with big industrial research centres being part of key branches of Czechoslovak industry.

After the fall of the Iron Curtain and the separation of the former Czechoslovakia, a strong need to reform the Czech research system emerged, and over the years the organizational landscape has changed radically, as has the governance of science ( Arnold 2011 ). Universities are now tasked with doing research, and the right to confer PhD degrees has been transferred from the Academy of Sciences to the universities. As for industrial research, many of the research centres were closed, so that links between science and industry were weakened.

With regard to the governance system, two new R&D funding organizations—one for fundamental research and the other for applied research—were introduced. More importantly, the RDI Council was set up as an expert advisory body to the government. In the 2008 Reform of the National RDI System, the RDI Council became the central body responsible for the coordination of national RDI governance. The RDI Council has 16 members representing research and industry, in addition to the Chair, who is normally the Prime Minister. The RDI Council is supported by three disciplinary advisory expert committees and two advisory commissions, one of which is the Commission for Evaluation. It is within this context of restructuring the research system, and under the direction of the RDI Council, that the Evaluation Methodology was introduced, gradually operationalized, and implemented.

2.2 The development of funding in the Czech Republic

With regard to the Czech R&D funding system, we can identify three distinct periods during the transition period ( Arnold 2011 ). The first period, from 1990 to 1998, involved the transformation of research performers and research governance organizations. Governmental budget allocations were still based on the level of expenditures in previous years. However, the Academy of Sciences instituted an internal funding reform based on peer review to assess the quality of research from 1993 onwards.

The second period, from 1998 to 2003, prepared the Czech Republic for joining the European Union. In 1998, the ‘Research Intentions’ system was introduced in order to increase strategic planning and accountability. Research Intentions were forward-looking plans explaining how research organizations intended to use institutional research funding to reach specific institutional objectives. However, they did not have the intended positive consequences and quickly fell into disrepute due to the failure to implement them effectively: ‘It was extremely bureaucratic, the outcomes were random, the peer review failed’ (interview, researcher and policy advisor, 2010). The first National R&D Policy of the Czech Republic in 2000 stressed the importance of R&D evaluation and the creation of an ‘evaluation culture’, and led to the introduction of the Evaluation Methodology in 2004.

In the third period, from 2004 onwards, the Evaluation Methodology was to be a safeguard against nepotism and corruption, a tool that was depoliticized and objective: ‘There was a group of people that was inspired by messages from other countries, and by bibliometrics, and they wanted a system that would work automatically and simply, and completely computerised’ (Czech researcher and advisor to the government). In addition, the Evaluation Methodology was expected to combat the lack of productivity of a large number of research organizations: ‘A huge table of all organisations and of their research outputs was made, covering all organisations that received state funding for R&D. It was more than a 1000 institutes and 270 of them had not produced a single output in five years’ (Czech researcher and policymaker). Moreover, the Evaluation Methodology was meant to tackle the perceived failure of past evaluation practices: ‘The current R&D support system does not allow one to distinguish between the quality of attained results, professional standards and performance of individual organizations, departments and individuals and to take advantage of these distinctions to facilitate changing the system’. 1 The 2004 document diagnosed a strong bias towards ex ante evaluation and tried to balance this with ex post evaluation, and it stressed that the development of the Czech evaluation system should be based on international experience and include foreign experts. However, this was only the first version of the Evaluation Methodology, as the content of the document was transformed through annually updated versions. 2

Funding for universities essentially comes from two separate budgets: the budget for R&D and the budget for higher education/teaching. 3 In 2010 and 2011, institutional funding for R&D made up around 20% of the total university budget. In contrast, institutional research funding at the Academy of Sciences was 52% of the total budget in 2010, as the Academy of Sciences is not tasked with teaching and therefore receives no teaching funds.

2.3 The transformation of the Evaluation Methodology

The 2004 Evaluation Methodology was very ambitious. It aimed to improve evaluation at all levels by setting standards for evaluating research projects, programmes, organizations, and policies. Designed mainly by natural scientists in the RDI Council, it also introduced the concept of ‘quantitative results evaluation’, while stressing the importance of respecting the differences between fields. R&D outputs were considered important evidence for determining institutional funding. However, at this stage, this did not imply an automatic link between research evaluation (by counting R&D outputs) and the allocation of funding.

The shift to a performance-based research funding system occurred in the context of the 2008 Reform of the National RDI System. This reform was not only about the shift to a performance-based research funding system but also sought to simplify and modernize the system and improve its governance. More specifically, it aimed to support research excellence and the application of research, improve research–industry cooperation, make public research organizations more flexible, improve the supply of human resources, and increase international collaboration in research ( Arnold et al., 2011 ).

In terms of research funding, the perceived 4 poor performance drove a decision to shift to a fully performance-based research funding system, replacing the Research Intentions ( RDI Council 2008 ). The reform specified that the amount of funding going to the ministries in charge of R&D and to the Academy of Sciences should be determined by the aggregated R&D outputs of the research organizations, as defined by the Evaluation Methodology. Hence, 2009 marks the introduction of the metrics-based evaluation of R&D outputs as a performance-based research funding system, albeit only at the level of funding bodies 5 . The 2010 Methodology formally introduced the allocation of funding at the level of organizations, enforcing and expanding the use of metrics-based evaluation of R&D.

2.4 The workings of the Evaluation Methodology

The main features of the Czech Evaluation Methodology (as of 2009) include the translation of research outputs from all types of research organizations and all types of disciplines into points, and the subsequent translation of points into money.

The Evaluation Methodology counts R&D outputs, that is, bibliometric outputs (articles, books, conference proceedings) and ‘applied outputs’ (patents, utility models, etc.). In more practical terms, the already existing, centralized R&D information system (RIV), in which research organizations are required to input their research outputs, functions as the source of information. Based on this database, a certain number of points is assigned to each output. Both eligible outputs and numbers of points assigned per output have been modified repeatedly over time, meaning that there has been a new version of the Evaluation Methodology every year.

The number of points a research organization gets determines the amount of institutional R&D funding it receives. Research outputs from the previous 5 years are used to calculate the total number of points. Each point represents a certain amount of money, although this amount depends on the total budget and fluctuates every year. Hence, the allocation of institutional R&D money is mechanistic, avoiding political decisions about how much money each research organization and each type of research organization obtains. This ties back to one of the reasons why the Evaluation Methodology was introduced, namely, mistrust following bad experiences of corruption and nepotism.
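To make the two-step logic concrete, the following is a minimal sketch: outputs are ground into points, and points are priced by dividing the available budget by the total number of points. Only the point values quoted elsewhere in this article are used (a book at 40 points, a Czech patent at 200 points, a utility model at 40 points); the organization names, output counts, budget figure, and the budget-divided-by-points pricing rule are illustrative assumptions, not the official formula.

```python
# Illustrative sketch only: point values are limited to examples quoted in the
# text (book 40, Czech patent 200, utility model 40); budget figures and
# organization names are hypothetical, and the budget/total-points division is
# an assumption about how points are priced, not the official formula.

POINTS_PER_OUTPUT = {"book": 40, "czech_patent": 200, "utility_model": 40}

def score(outputs: dict) -> int:
    """Step 1: grind outputs of different kinds into a single point total."""
    return sum(POINTS_PER_OUTPUT[kind] * count for kind, count in outputs.items())

def allocate(points_by_org: dict, budget_czk: float) -> dict:
    """Step 2: translate points into funding; the value of a point falls
    whenever the national point total grows faster than the budget."""
    czk_per_point = budget_czk / sum(points_by_org.values())
    return {org: pts * czk_per_point for org, pts in points_by_org.items()}

points = {
    "Org A": score({"book": 50, "czech_patent": 5}),     # 3,000 points
    "Org B": score({"book": 10, "utility_model": 100}),  # 4,400 points
}
print(allocate(points, budget_czk=30_000_000))  # ~4,054 CZK per point here
```

Because the point price is only fixed after all organizations have reported their outputs, no organization can know in advance how much a point will be worth, which is the source of the planning problems discussed below.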

Unusually in an international context, the Evaluation Methodology is used for all types of research organizations, irrespective of their mission and role. The list of organizations eligible for institutional funding through the Evaluation Methodology includes mostly universities, institutes of the Academy of Sciences, and applied research institutes. However, it also includes organizations that have other missions besides doing research, e.g. museums and hospitals.

Moreover, the Evaluation Methodology is used for all disciplines, irrespective of their publishing patterns and their propensity to publish. 6 This has been recognized as a problem, and successive versions of the Evaluation Methodology have tried to account for these differences through incremental changes over the years. In an attempt to differentiate among different research fields, the RDI Council decided to sort research outputs into two large groups, distinguishing between the social sciences and humanities and other fields of research. Moreover, in 2010, ‘damping factors’ for groups of disciplines were introduced, aiming to limit the size of shifts in funding among scientific fields and categories of outputs from year to year. These were incremental changes in the sense that they did not change the fundamental character of the Evaluation Methodology. In the words of an influential researcher: ‘What has changed is only some refinements of the Evaluation Methodology related to different research fields and the number of points given to a certain result were also changed from time to time, but in an arbitrary way’ (interview 2010).

2.5 The translation of research outputs into points

By way of illustration, in the Evaluation Methodology 2010, a table is used to translate research outputs into points (see Table 3 ). Research outputs that receive points range from journal articles (categories J and D) and books (category B) to applied research outputs (categories P–V), and the number of points assigned by the RDI Council to each output ranges from 4 to 500.

Table 3.

The 2010 Evaluation Methodology

(Point values per output category; reproduced as a graphic in the original publication.)

More specifically, and in light of the distinction made between the social sciences and humanities and all other fields of research, disciplines registered in the Czech National Excellence Reference Framework (NRRE) for the social sciences and humanities receive more points than disciplines not registered there, in order to alleviate the inequalities between disciplines. The NRRE includes philosophy and religion; history; archaeology, anthropology and ethnology; political science; management and administration; legal science; linguistics; literature, mass-media and audiovisuals; architecture and cultural heritage; and pedagogy and educational science. In these fields, articles published in peer-reviewed Czech scientific periodicals (category Jrec) obtain more points than in all other disciplines (‘other specialisations’). The same approach applies to ERIH (European Reference Index for the Humanities) publications (category Jneimp) and monographs (category B).

3. Evaluation of the Czech Evaluation Methodology

3.1 The Evaluation Methodology in the eyes of the Czech research community

In 2010, when we started our research, the Evaluation Methodology had become the subject of heated debate and highly visible public demonstrations: ‘If the Czech Republic doesn’t stop this nonsense, we can close down the whole of research within three years’ (interview, university administrator, 2010). Against this background, we conducted a survey to understand how the Czech research community viewed the Evaluation Methodology, surveying researchers as well as research organization leaders (rectors, deans, directors).

In general, researchers regarded the Evaluation Methodology in a rather unfavourable light. In particular, they felt that the Evaluation Methodology did ‘not at all’ or only ‘to a limited extent’ consider all aspects relevant to judging the quality of research, do justice to the differences between research institutes and between disciplines, or encourage collaboration. Or, as one rector put it: ‘One of the key weaknesses of the Evaluation Methodology is that it does not do justice to different modes of research, for instance, basic research, development or innovation’ (interview 2010). While 58% of respondents agreed to ‘a large or very large extent’ that the Czech system for institutional funding was in need of reform, only 20% agreed to ‘a large or very large extent’ that the Evaluation Methodology was the right way to effect this change.

Researchers from the Academy of Sciences were significantly less satisfied with the Evaluation Methodology than researchers from universities and other research organizations (63% ‘not at all’ satisfied, compared to 24% at universities and 29% at other research organizations). That respondents from the Academy of Sciences viewed the Evaluation Methodology less favourably than their counterparts may be due to the perception that Academy institutes fare worst under the Evaluation Methodology: 21% of respondents from universities agreed to a ‘large or very large extent’ that they would receive more institutional funding as a result of the Evaluation Methodology, while only 13% from other research organizations and 7% from Academy institutes said so.

As expected, there were differences in perception among the disciplines. More than 60% of respondents from the social sciences and humanities felt that the Evaluation Methodology did not at all do justice to the differences between the disciplines, compared to 37% from the natural and life sciences and 39% from the engineering/technical sciences. This is due to different research outputs having different values in the table, with some worth considerably less than others, for instance, a book (40 points) compared to a Czech patent (200 points). Obviously, this is to the disadvantage of the social sciences and humanities. As a consequence, many researchers from the social sciences and humanities felt that they were losing out as a result of the Evaluation Methodology. While 15% of respondents from the natural sciences, 14% from the engineering/technical sciences, and 11% from the life sciences agreed ‘to a large extent’ that the Evaluation Methodology would result in their organization receiving more institutional funding, only 2% from the social sciences and humanities did so. This effect is also seen in other performance-based research funding systems, for instance, the UK Research Assessment Exercise ( Martin and Whitley 2010 ).

3.2 Effects of the Evaluation Methodology on performance

When looking at the effects of the introduction of the PRFS on performance, it should be noted that the Czech Republic has long had a policy of increasing research expenditure in the universities, so the PRFS is not the only cause of the growth in outputs. Analyses of the Evaluation Methodology and its effects ( Fiala 2013 ; Vanecek 2013 ) show no clear relationship between the introduction of the PRFS and increasing performance. More specifically, Vanecek (2013) points out that universities’ output growth began in 2005–6, just after the introduction of the Evaluation Methodology, but that there was no further acceleration following the introduction of the PRFS. While the growth in output was greater than in a panel of seven other European countries 7 , growth in the citation impact of Czech publications lagged behind the others. Applications to the Czech patent office grew rapidly from 2003—and came mostly from public research organizations rather than companies. Czech applications to the European and US patent offices grew faster than in the comparator countries, too—though from a very low base, overtaking only those from Hungary. In addition, Fiala (2013) shows that from 2008 to 2011, Czech universities more than doubled their overall research output, an increase of 140% in scientific productivity over this period. This can be documented by the year-by-year growth in 2009, 2010, and 2011 of 65%, 30%, and 12%, respectively. Research productivity was thus still growing, but the growth was slowing down. This slowdown does not support the hypothesis that the growth in research outputs is due to the Evaluation Methodology, as one would then expect the growth to be accelerating rather than slowing down.
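As a check on this arithmetic, using only the growth rates quoted above, compounding the three annual rates gives

$(1 + 0.65)\times(1 + 0.30)\times(1 + 0.12) \approx 2.40$,

i.e. an overall increase of roughly 140% between 2008 and 2011, consistent with the statement that output more than doubled.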

3.3 Unintended effects of the Evaluation Methodology

Not surprisingly, the Evaluation Methodology did have unintended consequences. A large majority of the survey respondents pointed out that the Evaluation Methodology made researchers and research organizations behave more opportunistically (63% agreed to ‘a large extent’ or ‘a very large extent’, 21% ‘to a moderate extent’). This is in line with academic reflections on performance-based funding systems, showing that they can have large effects on collective behaviour ( Butler 2003 , 2010 ; Martin and Whitley 2010 ). For instance, evidence suggests that researchers are better placed in the UK Research Assessment Exercise if they focus on short-term, incremental, mono-disciplinary, or applied research. In addition, a transfer market in ‘star’ researchers has emerged before each Research Assessment Exercise round. And when Australia introduced a simple and mechanical system based on publication numbers in 1995, the number of publications increased (Australia’s contribution to the Science Citation Index rose by 25% through the 1990s), but researchers systematically shifted their output towards lower impact factor journals in order to achieve greater publication numbers, leading to a decline in Australia’s relative citation impact in the same period. As in the Australian case, many researchers we interviewed stressed that in the Czech Evaluation Methodology, a large number of mediocre results can weigh much more than (single) outstanding contributions.

Strategies to boost the number of points include adapting outputs to make them countable and to make them count for more. Some interviewees told us that such ‘fake’ outputs make up a substantial part of the outputs submitted. More concretely, activities of a research institute that would normally not lead to countable research outputs are adapted in order to make them count. An example would be textbooks that are published as scientific monographs. Other strategies are re-publishing older works and establishing working paper series and promoting them as if they were refereed journals. Moreover, local citations are used to boost the impact factor of journals that actually publish poor-quality articles. These journals are typically located in Eastern European countries, including the Czech Republic, and are often in the social sciences and humanities. As can be seen in Table 4 , while all types of publications increased in numbers between 2009 and 2010, those publications that are most easily manipulated—Czech reviewed journals, books, and book chapters, as well as proceedings—grew at the highest rate.

Table 4.

Number of research outputs 2006–10

Code | Type of output | 2010 (2005–9) | Growth rate (%) | 2009 (2004–8) | 2008 (2003–7) | 2007 (2002–6) | 2006 (2001–5)
Jimp | Article in WoS journal | 35,617 | | 33,056 | | 29,773 | 25,478
 | Article in SCOPUS or ERIH journal | 14,113 | 14 | 12,352 | | |
 | Article in Czech reviewed journal | 19,263 | 30 | 14,824 | | |
Jneimp | Article in non-WoS journal, total | 33,376 | 23 | 27,176 | | 47,445 | 46,581
 | Article in journal, total | 68,992 | 15 | 60,232 | 40,124 | 77,218 | 72,059
B, C | Book, chapter | 21,096 | 61 | 13,094 | 13,111 | 17,756 | 18,740
 | Book | | | | | 7,164 | 6,468
 | Chapter | | | | | 10,592 | 12,272
 | Proceedings | 7,481 | 66 | 4,501 | 2,730 | 104,340 | 83,713
 | Patent | 229 | −38 | 371 | 276 | 562 | 363
 | Utility model, industrial design | 566 | 210 | 183 | | |
 | Prototype, functional model | 2,225 | 143 | 915 | | |
 | Results implemented into legislation or standards | 183 | 215 | 58 | | |
 | Certified method | 1,325 | 393 | 269 | | |
 | Software | 1,692 | 192 | 580 | | |
 | Secret report | | −98 | 400 | | |
 | Prototype, applied method | 3,065 | −7 | 3,284 | 3,133 | 1,077 |
 | Trial operation, variety, breed | 902 | 52 | 593 | | |
 | Prototype, trial operation | 352 | −36 | 551 | | |
Z (a) | Trial operation, verified technology, variety, breed | 1,253 | 10 | 1,144 | 887 | 1,676 | 1,471
 | Specialized maps | | | 105 | | |
 | Totals | 108,116 | 28 | 84,744 | 60,263 | 202,630 | 176,350

Source: Technology Centre (Czech Academy of Sciences); own calculations.

Numbers are taken from the webpages of RVVI; some values are Technology Centre calculations based on these data.

a This category was named Technologies (T) in 2006 and may include also some other types of results.

Finally, the way the Evaluation Methodology treats applied research outputs makes them subject to opportunistic behaviour too. Various applied outputs are said to be quite easy to produce; utility models (the national ‘small patent’), for example, require only an administrative act to file. In addition, a prototype receives 40 points, regardless of whether it has been created by one person in 1 year or by a whole team over several years. As there is no quality check on applied research outputs, and as originality and functionality are not required, existing and non-functioning solutions are registered. ‘Some results earn cheap points. For instance, the output category utility models are a cheap way to earn points. One can easily create twenty such designs within a month for 40 points each, which means 800 points, multiplied by 4000 Czech crowns per point. That is very attractive for an institute!’ (interview, director of research institute, 2010). Table 4 shows that most applied research outputs (categories P to Z) have grown at a remarkable rate since their introduction.

Table 4 illustrates some effects of opportunistic behaviour, as research actors, quite understandably, want to attain the highest possible research income. There has been an increase in research outputs (or, more precisely, in the number of points assigned to them) since the introduction of the Evaluation Methodology. However, the question is whether it has increased the productivity and the quality of research. Furthermore, the unpredictability of the Evaluation Methodology intensifies the ‘hunt for points’ even more. As research organizations do not know how much money they will earn for their points in the following year, they do not know how much institutional funding they will receive. Consequently, they accumulate as many points as possible, inflating the total number of points relative to the total amount of institutional R&D funding, with the result that a point is worth less and less: ‘Within the system, everyone is now obsessed with getting points. The total number of points has grown from one year to the next. In 2009, 1 point = 10 k crowns. In 2011 a point will be worth only 4 k crowns. So it’s impossible to do serious financial planning against this background’ (interview, civil servant, 2010).

3.4 Unpredictability and instability

While in 2010 less than 50% of institutional research funding was allocated on the basis of the Evaluation Methodology, this share was supposed to grow to 100% over the coming years. However, already in 2010, the Evaluation Methodology was causing large and erratic changes in institutional funding, making institutional funding unreliable and planning a challenge: ‘It is important to have institutional money in order to fund development—the coffee grinder would lock you in … Without stability, you quickly lose the best people’ (interview, vice-rector, 2010). In our interviews, research managers explained that due to the unpredictability of institutional funding, institutes started to focus on short-term strategies to solve immediate problems. The large uncertainty about how the Evaluation Methodology would be implemented and translated into funding made more considered coping strategies next to impossible. However, in a research environment—where time horizons are inherently long—long-term planning is important for creating high-quality research results.

It is outside the scope of this article to go into depth about how different types of research organizations cope specifically with the instability, unpredictability, and discontinuity of funding, but we analysed how it played out differently for various scientific fields and disciplines. 8 As can be seen in Table 5 , between 2008 and 2010 some disciplines experienced increases in their share of institutional funding of over 200%, while others lost more than 80% of their share. Major drops in institutional funding of 50% or more occurred especially in Agricultural Science, Earth Science, Bioscience, and Industrial research, with affected (sub-)disciplines including Sociology and Demography; Thermodynamics; Soil Science; Cell Biology; Cardiovascular Diseases; Agricultural Economics; Agronomy; Livestock Rearing and Nutrition; and Computer Applications and Robotics. In contrast, other fields saw their institutional funding double or more; these (sub-)disciplines are concentrated especially in Medicine, Chemistry, and Physics & Mathematics, but also include some areas of the Humanities & Social Sciences, such as Political Sciences and Legal Sciences.

Table 5.

Impact of the Czech Evaluation Methodology on the funding distribution over specific scientific and technological fields

Code | Science field | Share of institutional funding under the 2008 system (%) | Share of institutional funding under the 2010 PRFS (%) | Change (%)
AC | Archaeology, Anthropology, Ethnology | 2.0 | 1.0 | −48
AD | Political Sciences | 0.2 | 0.6 | 222
AG | Legal Sciences | 0.4 | 1.0 | 156
AJ | Letters, Massmedia, Audiovision | 0.5 | 1.1 | 114
AK | Sport & Free-time Activities | 0.2 | 0.1 | −58
AO | Sociology, Demography | 1.5 | 0.6 | −60
 | HUMANITIES & SOCIAL SCIENCES | 13.5 | 14.2 |
BA | General Mathematics | 2.0 | 3.5 | 74
BE | Theoretical Physics | 0.3 | 1.0 | 225
BF | Elementary Particles and High Energy Physics | 0.8 | 1.8 | 130
BG | Nuclear, Atomic and Molecular Physics, Colliders | 2.0 | 1.0 | −48
BJ | Thermodynamics | 1.0 | 0.4 | −63
BM | Solid Matter Physics & Magnetism | 4.0 | 5.3 | 32
 | PHYSICS & MATHEMATICS | 16.5 | 21.3 | 29
CA | Inorganic Chemistry | 0.7 | 1.6 | 122
CB | Analytical Chemistry, Separation | 0.7 | 2.2 | 208
CC | Organic Chemistry | 2.0 | 1.8 | −9
CD | Macromolecular Chemistry | 2.2 | 1.5 | −31
CE | Biochemistry | 1.1 | 1.8 | 68
CF | Physics & Theoretical Chemistry | 1.4 | 3.5 | 148
 | CHEMISTRY | 9.8 | 14.3 | 46
DB | Geology & Mineralogy | 3.0 | 1.5 | −52
DC | Seismology, Volcanology, Earth Structure | 0.7 | 0.4 | −43
DF | Soil Science | 0.4 | 0.1 | −70
DG | Atmosphere Sciences, Meteorology | 0.6 | 0.3 | −54
DH | Mining, incl. Coal Mining | 0.1 | 0.1 | 28
 | EARTH SCIENCE | 7.4 | 6.0 | −20
EA | Cell Biology | 0.7 | 0.3 | −62
EB | Genetics & Molecular Biology | 4.0 | 3.7 | −7
EC | Immunology | 0.2 | 0.5 | 131
ED | Physiology | 2.3 | 1.1 | −54
EE | Microbiology, Virology | 2.5 | 1.9 | −24
EF | Botanics | 2.1 | 1.8 | −13
EG | Zoology | 2.1 | 1.4 | −34
 | BIOSCIENCE | 15.7 | 12.0 | −23
FA | Cardiovascular Diseases incl. Cardiothoracic Surgery | 2.4 | 1.5 | −36
FB | Endocrinology, Diabetology, Metabolism, Nutrition | 1.6 | 0.9 | −43
FI | Traumatology, Orthopaedics | 0.1 | 0.2 | 129
FL | Psychiatry, Sexology | 0.2 | 0.5 | 161
FP | Other Medical Disciplines | 0.5 | 1.1 | 116
FR | Pharmacology & Medical Chemistry | 0.3 | 0.8 | 156
 | MEDICINE | 7.9 | 12.2 | 54
GA | Agricultural Economics | 1.1 | 0.2 | −79
GC | Agronomy | 1.9 | 0.3 | −82
GF | Plant Pathology, Vermin, Weed, Plant Protection | 0.1 | 0.5 | 408
GG | Livestock Rearing | 1.0 | 0.3 | −72
GH | Livestock Nutrition | 0.5 | 0.2 | −62
GJ | Animal Vermins & Diseases, Veterinary Medicine | 1.6 | 0.7 | −56
 | AGRICULTURAL SCIENCE | 9.3 | 4.9 | −48
IN | Informatics, Computer Science | 2.5 | 2.0 | −21
 | INFORMATICS, COMPUTER SCIENCE | 2.5 | 2.0 | −21
JB | Sensors, Measurement, Regulation | 0.5 | 0.9 | 82
JC | Computer Hardware & Software | 0.4 | 1.4 | 239
JD | Computer Applications, Robotics | 4.3 | 0.6 | −85
JJ | Other Materials | 0.8 | 0.3 | −58
JK | Corrosion & Surface Treatment of Materials | 0.5 | 0.2 | −60
JL | Materials Fatigue, Friction Mechanics | 0.1 | 0.2 | 116
JM | Building Engineering | 0.2 | 0.5 | 144
JQ | Machines & Tools | 0.3 | 0.6 | 109
JT | Propulsion, Motors & Fuels | 0.1 | 0.2 | 110
 | INDUSTRY | 16.5 | 12.8 | −23

Source: Elaboration of data in the report of the ‘Project for the preparation of the Methodology to evaluate the results of research organizations and of programmes finished in 2010’, Secretariat of the Board of the RDI Council, 2010; Technology Centre (Czech Academy of Sciences).

As a result of these large fluctuations, the 2010 Evaluation Methodology introduced ‘damping factors’ for groups of disciplines, aiming to limit shifts in funding among scientific fields and categories of outputs. First, factors were set so that fluctuations in funding between science fields could not exceed 15%. Second, changes in funding between basic and applied research were not allowed to exceed 1.5%. Third, funding for the various categories of research outputs could not change by more than 150%, with the exception of Jimp (articles in journals covered by the WoS) and Jneimp (articles in journals covered by SCOPUS or ERIH and articles in Czech peer-reviewed journals).
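As an illustration of how such a cap might operate, the sketch below clamps a field’s year-to-year funding share to within ±15% of its previous share. This is an assumed, simplified reading of the first damping rule for illustration only; the official 2010 formula and its interaction with the other caps are not reproduced here, and the example shares are hypothetical.

```python
# Minimal sketch of a 'damping factor' of the kind described above:
# year-to-year changes in a field's share of funding are capped at +/-15%.
# Assumption for illustration only; not the official 2010 formula.

def damp_share(previous_share: float, raw_new_share: float, cap: float = 0.15) -> float:
    """Limit the relative change of a field's funding share to +/- cap."""
    upper = previous_share * (1 + cap)
    lower = previous_share * (1 - cap)
    return min(max(raw_new_share, lower), upper)

# Example: a field whose raw point-based share would jump from 2.0% to 3.5%
# is held at 2.3% (a 15% increase) in the damped allocation.
print(damp_share(0.020, 0.035))  # -> 0.023
```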

However, these damping factors were not sufficient to resolve the unequal treatment of disciplines. Most importantly, the research outputs that weigh the most—that is, articles published in journals covered by the WoS (Jimp journals)—were excluded from the damping factor ( Table 6 ). These articles accounted for 65% of the points achieved in 2009. Hence, the damping factor actually covered only 35% of research outputs. In addition, while differences between scientific fields were taken into account, considerable inter-field differences remained unaddressed, such as different publishing patterns among disciplines (e.g. economics and history). Both aspects limited the effectiveness of the damping factors.

Table 6.

Weight of the R&D outputs for the achievement of points—outputs of 2009

Type of result | Description | Percentage of total in 2009
Jimp | Article in journals covered by WoS | 65.1
Jneimp | Article in journals covered by SCOPUS or ERIH (non-WoS) | 5.0
 | Article in (Czech) peer-reviewed journals listed in the List of Periodicals (non-WoS) | 3.5
B (+C) | Book or chapter in book | 8.1
 | Article in proceedings (included in the ISI proceedings) | 1.6
 | Patents | 1.9
 | Research report containing secret information | 0.02
T (a) | Trials, verified technologies, prototypes etc. | 2.5
 | Trial operation, verified technology, variety, breed | 2.7
S (b) | Prototype, certified (applied) method, functional sample, authorized software, utility model, industrial design | 5.9
 | Prototype, functional model | 1.6
 | Certified methodology | 0.5
 | Utility model | 0.3
 | Specialized maps | 0.2
 | Authorized software | 1.0
 | Total | 100.0

Source: Technology Centre (Czech Academy of Sciences).

a Category valid until 2006; substituted by categories S and Z.

b Category valid in 2007 and 2008; substituted by new categories in 2009.

4. The Evaluation Methodology in international context

There are some common themes running through the rationales used to justify the introduction of performance-based research funding systems. Governments use PRFSs to (i) stimulate efficiency in research activity; (ii) allocate resources based on merit; (iii) reduce information asymmetry between supply and demand for new knowledge; (iv) inform research policies and institutional strategies; and (v) demonstrate that investment in research is effective and delivers public benefits ( Abramo, D’Angelo and Di Costa 2011 ). In the tradition of New Public Management ( Boston et al., 1996 ; Ferlie et al., 1996 ), PRFSs seek to increase accountability for the expenditure of taxpayers’ money. They are seen as a means for selectively distributing research funds, but most countries also seek to use them to drive particular behaviours. While the Czech Republic is using a PRFS to guarantee objectivity and transparency, improve the productivity and quality of research, and emphasize innovation, the shift to performance-based funding is generally also part of a broader movement to make universities more autonomous and to introduce more strategic university research management.

With more and more countries emphasizing accountability in their research systems, there is a growing body of academic literature on research evaluation and performance-based research funding systems ( Butler 2003 , 2010 ; Geuna and Martin 2003 ; Gläser and Laudel 2007 ; Rodríguez-Navarro 2009 ; Auranen and Nieminen 2010 ; Martin and Whitley 2010 ; Martin 2011 ; Hicks 2012 ; Molas-Gallart 2012 ). This literature discusses a variety of important dimensions of performance-based research funding systems, on which the Czech Evaluation Methodology systematically stands out as distinct: the unit of analysis, periodicity, output measures, cost, the weight of output indicators, the percentage of funding allocated by the performance-based research funding system, and the treatment of disciplinary differences.

Performance-based research funding systems have different units of analysis ; for example, the Spanish system measures and rewards individual performance ( Hicks 2012 ; Molas-Gallart 2012 ). The UK and Hong Kong Research Assessment Exercises (RAEs) have ‘units of assessment’ that correspond more or less to departments or research groups. However, most performance-based research funding systems operate at the level of organizations. They are generally applied to a relatively homogeneous group of organizations, most often public universities. In contrast, the Czech Evaluation Methodology is applied to a very heterogeneous group of organizations, ranging from universities, institutes of the Czech Academy of Sciences, and applied research institutes to museums and university hospitals.

In terms of the periodicity of funding allocation, the Czech Evaluation Methodology is in a minority too, as funding is allocated annually, which leads to uncertainty in the planning of research organizations. In other countries, intervals between funding allocation exercises are longer, giving research organizations planning security. In Finland, for example, while the funding formula is calculated annually as in the Czech Republic, budgets are set for a 3-year period, and a 4-year period as of 2013 ( Hölttä and Rekilä 2003 ; Mäkeläinen 2010 ). In the UK RAE, intervals have varied since the RAE was introduced in 1986. After an initial interval of 3 years (1986, 1989, 1992), it was extended to 4 and 5 years (1996 and 2001). After 2001, there was a growing sense that the RAE had become too cumbersome and costly, and after a thorough review, the RAE took place again in 2008, with the new periodicity being 6 years (2014, 2020) ( Roberts 2003 ; Bence and Oppenheimer 2005 ; Martin and Whitley 2010 ). In short, both cases show a tendency towards longer budget periods, while the Czech Evaluation Methodology allocates 1-year budgets.

Research evaluation processes tend to focus on four output measures : volume, quality, impact, and utility ( Geuna and Martin 2003 ). The earlier systems tend to be heavily based upon peer review, paying attention to all four output measures. However, as in the Czech Republic, there is a growing interest in indicator-based systems, reflecting a desire to simplify and reduce the cost of assessment. Even among indicator-based systems, cost is a major consideration. While Norway chose to establish a national system of grading journals and to require researchers to input their publications into a central database, Sweden opted to focus on ISI journals and ISI-derived indicators in order to put a system in place more quickly and economically than was the case in Norway ( Carlsson 2009 ). 9 However, metrics-based systems put a predominant weight on volume indicators, while quality and impact are more difficult to capture with quantitative indicators than with peer review. This is why the first UK Research Excellence Framework proposal, put forward in 2006 and largely metrics-based, was abandoned after extensive consultation that took into account the scientific community’s reservations. In contrast, in the Czech Republic peer review is largely viewed with scepticism for fear of corruption and nepotism, and the use of indicators is driven by a desire to depoliticize and depersonalize the evaluation and funding process.

Most indicator-based performance-based research funding systems use a variety of indicators—not only output indicators (scientific publications and research outputs) but also input indicators (e.g. recruitment of PhD students and academic staff), process indicators (e.g. seminar and conference activity, international visiting research appointments), and structure indicators (e.g. number of PhD students, research collaborations and partnerships) ( Foss Hansen 2010 ). Typically, the weight of output indicators in a funding formula is small. In Norway, research publications are only one of four indicators that drive institutional research funding, the others being PhD production, EU research funding, and research funding from the Research Council of Norway ( Sivertsen 2010 ). In Finland, scientific publications drive only 5% of formula-based institutional funding for research and researcher education. Other indicators used include number of researchers, number of doctoral degrees completed at the university, external research funding, and researcher mobility ( Hölttä and Rekilä 2003 ; Mäkeläinen 2010 ). In contrast, the Czech Evaluation Methodology uses only output indicators, with the consequence that output indicators have a weight of 100%.

Furthermore, the percentage of funding that depends on the research evaluation is a key feature of performance-based research funding systems. Performance-based research funding systems typically move small amounts of money around each time they are performed ( Sivertsen 2010 ). In terms of shares of public university funding, for example, in 2008 the performance-based research funding systems in Australia and New Zealand governed 10% of university funding, in Italy and Norway 2%, and in Sweden 12.5% ( Hicks 2012 ). Also, there typically are ‘correcting factors’ to maintain stability in the system. For example, the UK Higher Education Funding Council operates a ‘moderation fund’ to help universities cope with changes. In the Czech Republic, however, the Evaluation Methodology governs 100% of institutional research funding.

Finally, accounting for disciplinary differences has not only been a major concern in the Czech Evaluation Methodology but also in other performance-based research funding systems. There are various ways of accounting for major differences in propensity to publish in fields. For example, the UK RAE achieves this by not putting different fields in competition with each other. Instead, similar departments compete within about 60 ‘units of assessment’ so that the RAE rewards quality within disciplines but does not cause competition among disciplines. In contrast, the Czech system sets the disciplines against each other and tries to compensate for differences in publication behaviour in the Evaluation Methodology. The Evaluation Methodology has tried to take into account differences between disciplines by including different types of research outputs (e.g. not only journal articles but also books), by sorting research outputs into two large disciplinary groups (‘specialisations’) and in 2010 by introducing ‘damping factors’ among 10 disciplinary groups.

Comparing the Czech Evaluation Methodology with other performance-based research funding systems along various dimensions clearly shows that the Czech system is uniquely radical in its focus on scientific publications and other research outputs and their automatic translation into money. What is more, the Czech performance-based research funding system is applied annually to all types of research organizations, allocating 100% of institutional research funding.

5. Conclusions

In 2010, when we started our research, the Evaluation Methodology had become the subject of heated debate and even demonstrations in the streets. It turned out that the protest was not directed against accountability but against the methods used to assess research: from the rather comprehensive research assessment approach sketched out in the initial policy papers, the Evaluation Methodology turned into a purely quantitative and mechanistic tool for counting and rating publications and other research outputs (the ‘coffee mill’) in order to calculate institutional funding. This transformation occurred through continuous modifications, discussed and decided on by the RDI Council. As such, the Evaluation Methodology became a new way to safeguard accountability in the publicly funded research system, especially in institutional funding decisions, based on assessments of research outputs. In addition, the original intention to use the Evaluation Methodology as a way to depersonalize and depoliticize the research system appeared to be attained through a metrification of quality. However, the Evaluation Methodology did not rule out political decisions, but simply concealed them behind numbers: ‘Now the political decisions are hidden in the points system!’ (interview, rector, 2010).

While the Czech research community agrees on the need for more objective funding decisions and for improved research output, and sees the development of accountability through research assessment as an important step towards achieving this, the Evaluation Methodology has not fulfilled these requirements: 'If the assessment system is wrong, the allocation of money based on this system simply cannot be right' (interview rector, 2010). The Evaluation Methodology can thus be seen as a symptom of the lack of trust in the Czech system, and its operation confirms the observation that not only measurement but also trust is needed for effective governance of the research system. Instead of improving the quality of research, the Evaluation Methodology has threatened the stability and continuity of research organizations in the Czech Republic and has become a serious problem for the Czech research and innovation system. Responsible policymakers in the Czech Republic are aware of the problems related to the Evaluation Methodology and are in the process of transforming the way in which funding decisions are made. After the International Audit presented its results, with conclusions similar to those presented here, a number of emergency measures were taken (for example, decreasing the share of funding allocated through the Evaluation Methodology) to mitigate the threats to the stability and continuity of the research system.

Internationally, the use of performance-based research funding systems has had mixed consequences, but such systems do appear to bring benefits in terms of increased accountability and transparency by linking performance with research funding. However, a great deal of care has to go into the design of the system, combining quantitative and qualitative methods, in order to avoid unintended, and sometimes perverse, consequences. These exercises are typically intended not only to reward or punish performance but also to influence behaviour, in particular to increase research quality. In the best case, a PRFS can encourage research organizations to improve their research management and planning. In the Czech case, by contrast, the Evaluation Methodology causes discontinuity and instability in the system, making strategic planning next to impossible. This is a particular problem for a research and innovation system in transition, such as that of the Czech Republic: because it provides only quantitative information on research outputs, the Evaluation Methodology fails to give policymakers and research organizations the information they need to improve the quality of research and to develop their organizations.

Our analysis of the Evaluation Methodology highlights the problematic sides a performance-based research funding system can have. It serves as a negative example of a system that is purely metrics-based and focused solely on research outputs. In its unique radicalism, it is of interest to the academic community studying performance-based research funding systems and their impacts, as well as to policymakers considering the introduction of such a system. As such, it contributes to debates in the literature about the advantages and disadvantages of such systems.

Finally, our analysis of the contemporary dynamics of the research system in the Czech Republic directs the attention of research policy scholars to significant and noteworthy developments in Central and Eastern Europe. With some notable exceptions (Radosevic and Lepori 2009; Linková and Stöckelová 2012; Fiala 2013; Vanecek 2013), the research evaluation and science policy communities have until now paid scant attention to this region. This is a missed opportunity, both for the research policy community and for Central and Eastern European countries. As this article shows, research can benefit from analyses of these research systems, especially when studying the interactions between politico-economic regimes and research policy, their developments, and their dynamics. In turn, the countries under study would benefit from academic analysis by scholars from within and outside these countries, learning more about how their research systems function.

Acknowledgements

This article is based on research on the Czech Evaluation Methodology conducted in 2010 and 2011 in the context of the ‘International Audit of R&D&I in the Czech Republic’ commissioned by the Czech Ministry of Education, Youth, and Sports. The authors would like to thank the two anonymous reviewers for their valuable comments.

Notes

1. Government of the Czech Republic, Government Resolution No. 644 of 23 June 2004 on the evaluation of research and development and its results, article III.2. (downloaded on 16 June 2011 from http://www.vyzkum.cz/storage/att/4095103B3DF675FBB4E74B73874615F5/Metodika%20hodnoceni%2%200vav.pdf).
2. All versions of the Evaluation Methodology can be found at http://www.vyzkum.cz/FrontClanek.aspx?idsekce=18748. We based our analysis on English translations of the documents for 2004, 2009, and 2010.
3. For more detailed information see Arnold et al. (2011), especially pages 38–42.
4. Official statistical analyses of the performance of the Czech research and innovation system during the last decade show that Czech performance and that of other Eastern European new EU member states lag behind that of Western European countries. Nonetheless, and while there was scope for a great deal of catching up, Czech academic research output was growing substantially, while citation and productivity performance were improving faster than in most other new member states (Arnold 2011).
5. That is, ministries and the Academy of Sciences. In practice, the Evaluation Methodology was also used by research-performing organizations to distribute institutional funding internally. An exception was the Academy of Sciences, which used its own internal evaluation system to allocate research funding to its various institutes.
6. Some fields (especially in the humanities) emphasize publication in monographs or books; others (notably the basic 'hard' sciences) emphasize journals. Applied scientists and engineers often communicate via conference proceedings rather than learned journals or, especially in the case of engineers, publish in journals not listed in the Web of Science (WoS). Mathematicians write few but extensive articles; chemists produce many short articles; and so on. Performance-based research funding systems that use publication as an indicator need to take account of these major differences in 'propensity to publish' among fields.
7. Austria, Hungary, Finland, Sweden, Norway, Belgium, and Switzerland.
8. This categorization was developed by the Czech RDI Council and differs from more widely used categorizations like the FOS defined by the OECD.
9. The Swedish system is currently under review.

References

Abramo, G., D'Angelo, C. D. and Di Costa, F. (2011) 'National Research Assessment Exercises: A Comparison of Peer Review and Bibliometric Rankings', Scientometrics, 89: 929–41.

Arnold, E. (2011) International Audit of Research, Development & Innovation in the Czech Republic, Synthesis Report. Brighton: Technopolis Group.

Arnold, E., Good, B., Ohler, F., Tiefenthaler, B. and Vermeulen, N. (2011) Institutional Funding and Research Evaluation in the Czech Republic and Abroad. Brighton/Vienna: Technopolis Group.

Auranen, O. and Nieminen, M. (2010) 'University Research Funding and Publication Performance—An International Comparison', Research Policy, 39/6: 822–34.

Bence, V. and Oppenheim, C. (2005) 'The Evaluation of the UK's Research Assessment Exercise: Publications, Performance and Perceptions', Journal of Educational Administration and History, 37/2: 137–55.

Boston, J., Martin, J., Pallot, J. and Walsh, P. (1996) Public Management: The New Zealand Model. Oxford: Oxford University Press.

Butler, L. (2003) 'Explaining Australia's Increased Share of ISI Publications—The Effects of a Funding Formula Based on Publication Counts', Research Policy, 32: 143–55.

Butler, L. (2010) Impacts of Performance-Based Research Funding Systems: A Review of the Concerns and Evidence. Paris: OECD.

Carlsson, H. (2009) 'Allocation of Research Funds Using Bibliometric Indicators—Asset and Challenge to Swedish Higher Education Sector', InfoTrend, 64/4: 82–8.

Ferlie, E., Ashburner, L., Fitzgerald, L. and Pettigrew, A. (1996) New Public Management in Action. Oxford: Oxford University Press.

Fiala, D. (2013) 'Science Evaluation in the Czech Republic: The Case of Universities', Societies, 3/3: 266–79.

Foss Hansen, H. (2010) Performance Indicators Used in Performance-Based Funding Systems. Paris: OECD.

Geuna, A. and Martin, B. R. (2003) 'University Research Evaluation and Funding: An International Comparison', Minerva, 41: 277–304.

Gläser, J. and Laudel, G. (2007) 'Evaluation Without Evaluators', in Whitley, R. and Gläser, J. (eds) The Changing Governance of the Sciences: The Advent of Research Evaluation Systems, pp. 127–51. Dordrecht: Springer (Sociology of the Sciences Yearbook, 26).

Government of the Czech Republic (2004) Resolution of the Government of the Czech Republic of 23 June 2004, No. 644, on the Evaluation of Research and Development and its Results, Article III.2. <http://www.vyzkum.cz/storage/att/4095103B3DF675FBB4E74B73874615F5/Metodika%20hodnoceni%2%200vav.pdf> accessed 16 June 2011.

Hicks, D. (2012) 'Performance-based University Research Funding Systems', Research Policy, 41: 251–61.

Hölttä, S. and Rekilä, E. (2003) 'Ministerial Steering and Institutional Responses: Recent Developments of the Finnish Higher Education System', Higher Education Management and Policy, 15/1: 57–70.

Lepori, B., Masso, J., Jablecka, J., Sima, K. and Ukrainski, K. (2009) 'Comparing the Organization of Public Research Funding in Central and Eastern European Countries', Science and Public Policy, 36/9: 667–81.

Linková, M. and Stöckelová, T. (2012) 'Public Accountability and the Politicization of Science: The Peculiar Journey of Czech Research Assessment', Science and Public Policy, 39/5: 618–29.

Mäkeläinen, U. (2010) Efficiency and Effectiveness of Public Expenditure on Tertiary Education in the EU, Country Fiche Finland, Joint Report by the Economic Policy Committee (Quality of Public Finances) and the Directorate-General for Economic and Financial Affairs, European Economy Occasional Papers No. 70. Brussels: European Commission.

Martin, B. and Whitley, R. (2010) 'The UK Research Assessment Exercise: A Case of Regulatory Capture?', in Whitley, R., Gläser, J. and Engwall, L. (eds) Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation, pp. 51–80. Oxford: Oxford University Press.

Martin, B. (2011) 'The Research Excellence Framework and the "impact agenda": Are We Creating a Frankenstein Monster?', Research Evaluation, 20/3: 247–54.

Molas-Gallart, J. (2012) 'Research Governance and the Role of Evaluation: A Comparative Study', American Journal of Evaluation, 33: 577–92.

RDI Council (2008) Analysis of the Existing State of Research, Development and Innovation in the Czech Republic and a Comparison with the Situation Abroad in 2008. Prague: Office of the Government of the Czech Republic.

Radosevic, S. and Lepori, B. (2009) 'Public Research Funding Systems in Central and Eastern Europe: Between Excellence and Relevance: Introduction to Special Section', Science and Public Policy, 36/9: 659–66.

Roberts, G. (2003) Review of Research Assessment: Report to the UK Funding Bodies. <http://www.ra-review.ac.uk/reports/roberts.asp>

Rodrigues-Navarro, A. (2009) 'Sound Research, Unimportant Discoveries: Research, Universities and Formal Evaluation of Research in Spain', Journal of the American Society for Information Science and Technology, 60/9: 1845–58.

Sivertsen, G. (2010) 'A Performance Indicator Based on Complete Data for the Scientific Publication Output at Research Institutions', ISSI Newsletter, 6/1: 22–8.

Vanecek, J. (2013) 'The Effect of Performance-based Research Funding on Output of R&D Results in the Czech Republic', Scientometrics, 10: 1–25.

Author notes

These authors have contributed equally to the paper.