Research excellence in Africa: Policies, perceptions, and performance

Our article discusses various features of research excellence (RE) in Africa, framed within the context of African science granting councils (SGCs) and pan-African RE initiatives. Our survey, collecting responses from 106 researchers and research coordinators across Africa, highlights the diversity of opinions and preferences with regard to Africa-relevant dimensions of RE and related performance indicators. The results of the survey confirm that RE is a highly multidimensional concept. Our analysis shows how some of those dimensions can be operationalised into quantifiable indicators that may suit evidence-based policy discourses on research quality in Africa, as well as research performance assessments by African SGCs. Our indicator case study, dealing with the top 1 per cent most highly cited research publications, identifies several niches of international-level RE on the African continent while highlighting the role of scientific cooperation as a driving force. To gain a deeper understanding of RE in Africa, it is important to take into account the practical challenges faced by researchers and research funding agencies in aligning and reconciling socioeconomic interests with international notions of excellence and associated research performance indicators. African RE should be customised and contextualised in order to be responsive to African needs and circumstances.


What is 'research excellence'?
Research excellence (RE) has become a fashionable policy-relevant concept in the world of science funding and assessment. The meaning of RE, and its implementation in research practice and management, is influenced by political considerations and also by the varied social, cultural, and organisational environments in which researchers and scholars have to operate. Scientific performance is also affected by economic conditions and the availability of human resources. Globally, and on the African continent too, there has been increasing interest in pursuing RE, often geared towards creating an enabling environment to groom and attract high-quality researchers. Such 'top performers' are strategically identified by public sector agencies and funding organisations. With demands increasingly outstripping the supply of available resources, thus driving pleas for more selectivity in resource allocation and transparency in decision-making processes, the need for defining, identifying, and operationalising RE is becoming increasingly urgent for all stakeholders concerned.
Unfortunately, there is no agreement on what is meant by 'excellence', and there never has been. Attempts to objectify and operationalise excellence face an entangled web of fuzzy concepts and ambiguous meanings (Tijssen 2003). In trying to capture the essence of excellence, we sought guidance from one of today's many online information sources, Wikipedia, which offers the following descriptions of and critical commentary on 'excellence':
• a talent or quality which is unusually good, and so surpasses ordinary standards;
• a continuously moving target that can be pursued through actions of integrity, being a frontrunner in terms of products/services provided that are reliable and safe for the intended users, meeting all obligations and continually learning and improving in all spheres to pursue the moving target;
• frequently criticised as a buzzword that tries to convey a good impression, often without imparting any concrete information.
Another online source, the Oxford Dictionary, simply defines RE as 'the quality of being outstanding or extremely good'. 1 Stating that someone or something is 'unusually good, and so surpasses ordinary standards' has three major implications in terms of passing judgement on research proposals, activities, or scientific achievements:
1. sufficient knowledge of the subject matter to pass credible, evidence-based value judgements of research quality;
2. existence of meaningful 'ordinary standards' that enable convincing definitions or descriptions of 'unusually good';
3. widely acceptable operationalisation and quantification of 'unusually good' to identify and describe excellence in terms of 'exceptionally good performance' or other dimensions of superiority.
This article addresses these three issues from the perspective of African science in general and, more specifically, that of the Science Granting Councils (SGCs) of sub-Saharan African countries.

Analytical framework and research questions
Given the global rise of science policy initiatives to promote 'excellence', we need convincing and transferable evidence on whether and how such high levels of performance occur. From a decision-making viewpoint, one should distinguish between procedural value (i.e. transparency and fairness of decision-making processes) and evidence value (the type and weight of the evidence needed to justify a decision or recommendation). Focusing mainly on the second of these two values, this article aims to develop a clearer understanding of RE in terms of instrumental issues related to comparative value judgements across units of assessment.
As for the African science context, improving the quality of research has become a central objective of science, technology, and innovation (STI) policies in many African countries. Like anywhere else in the world, African research outputs are expected to comply with generally accepted quality criteria (convincing, competent, relevant, rigorous, and applicable). However, the scarcity of R&D (research and development) resources and the continent's socioeconomic challenges pose major obstacles to achieving such an ambition. While the continent accounts for 15.5 per cent of the world population, the money available for R&D accounts for only 1.3 per cent of global expenditures (UNESCO 2015: 26). One may argue that publishing research articles in high-impact peer-reviewed international scholarly journals is of lesser relevance than conducting locally relevant research that deals with African socioeconomic problems. Given the state of science in many African countries, the key ambition is to create sufficient research capacity. This involves developing the individual skills and facilities of scientists and scholars, but also upgrading general infrastructure, such as adequate funding frameworks and quality assessment systems that allow an efficient distribution of scarce research funding.
Moreover, there are many interpretations of 'excellence' and ideas about how it could or should be applied within the African context, often accompanied by passionate pleas for Africa-customised notions, as, for example, expressed by Ndofirepi and Cross (2016): Excellence, in our view, will only be realised if the African university adopts an African-centred paradigm, providing a space for African peoples to decipher their own experiences on their own terms, philosophies and constructions, instead of being directed through a Eurocentric lens. In their search for world-class university status, African universities are caught up in persistently trying to maintain an equilibrium between building a globally competitive university and being nationally responsive. These need not be mutually exclusive goals. After all, fundamentally, the notion of excellence is a concept which works as a grand vision, buttressing broad-minded, strategic decision-making and planning in universities.
Nonetheless, African research must also try to transcend the confines of Africa as a geographical space in order to remain globally competitive. Alignments and conflicts between these global and local objectives point to a need for closer analysis of quality concepts and performance indicators, especially with regard to defining and capturing Africa-specific dimensions of RE.
Framing the notion of RE within the context of research performance monitoring, measurement, and assessment, our article touches on three fundamental conceptual and methodological questions in terms of how science is funded and evaluated by African SGCs (Méndez 2012). This article addresses these questions from both a global and a local (African) perspective. In doing so, we focus our attention on:
• analytical frameworks that may help SGCs to assess research performance and RE;
• dimensions and sub-dimensions of RE that seem particularly relevant in African research-performing organisations (universities and non-university research centres).
Our empirical study relies on three sources of information: (1) a desktop review of existing literature on RE, (2) online surveys and interviews with informants at selected key African universities and research-performing organisations, and (3) bibliometric data on African research publications. 2 The next section introduces the public policy background, defined by a series of excellence-related initiatives that were launched in Africa over the last 10-15 years. Section 3 describes the key results of our online survey to gather African perceptions on RE. Section 4 introduces one of the scarce performance indicators currently available to gauge RE across Africa: highly cited research publications produced by African scientists and scholars. The final section presents our general conclusions and suggestions on how to contextualise and customise RE within African science.

African excellence-related policy initiatives
We live in an era where excellence-promoting initiatives have emerged as high-profile policy instruments in the world's more advanced economies (OECD 2014). Their national research systems are increasingly faced with a hypercompetitive environment for ideas, talent, and funds. The current focus on excellence provides both a driving force and a policy framework to justify large-scale, long-term funding to designated organisations that (have the capability to) engage in high-quality research. Usually the policy goal is to encourage or foster research that, ultimately, will generate positive socioeconomic impacts and benefits. Similar organisational restructuring processes are now also taking place in the African continent.
The following examples indicate that excellence is not only seen as a major marker of performance, but also as a driving force for forward-looking policies with high levels of political and organisational ambition.
Back in 2002, the Biosciences Eastern and Central Africa Network (BecA) became the first of four sub-regional hubs to be established by the New Partnership for Africa's Development (NEPAD), with support from the Canadian government. In 2005, the Science and Technology Consolidated Plan of Action 2005-2014 (CPA) constituted Africa's first attempt to articulate the continent's collective commitment to move towards an innovation-led knowledge economy. The CPA acknowledged that science and technology had to be produced and used to solve specific African problems. The pursuit of RE was emphasised in the CPA, and resulted in multiple centres of excellence being launched across Africa. Initial efforts were led by NEPAD, which identified Centres of Excellence in science and technology for Africa's sustainable development, in water and biosciences, forming new regional and sub-regional networks. Networks of Centres of Excellence were identified in Eastern, Western, Southern, and Northern Africa through calls for interest in which selected organisations had to demonstrate their sustainability and strong experience in their respective sectors.
In South Africa, the South African Research Chairs Initiative was established in 2006 to increase the number of 'excellent' black and female researchers. In the same country, the Centres of Excellence funding scheme launched in 2004 currently has a network of fifteen research centres, five of which were established in 2014 (UNESCO 2015).
In 2006 NEPAD launched the Programme for the Support and Development of Regional Centres of Excellence of the West African Economic and Monetary Union (WAEMU/UEMOA). It was implemented as a component of the strategic framework for the African Union to combat poverty and underdevelopment throughout the African continent. The first and second phases (running from 2006 to 2010 and 2012 to 2016, respectively) resulted in the identification and support of twenty Centres of Excellence, higher education and research institutions of the WAEMU/UEMOA zone. In 2009 NEPAD also initiated a programme to build regional networks of Centres of Excellence in water sciences in Southern Africa and Western Africa. This programme launched its second phase in 2016.
In 2013 the Pan-African University (PAU) was launched, supported by the African Union (AU), to offer postgraduate training and a research network of university nodes in the five AU geographic regions (Western, Eastern, Central, Southern, and Northern Africa). The PAU is receiving most support from the European Union and the African Development Bank (AfDB). It is expected that the PAU will incorporate fifty Centres of Excellence under its five academic hubs across Africa.
The African Institute for Mathematical Sciences (AIMS) is a pan-African network of centres of excellence for postgraduate education, research, and outreach in mathematical sciences established in 2003. This was followed more recently by the AIMS Next Einstein Initiative, the goal of which is to build fifteen centres of excellence across Africa by 2023. The Canadian government made an investment of US$ 20 million in 2010, through its International Development Research Centre, and numerous governments in Africa and Europe have followed suit (UNESCO 2015).
In 2014, the AfDB approved bilateral loans to develop five centres of excellence in biomedical sciences in East Africa. Also in 2014, the World Bank launched the Africa Centres of Excellence Project in collaboration with West and Central African countries. This project provides funds in the form of loans to fifteen centres selected after competitive bidding and external evaluation, in areas of agriculture, health, science, and technology. The aim of this project is to promote regional specialisation among participating universities in areas that address specific common regional development challenges. In turn, this will strengthen the capacities of these universities to deliver high-quality training and applied research, and to meet the demand for skills required for Africa's development, such as those needed in the extractive industries.
In 2017, the Alliance for Accelerating Excellence in Science in Africa (AESA) was established as a pan-African platform created by the African Academy of Sciences (AAS) and the NEPAD Agency. AESA offers an opportunity for long-term development of research leadership, scientific excellence, and innovation in an effort to fund, conduct, and facilitate research projects that will effectively target the continent's shared challenges.
African excellence-related initiatives are designed to recruit researchers, provide PhD training, support research cooperation, and improve or extend physical infrastructures (MacGregor 2015, 2016). These organisations are not only expected to create sustainable levels of high-quality research capacity, but are also meant to 'generate greater impact' and 'be role models for other higher education institutions' (MacGregor 2016). This diverse list of organisational objectives indicates that, despite its wide usage, the concept of excellence is not well understood.

Observability, quantification, and measurement
In the absence of objective and verifiable standards, qualifications such as 'unusually good', 'highest quality', or 'excellent' remain judgement calls with an inevitable degree of subjectivity. Applying meaningful and feasible standards is essential to produce transparent, valid judgements: first, to establish a baseline of performance; second, to set the cut-off points where 'good science' becomes 'excellent'. Which research quality criteria should one select? The choice of appropriate criteria and their weighting is both context- and time-dependent (Méndez 2012; Ofir et al. 2016). More importantly, assigning the RE label is the outcome of decision-making and social stratification processes within scientific communities or user communities, where excellence tends to be found at the top of an empirically observed performance distribution. Alternatively, RE may be assigned through quality stratification based on scientific community opinions, which are usually focussed on specific features of content as perceived by peers and expert reviewers. Any convincing operationalisation of RE will have to meet basic criteria of 'observability'. For research efforts, outputs, or impacts to be perceived as 'excellent', they need to be, at the very least: visible and recognisable (to others); attributable (to research contributors and participants); comparable (within a generally accepted frame of reference); and categorised in terms of quality judgement (by external experts or other observers).
The type of operationalisation also depends on the research performance model. Traditional models still predominate when it comes to assessment and evaluation, usually driven by a straightforward 'input/output' approach, where peer review and expert panels cast a judgement, weighing one against the other. A research project or programme's success is usually judged by its outputs, while its longer term impacts and benefits are likely to be ignored, or to remain unobservable, given the time window concerned. Such a crude 'one-dimensional' model does little justice to the research's intent and content, let alone provide a balanced view of RE within local African contexts. A more sophisticated approach is to decompose research objectives, processes, and outcomes into several performance-related dimensions, each with its own set of analytical sub-dimensions which are amenable to further operationalisation and assessment.
These sub-dimensions may comprise a wide variety of research-related information, ranging from the quality of allocated resources all the way to establishing the extent of longer term impacts of scientific breakthroughs on society. Rather than mechanically counting research publication outputs, RE dimensions should also try to capture research 'throughputs' and processes (such as teamwork, international research consortia, and human resources development) or 'impacts' on knowledge users outside of science. The process of research itself, and how its products are applied and appreciated, constitutes a multidimensional understanding that allows for incorporating feedback and views from stakeholders on the value of the research for (end) users.
According to Yule, the range of research quality dimensions should also include utility, accessibility, and quality of outputs geared to end users (Yule 2010: 1). Hence, rather than restricting our view to research inputs and outputs, excellence should also be sought in the research process itself and how its ultimate products are applied and appreciated in everyday African life. Capturing the various dimensions of research impact remains a central challenge both in the literature on the subject and in practice. In fact, some authors regard impact as the differentiating factor between quality and excellence (Grant et al. 2010). However, capturing research-induced change has proven to be an elusive goal, as impact may manifest in changes in understanding a particular topic and changes in attitude, or changes in behaviour either in policy or practice (Huberman 1994). In all these cases, attribution is a key problem, especially when we take into account that the returns of research may take several decades to materialise, and that multiple factors may be at play in determining a particular change in terms of attitudes, policy-making, and social behaviour. There are various modalities currently at work to assess research impact (both ex-ante and ex-post, both qualitative and quantitative), ranging from bibliometric analyses to cost-benefit analyses, surveys, and case studies. Ideally, the assessments should not only involve the opinions from primary actors (i.e. researchers, research managers, and funders), but also views from external stakeholders and potential users. Moreover, the role of users in research evaluation has started to gain visibility in the literature (Beresford 2002), although in practice user involvement remains riddled with challenges, such as overcoming power imbalances, difficulties in involving disadvantaged communities, and building the necessary capacities to become an active participant.
Although comparative judgements of research quality benefit from the traditional advantages of peer-review-based assessment processes (Abelson 1980), they are also subject to their shortcomings, which have become increasingly manifest in recent years (e.g. Ware 2008; Kelly et al. 2014). Peer review remains the most common practice in assessing RE. The major advantages of peer-review processes are well known: integral assessments of performance characteristics and determinants (considering all the activities and the career stage or trajectory of applicants and grantees) within a broader setting of background knowledge about the research area or field (emerging lines, new approaches, circumstances of the researchers, etc.). However, important limitations lurk beneath the surface, and various concerns about peer-review processes have been raised in the literature. These include subjectivity (Tijssen 2003); prejudices and conflicts of interest (Langfeldt 2006); difficulties in selecting the members of review panels (Yates 2005); conservative tendencies that may discriminate against new and 'revolutionary' research (Langfeldt 2006); and the intensity of resources required in terms of time and money (O'Gorman 2008). The great advantage of quantification and measurement is the ability to introduce some degree of objectivity into assessment procedures of research quality and to implement standardisation and transparency. Combining peer-review assessment with empirical bibliometric data is one way to counteract the subjectivity of expert review and opinions (e.g. Abramo and D'Angelo 2011).
Metrics provide comparability and transparency. Focusing on comparative measurement, such as ratings on a five-point scale, one can apply a scoring guide (referred to here as a metric), to customisable research quality dimensions. The metric contains evaluative criteria of each dimension, quality definitions for those criteria at particular levels of performance, and a scoring strategy to categorise, and perhaps quantify, the available value judgements. The metric should be a valid and meaningful reflection of the quality dimension. Customisable evaluative metrics of value judgements may include qualitative opinions and quantitative measures as they are derived from different types of information sources (such as peer-review judgements or bibliometric data). The measures are derived from the process of quantification, where a metric is a measure of an entity's research quality dimensions. (The 'entity' could be a research grant proposal, individual research activity, grantee's research output, or the citation impact of a specific research publication).
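To make this concrete, the scoring guide described above can be sketched as a small weighted rubric. This is a hypothetical illustration only: the dimension names, the weights, and the function names are invented for demonstration and are not drawn from any actual SGC metric.

```python
# Hypothetical scoring guide ('metric'): each research quality
# dimension is rated on a five-point scale and the ratings are
# combined into one weighted overall score. Dimension names and
# weights below are invented for illustration only.

RUBRIC = {
    "scientific_merit": 0.4,
    "relevance":        0.3,
    "impact":           0.3,
}

def overall_score(ratings, rubric=RUBRIC):
    """Combine per-dimension ratings (1-5) into a weighted score."""
    for dim in rubric:
        value = ratings[dim]
        if not 1 <= value <= 5:
            raise ValueError(f"rating out of range for {dim}: {value}")
    return sum(rubric[d] * ratings[d] for d in rubric)

# A reviewer's ratings for one grant proposal (invented values):
score = overall_score({"scientific_merit": 5, "relevance": 4, "impact": 3})
print(round(score, 2))  # 0.4*5 + 0.3*4 + 0.3*3 = 4.1
```

Note that the weighted sum is itself an example of the information compression discussed below: three separate judgements collapse into one number, and the choice of weights silently encodes a policy position on what matters most.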
It is important to realise that quantification processes reduce a variety of multidimensional, and sometimes ambiguous, value judgements on characteristics of research quality into one or more 'one-dimensional' scales. This information selection and compression process inevitably introduces incompleteness, inaccuracies, and bias. Moreover, it can also become a black box, determined by complex computations, which might not necessarily reflect and promote what they were supposed to.
One of the main objectives of metrics is to minimise the risk of unacceptable loss of relevant information. The metric should clarify how the (sub)dimensions of research quality are quantified or measured, and it should explain how and why top-ranking categories or scores are defined and operationalised. Provided a sufficiently large number of entities are subjected to the same quantification and measurement process, resulting in a statistically-robust performance distribution, the highest level of achievement may qualify as 'excellent'. This upper tail in a performance distribution might be research proposals that were rated '5' on a five-point scale, or research publications within the top 10 per cent of those most highly cited worldwide.
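The upper-tail definition of excellence described above can be illustrated with a short sketch: given a performance distribution, compute the cut-off for the top share and flag everything at or above it. This is an illustrative simplification, not the authors' procedure; the citation counts are invented, and a real analysis would require field- and year-normalised data.

```python
# Illustrative sketch: flagging the upper tail of a performance
# distribution as 'excellent'. Citation counts are invented for
# demonstration purposes only.

def excellence_threshold(scores, top_share=0.10):
    """Return the minimum score needed to fall within the
    top `top_share` of the distribution."""
    ranked = sorted(scores, reverse=True)
    cutoff_index = max(1, int(len(ranked) * top_share))
    return ranked[cutoff_index - 1]

citations = [0, 1, 1, 2, 3, 3, 5, 8, 13, 40]  # hypothetical counts
threshold = excellence_threshold(citations, top_share=0.10)
excellent = [c for c in citations if c >= threshold]

print(threshold)   # minimum citation count for the top 10 per cent
print(excellent)   # the publications flagged as 'excellent'
```

One design caveat: ties at the threshold can push the flagged share above the nominal 10 per cent, which is one reason real bibliometric indicators state explicitly how boundary cases are handled.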

Background and prior studies
Unfortunately, the exact meaning of the word 'excellence' is left undefined in most African policy initiatives. Implicitly, the concept of excellence can be interpreted as striving for the highest possible quality given the circumstances. As such, none of the assessments or evaluations of research quality in Africa is done in an institutional or political vacuum, or without implicit notions or perceptions of what quality or excellence entails.
Zooming in from a 'global excellence' viewpoint to those features that are of particular relevance to Africa: what do researchers in the 'global South' think of RE? More specifically, what does excellence in international development research look like, and how do different perspectives inform it? A small-scale exploratory study by the IDRC (International Development Research Centre) provides some clues (Singh et al. 2013). It is important to note that this survey-based study, conducted among 300 IDRC grantees, does not make a distinction between 'research quality' and 'RE'. However, there was wide agreement on the need to evaluate research in terms of excellence, backed by the general belief that without evaluation, poor quality research would lead to unreliable data, misleading conclusions, and incorrect approaches to critical policy formulation. Having agreed on this guiding principle, the respondents exhibited a wide range of perspectives and ideas in discussing the notion of RE. When asked to describe or define relevant dimensions of excellence, the majority of the 160 responses showed a preference for 'scientific merit' (91 per cent), 'impact and influence' (81 per cent), or 'relevance' (68 per cent). In other words, a distinction should be made between intrinsic characteristics of the research or the researcher (merit), the final effect of the research outcomes on others (impact), and a value judgement regarding the external usefulness of those outcomes (relevance). Although respondents did not provide clear definitions of either impact or influence, they emphasised the importance of research effects on practice or policy. There was much less consensus on which performance indicators should be selected to cover these three key dimensions.
In this respect, most of the 337 respondents suggested performance indicators related to publication and citation counts (136) or peer-review notions of scientific merit, like 'rigour' (59), or to 'changes at the policy and community levels' (58).
We looked at some of these aspects in detail within the context of Africa. In particular, our study explored perceptions of RE from two points of view: (1) research coordinators in SGCs and (2) active researchers. These preliminary findings, based on an online survey, reflect ideas and views found among researchers and research coordinators in various African countries.

Methodology and sampling
Data on the perceptions and practices of RE in Africa were collected via two online surveys distributed between October 2016 and February 2017. One survey targeted researchers based in African organisations, including public universities, public and private research institutions, non-profit organisations, as well as the private sector. This survey was made available to 294 researchers, in both the natural and social sciences, of whom eighty responded. 3 Respondents represented all four African regions, although North Africa had a lower number of respondents than the other regions, as indicated in Fig. 1. A larger percentage of respondents from Southern, East, and Central Africa were recipients of a research grant (either national or international) as compared to North and West Africa.
Seventy per cent of those respondents were recipients of a research grant, and the majority were the primary researchers on the grants obtained, most of which came from international sources (65 per cent). When we look at the size of the grants, responses indicate that national funders tend to fund smaller research projects (less than US$ 100,000), while international donors more frequently fund larger projects, including those of more than US$ 1 million (Fig. 2). This is an indication of the importance of international sources of funding in the African research landscape. USAID, the Bill & Melinda Gates Foundation, the World Health Organisation, the European Commission, SIDA, DANIDA, GIZ, DFID, IDRC, and IFAD are some of the funders most commonly mentioned by respondents in our survey.
The second survey targeted research coordinators working in African SGCs with knowledge of and responsibilities related to the allocation, disbursement, and evaluation of research grants in their respective countries. The survey was made available to sixty-four research coordinators, of whom twenty-six responded, representing thirteen African countries (namely Botswana, Burkina Faso, Ethiopia, Ivory Coast, Malawi, Mozambique, Rwanda, Senegal, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe).

Research granting and evaluation practices in Africa
SGCs across Africa are under growing pressure to identify high-quality proposals that qualify for the scarce funding available for research. The majority of the organisations surveyed do allocate and disburse research grants, as indicated by twenty-one (81 per cent) of the twenty-six respondents. In three other cases such mandates had been given but implementation had not started by the time of the survey. For instance, Rwanda's National Commission of Science and Technology (NCST) indicated that although grant disbursement did not constitute one of its functions in the past, a revised mandate approved in 2015 gives that function to the NCST. However, it had not yet started disbursing grants. All organisations that reported disbursing grants also indicated that they regularly evaluate the research they fund.
The SGCs identified in this study are all national agencies that fulfil national missions. Concerning research activities, the ways they define their missions differ slightly. The coordination and support of quality research that promotes social and economic progress in their respective countries tends to be common ground. Their missions often expand to advising government in matters related to research, especially in setting national priority research areas. Functions such as supporting technology transfer, the dissemination of research, and the monitoring and evaluation of research are not always explicit in their mission statements.
Granting mechanisms for the research community follow different formats, and these organisations fund various research activities, such as basic research, applied research, innovation and commercialisation of research outputs, technology transfer, research collaboration, and research dissemination. Funding mechanisms are also used to support researchers in various ways, including completing their dissertations, travel, organising events, and producing publications for academic journals.
Most of the responding organisations operate as research granting agencies, with standard processes of grant allocation, which include: launching a call, selection of eligible submissions, peer review of submissions, a decision by the funding council, signature of contracts, disbursement of research funds, and monitoring and evaluation. In this respect most of the funding is disbursed on a competitive basis. However, a portion of research is commissioned (rather than supported through competitive research grants). In these cases, SGCs approach individual researchers or specific research institutions in order to solve specific problems of national interest, or to promote new and emerging technologies. Most calls for research grant proposals, and guidelines for submission, do not make specific mention of RE, and in the cases where it is mentioned, specific parameters to measure excellence are not provided. An exception was found in Uganda, where the Uganda National Council for Science and Technology (UNCST) assessed the RE of research proposals on the basis of: (1) quality in relation to the highest international standards of scientific excellence in all of the sectors and disciplines that the proposal includes; (2) the addition of new knowledge to the field; and (3) the feasibility of the research methods proposed.
Research is regularly evaluated by the funding bodies, whether foreign or national. It is interesting to note here that our interviews with research coordinators of African SGCs highlighted the different views that international donors and national funding agencies hold about which performance parameters and indicators are relevant and applicable for measuring research quality and excellence. In this respect, it was noted that some of the indicators expected by international funding agencies are often non-existent or not applicable in an African context. One example given refers to cases where international donors evaluate research on the basis of publications in international peer-reviewed journals: African researchers often find it difficult to get their outputs published in such journals, owing to a number of obstacles, including thematic relevance and language. Publications in local magazines or local journals with broader domestic visibility often remain invisible to international funders. A similar case applies to patents, where African researchers producing significant discoveries find it difficult to translate them into patents; in these cases their discoveries go unnoticed by the international donors. It was mentioned that a simultaneous effort should be made to (1) address the obstacles preventing African researchers from fully accessing the international publication and patent systems, and (2) expand the range of indicators used by international donors in their research evaluations to ensure they capture research outputs that are relevant for the African context. Despite the difficulties in reporting back on certain indicators, it was mentioned that such indicators still hold value for African SGCs as tools for self-reflection and learning.

Perceptions of African excellence
Perceptions of RE are examined by aggregating the views of both researchers and research coordinators in SGCs, who together provide a total of 106 observations.
When asked 'what criteria would you use to describe an "excellent" researcher?' respondents place the highest weight on 'training and supporting future generations of researchers', a reflection of the severe shortage of research skills on the continent and one of the main impediments to the advancement of African scientific performance. Creating new knowledge in the field, producing work with great social impact, and being well published follow in terms of perceived relevance. It is important to note that eighteen dimensions of excellence are considered 'relevant' or 'very relevant', and only three are considered on average by respondents as 'somewhat relevant' (i.e. patenting, continuity of work, and receiving awards). This gives a strong indication that excellence is perceived largely as a multidimensional concept.
Research coordinators and programme officers at SGCs and other research funding agencies generally select those research proposals that are most likely to represent excellence and generate significant impact. The survey therefore asked 'which performance indicator(s) should the science council in your country apply to assess a research proposal?' In response, respondents qualify ten dimensions as 'relevant' or 'very relevant'. Among these, they emphasise the quality of the proposal in terms of methodology and scientific rigour above other aspects, followed by the proposal's potential for social impact and policy influence. Still valued, but with lower scores, are performance indicators of the researchers (publications and citations), as well as peer-review scores and the credentials of the researchers' organisation. These results suggest that researchers feel that too much weight is given to peer-review scores and bibliometric indicators (numbers of publications and citations) in allocating research funding.
Research funders generally want to support research that has a positive impact; therefore, excellence is also sought in ex-post evaluations. Research evaluations have become not only commonplace in many African countries but also increasingly complex. The results of the survey suggest that there is still work ahead in developing reliable ways of identifying and supporting the most impactful research. Respondents answered the question 'what performance indicator(s) should the science council in your country apply to assess the "quality" of research outputs or impacts?' The top three suggested indicators are: (1) creating awareness of societal issues, (2) direct benefits to disadvantaged communities, and (3) new technological developments. This indicates a perceived need for a closer connection between research outputs and end users (communities). However, publications in top international journals are also acknowledged as a relevant indicator of the quality of research outputs and impacts. At the bottom of the list are direct impacts on the researcher or the research team, such as moving to more prestigious positions nationally or abroad, or winning awards.
The respondents were also asked to describe an 'excellent research output' in their own words; the most common answers have to do with its ability to solve a problem, improve the lives of people (particularly those marginalised or disadvantaged), or change policy. The survey also collected concrete suggestions for new indicators. When asked 'what indicators of excellence have been somewhat overlooked in mainstream research evaluation?' many respondents highlight economic, social, and policy impacts (see Table 1). In particular, indicators of social impact are widely noted as missing by the research community. More detailed responses indicate that gender and age indicators remain disregarded by mainstream evaluation indicators. In this respect, it is suggested that the evaluation of RE should measure the extent to which the research has contributed to gender equity and the promotion of young scholars; gender equity was a more frequent concern for research coordinators in SGCs than for researchers. Measuring the utilisation of research outputs by the communities of users and primary beneficiaries also constitutes a perceived gap for both researchers and research coordinators. Research coordinators in SGCs expressed the need to better measure the commercialisation of research outputs and the impacts in terms of innovation and new technologies emerging from research activities.
Based on the survey responses, any acceptable portfolio of performance ratings or metrics should comprise a mixture of bibliometric indicators and peer-review information. Moreover, our study finds the same blend within research assessments conducted in the 'global North', which suggests the existence of generally held notions of how to identify and assess RE. This does not necessarily mean that operationalisations of RE and associated quality standards can be transposed to the global South without further contextualisation and customisation.

Challenges to achieve excellence
The results of our survey indicate that the allocation and evaluation of research funding need to be based on a more multidimensional understanding of RE in the context of Africa. SGCs are increasingly turning their attention to funding research that can demonstrate direct economic, social, and cultural impacts, by way of gender equity, technology development, commercialisation, and the creation of the next generation of researchers. However, our analysis suggests that there are still many obstacles to the attainment of RE in African science. This section captures some of these challenges from the viewpoints of both researchers and research coordinators at SGCs.
Respondents indicated that certain features of the research environments in which they work necessitate a contextualised interpretation of RE. They highlighted that:
• The time available for research is too limited. Given the shortage of qualified people, African scholars often work in environments where teaching takes priority over research. Heavy teaching loads tend to result in fewer qualified staff being assigned to research activities and less time dedicated to research. It is generally agreed among respondents that this limitation influences the interpretation of RE in Africa.
• Research infrastructure is also less developed. Limited access, outdated facilities, and scarcity pose serious barriers to achieving RE on the continent.
• The engagement and collaboration of African researchers with various stakeholders is considered a key factor in shaping the relevance and 'local excellence' of research. In this respect, several respondents highlighted that action-based and participatory research in Africa may require different parameters when it comes to identifying or measuring RE.
• The goals of the research were seen as central to the interpretation of RE, especially research of national relevance geared towards solving societal issues.
Keeping this in mind, respondents identified specific challenges, which are summarised in Table 2. According to both researchers and SGC research coordinators, the two largest obstacles to achieving RE are insufficient funding and poor research infrastructure and equipment. In this respect, it was mentioned that private sector participation in research and innovation funding remains very limited, and that further efforts should be made to strengthen public-private partnerships. Another obstacle is the shortage of qualified researchers. Due to their heavy teaching loads, most African scholars lack the time and incentives to engage actively in research. Respondents also indicated that they find it difficult to publish their research outputs in top-rated journals, owing to their thematic focus on Africa or to language barriers.

Quantitative indicators of excellence
While peer review remains a key element in the ex-ante, opinion-based, case-by-case process of selecting excellent research proposals, ex-post evaluation of research outputs has come to rely more and more on quantitative data and standardised routines.4 A pervasive shift towards quantification of research output and its impacts has ushered in a range of 'easy' bibliometrics, usually dealing with aggregates of research publications, to identify high-quality science and prolific researchers. Bibliometric data tend to provide relevant supplementary information (Bornmann 2013). Deriving measures of research quality from those publication outputs has become widely available to all stakeholders, not only through commercial software packages and evaluation tools such as Elsevier's SciVal or Thomson Reuters'5 InCites, but also through freely available web-based information on Google Scholar. Instead of going through the more costly and time-consuming process of checking the actual content of the publications themselves, these sources provide instant analysis and readily available metrics such as the H-index.

Table 1. Indicators of excellence perceived as overlooked in mainstream research evaluation, as suggested by researchers and research coordinators in SGCs: impact across disciplines, user uptake, mentorship and promotion of young researchers, innovation and commercialisation of research outputs, gender equity, ethical compliance, and alignment with national development priorities. Source: Authors' survey (November 2016 to January 2017).

However, the lack of consensus on which performance indicators are most relevant within the African context, as mentioned in the previous section, raises major issues on how to develop widely acceptable quantitative indicators for large-scale implementation. At this point in time only very few quantitative indicators seem feasible. Just one option is now readily applicable to measure excellence within an African comparative context: highly cited research publications. It is not held in high regard by many survey respondents, but it nonetheless presents an interesting case of how an established performance indicator, which has become increasingly popular in more mature economies, can in fact be upgraded and contextualised for evaluative applications within African science. The next subsection presents a customised application of this 'highly-cited' indicator.

Vinkler (2007) notes that if we accept the argument that large numbers of citations are an adequate approximation for research quality, a range of RE indicators becomes feasible: the number of highly cited research publications, the number of publications in highly cited journals, or the number of highly cited authors employed by an organisation or located within a country. The starting point is a performance distribution of those research publications, scholarly journals, or authors, in descending order of the number of citations they received from other publications. For reasons of comparability this distribution has to be appropriately normalised. Hence, the next step is to introduce the notion of the 'upper tail': usually the top 1 per cent, 5 per cent, or 10 per cent performers in a distribution.
A second essential normalisation parameter relates to the research domain: the top percentile should be defined per separate (sub)field of science to correct for domain-specific differences in citation patterns. Introducing this top percentile approach, Tijssen et al. (2002) suggested a focus on either the top 1 per cent or the top 10 per cent most highly cited research publications per field of science. The top percentile approach has become a generally accepted method for identifying features of RE in international science. Rankings of universities published by CWTS (Leiden Ranking), based on Web of Science (WoS)-indexed publications, and SCImago (SIR), based on the Scopus database, use the top 10 per cent definition as an RE indicator (Bornmann et al. 2012; Waltman et al. 2012).
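The field-normalised top-percentile approach described above can be sketched in a few lines of code. The following is a minimal illustration, not the actual CWTS or SCImago implementation; the field names and citation counts in the toy data are hypothetical.

```python
from collections import defaultdict

def top_percentile_flags(publications, percentile=1.0):
    """Flag publications in the top `percentile` per cent most highly cited
    within their own (sub)field, so that fields with different citation
    cultures are compared on equal terms.

    `publications`: list of dicts with 'id', 'field', and 'citations' keys.
    Returns the set of publication ids in the upper tail.
    """
    by_field = defaultdict(list)
    for pub in publications:
        by_field[pub["field"]].append(pub)

    excellent = set()
    for pubs in by_field.values():
        # Rank this field's publications by descending citation count
        ranked = sorted(pubs, key=lambda p: p["citations"], reverse=True)
        # Size of the upper tail for this field (at least one publication)
        cutoff = max(1, int(len(ranked) * percentile / 100))
        excellent.update(p["id"] for p in ranked[:cutoff])
    return excellent

# Hypothetical toy data: two subfields with very different citation levels
pubs = (
    [{"id": f"bio{i}", "field": "biology", "citations": i * 10} for i in range(100)]
    + [{"id": f"mat{i}", "field": "mathematics", "citations": i} for i in range(100)]
)
top1 = top_percentile_flags(pubs, percentile=1.0)
# The single most-cited paper of EACH field qualifies, despite the fields'
# very different absolute citation counts: this is the point of normalisation.
```

Without the per-field split, every 'top 1%' slot in this toy example would go to the high-citation field, which is exactly the domain bias the normalisation corrects for.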

Top 1 per cent most highly cited research publications
In this case study, we adopt a very selective definition of RE: the top 1 per cent most highly cited publications per subfield of science. Given that African countries represent a mere 2 per cent of worldwide research publications in the WoS database, we expect very few 'excellent' publications with African (co-)authors. Given the skewed distribution of global science, most of the citations to publications in this uppermost part of the upper tail will also originate from publications produced by the dominant nations in the world science system.6 In order to account for the contributions of those nations to African science, we incorporate information pertaining to research cooperation (more specifically, the institutional or geographical spread of research partners). Earlier research has shown that international research cooperation is a key contributor to African knowledge production of the kind published in scholarly journals (Tijssen 2015). The empirical findings suggest that the type of research-active university, and its orientation towards international mainstream science, heavily affect the probability of producing highly cited 'excellent' research publications.
Examining the relationship between RE and research cooperation within African science, we defined the following subcategories of research publications according to the countries listed in the author affiliation addresses of each publication:
• global cooperation: at least one of the co-authoring main organisations is located in a foreign country (may include other African countries);
• intra-Africa cooperation: at least one of the co-authoring main organisations is in another African country (excluding non-African countries);
• domestic cooperation: all co-authoring main organisations are based in the same country;
• no cooperation: no affiliate author addresses referring to another main organisation.

Table 2. Perceived obstacles to achieving RE, as reported by researchers and research coordinators in SGCs: insufficient funding; poor research infrastructure and equipment; heavy teaching loads, lack of incentives, and insufficient time for research; lack of human resources and limited human and institutional capacity; poor access to top-rated journals; weak collaborations and networks of researchers; weak collaborations with stakeholders and users; inadequate legislation; a poor ethics-based culture; lack of support to researchers; over-reliance on publications; low remuneration of researchers; own ability to generate ideas; insufficient mentorship of young researchers; lack of administrative support to researchers; insufficient gender transformation; poor monitoring and evaluation of funded projects; and lack of commercialisation of research outputs. Source: Authors' survey (November 2016 to January 2017).
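The four cooperation subcategories above amount to a simple decision rule over the organisations and countries found in a publication's affiliation addresses. The sketch below is illustrative only (it is not the authors' actual coding procedure), and the organisation names and the small set of African countries are hypothetical placeholders.

```python
def cooperation_category(affiliations, home_country, african_countries):
    """Assign one of the four cooperation subcategories described above.

    `affiliations`: set of (main organisation, country) pairs extracted
    from the author affiliation addresses of a single publication.
    """
    countries = {country for _, country in affiliations}
    foreign = countries - {home_country}
    if foreign - african_countries:
        return "global cooperation"        # at least one non-African partner country
    if foreign:
        return "intra-Africa cooperation"  # foreign partners, all African
    if len({org for org, _ in affiliations}) > 1:
        return "domestic cooperation"      # several organisations, one country
    return "no cooperation"                # a single main organisation

# Hypothetical example (organisation names are illustrative only)
AFRICAN = {"South Africa", "Mozambique", "Botswana", "Mauritius", "Kenya"}
print(cooperation_category(
    {("Univ. A", "South Africa"), ("Univ. B", "United Kingdom")},
    "South Africa", AFRICAN))
```

Note that the checks are ordered: a single non-African partner makes a paper 'global cooperation' even if African partners are also present, matching the definition above.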
For practical reasons only, we have focused our meso-level case study on a selection of universities in sub-Saharan Africa. We have assumed that the cooperation patterns within these universities are sufficiently representative of research in those countries and of SGC-funded science in Africa in general. We present results at the aggregate ('main organisation') level only. Our information source for extracting those publications is the in-house version of the WoS database at CWTS.7 The data comprise the publication years 1996-2015, and the citation count distributions are calculated across the main field worldwide. We have selected a set of large, research-intensive universities in sub-Saharan Africa that managed to produce more than 100 WoS-indexed research publications in the period 1996-2015 that were among the world's top 1 per cent most highly cited in their subfield of science.8 In other words, each of these universities produced on average at least five 'top publications' per year. These numbers are sufficiently large to address two key questions:
• are 'top 1%' publications a meaningful RE indicator in the case of African science?
• what is the effect of international research collaboration?
Table 3 presents the data for the twelve selected universities, where the 'top 1%' publications were identified at the level of subfields of science. Not only do the numbers of these publications differ by an order of magnitude, the distribution across collaboration categories also differs significantly. While the University of Cape Town (South Africa) is by far the largest in terms of quantity (440 top 1 per cent cited publications), it is not the most 'globalised' one at this level of performance; that position goes to Eduardo Mondlane University (Maputo, Mozambique). The vast majority of these top 1 per cent publications are the product of 'global cooperation' with non-African nations, irrespective of the field of science.
Hardly any top 1 per cent publications are the product of collaboration with other African countries exclusively. With the possible exceptions of the Universities of Mauritius and Botswana, none of these universities seem to have benefited much from cooperation with partners on the African continent in generating publications that are highly cited worldwide. The same applies to domestic cooperation within the same country. A sizeable share is the result of research without extramural cooperation. The University of Botswana, however, has a remarkably large share of highly cited 'no cooperation' publications, which suggests the (former) presence of niches of local excellence independent of external research partnerships.

Validity and relevance
Most of the highly cited publications result from international cooperation with countries outside Africa. Hence, the 'global top 1% most highly cited' criterion is not the most appropriate frame of reference to assess African RE on its own merit.10 One could replace 'top cited in worldwide science' with 'top cited in African science'. The percentiles in the upper tail would then become an Africa-normalised standard for RE, but still framed within a global (citation impact) context. Replacing 'global excellence' with 'African excellence' could be achieved by selecting only those highly cited African publications (i.e. those with African author addresses exclusively) that are cited exclusively or predominantly by other Africa-authored publications. These intra-Africa citation links are very likely to reflect topics of local interest and relevance.
However, this data reduction process will narrow down the scope for comparison to a minute fraction of world science. Is such a restriction justifiable and meaningful for assessing RE, a concept that has now been broadened to incorporate research that addresses specific local issues or problems ('local impact') supplementary to 'global impact'? From a technical viewpoint, such a broadening is valid as an alternative to the 'ordinary standard' mentioned in Section 1.1. But from a normative perspective it is questionable, because it undermines the 'unusually good' criterion insofar as many researchers outside Africa are likely to be equally good, or (much) better, in performance levels, given the 'weaker' Africa-restricted delineation of the most highly cited. Moreover, as the numbers of highly cited publications become smaller and annual citation counts tend to fluctuate much more, the need for additional information increases to support strong claims of excellence.

Research quality criteria and performance indicators
Do we need international quality standards and generally accepted indicators to identify and appreciate RE within Africa? Yes, we do. The 'Top 1% most highly cited' indicator is a case in point: the method enables comparisons of universities across the continent in a global frame of reference. However, it is clearly insufficient and inappropriate for all scientific research in Africa. Establishing a broad set of quality dimensions is an essential first step towards appropriate rubrics, associated standardised ratings, and meaningful metrics.
But for any process to start identifying African RE, or to contemplate how to select or design appropriate RE indicators, one needs a proper understanding of the accountability frameworks in which many African science funding agencies operate, insofar as they are expected to identify, select, and fund research of high quality, whether at the level of individuals, research projects, or large-scale programmes. Resource-poor research funders in Africa (or NGO-supported excellence initiatives) may tend to focus on incentivising 'incremental' research or application-oriented research. These are marked by lower risks of failure and more reliable returns on investment, and tend to be removed from prioritising cutting-edge research projects or programmes aimed at achieving 'world-class excellence'. Within such application-oriented contexts one needs to separate the 'merit' from the 'relevance' of sub-dimensions of RE. Where merit demonstrates that Africa-based researchers meet the same global quality standards (regardless of whether these standards are fully valid or appropriate in Africa), 'relevance' is more likely to be assessed in terms of local expectations or needs. Any Africa-centric notion of RE should go beyond international research publications and scientific impact in the academic community, to embrace the wider impacts of researchers in their local or domestic environments. Truly excellent researchers should also be assessed on their ability to create broader impacts through science-based teaching and training, fundraising, networking, mobility and cooperation, commercialisation, and innovation. Research performance evaluation in terms of 'successful outputs' and 'significant impacts' should therefore take a longer-term perspective of RE with regard to identifying possible impacts and follow-on activities of the researchers.
Research performance metrics are merely surrogates: what you measure is what you get. Even a top 1 per cent most highly cited research publication is unlikely to tick all the excellence boxes when the research was primarily designed to address local African issues or problems. The options, preferences, and choices for particular indicators should be informed by the longer-term funding strategies and short-term research portfolios of these research funders. Any meaningful notion of excellence should go beyond the production of research publications in international journals and the counting of citations to those publications from colleagues or peers in global academic communities. When judging specific African features of research grant proposals or final scientific results, supplementary information will have to come from an expanded, customised set of Africa-relevant indicators and quality standards.
In order to become useful and generally accepted, these indicators need to provide meaningful information, be convincing, and be perceived as fair. Ideally, each indicator should be 'locally relevant and Africa consistent'; this will require a critical review of data resources within Africa and of the possibilities for comparative data, either according to 'weak measurement' methods (rating categories on a scale) or 'strong measurement' methods (performance scores on a statistical distribution).
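The contrast between 'weak' and 'strong' measurements can be made concrete with a small sketch. This is illustrative only: the rating labels and category thresholds below are hypothetical, not standards proposed in this article.

```python
import bisect

def strong_score(value, reference_distribution):
    """'Strong measurement': a performance score on a statistical
    distribution; here, the percentile rank of a raw value (e.g. a
    citation count) within a reference set of comparable values."""
    ranked = sorted(reference_distribution)
    return 100.0 * bisect.bisect_left(ranked, value) / len(ranked)

def weak_rating(value, thresholds=(1, 5, 20)):
    """'Weak measurement': the same raw value mapped onto a coarse
    ordinal rating scale; the boundaries here are purely illustrative."""
    labels = ("below standard", "adequate", "good", "excellent")
    return labels[bisect.bisect_right(thresholds, value)]

citations = [0, 0, 1, 2, 3, 5, 8, 13, 40, 120]  # hypothetical citation counts
print(strong_score(40, citations))  # 80.0: this paper outranks 8 of 10 peers
print(weak_rating(40))              # 'excellent'
```

A strong measurement preserves relative position within a distribution but demands comparable reference data; a weak measurement needs only agreed category boundaries, which may be easier to establish across heterogeneous African research contexts.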

External information sources
One of the main methodological challenges, irrespective of the kind of metric or quantification, is the ability to compare and assess very different types of research. In addition to the choice of quality standards and reference values, as discussed in the case of highly cited publications (see Section 4), the domain of sciences concerned also matters in RE perceptions.
Whereas researchers from the 'hard' sciences are more likely to see certain citation impact metrics as useful, those who are active in the 'soft' sciences generally see such metrics as problematic. This is because international information sources (such as the WoS and Scopus databases) and related bibliometric indicators (e.g. the H-index) tend to serve those who publish in English-language scholarly journals and scientific conference proceedings. Many researchers in the social sciences and humanities (still) publish predominantly in local-language journals and/or books. We need more information sources to capture outputs and impacts across all fields of science, even if only partially.
Addressing the need to collect a wider range of information, including freely available open access (OA) sources, science funding agencies could introduce mandatory Google Scholar (GS) profiles for each researcher or principal investigator who submits a research grant proposal to an African SGC. Freely available web-based GS profiles may contain all publications by a researcher (from blog posts on English-language websites to books in the local language), where Google automatically tracks how often each publication is mentioned ('cited') on the internet within the global research literature. Supplementary information from service providers, such as Altmetric.com, may also help assess the impact of research in social media.
Clearly, putting such OA sources on the indicator menu should be supported by all major institutional stakeholders, including researchers. To benefit optimally from such sources, SGCs should consider establishing online platforms and publication repositories to make their SGC-funded research more available and visible to the outside world. It goes without saying that such research publications should mention SGC funds in a footnote or funding acknowledgement.
Establishing the added value of indicators based on OA sources requires a series of pilot studies in Africa to validate if and how such quantifications (either weak or strong) may indicate (sub)dimensions of RE that reflect the societal goals and daily realities of African research. It is relatively straightforward to test the possibilities of introducing mandatory GS profiles for each principal investigator who submits a research grant proposal to an African SGC.
In our bibliometric case study of highly cited publications (Section 4), we have demonstrated that RE can be identified across countries and fields of science by applying automated computational algorithms to 'big data' information sources. One could easily extend the 'top 1% most highly cited publications per field' study presented in this article to all research-intensive universities in Africa, or apply a series of top percentiles (ranging from the top 1 per cent to the top 25 per cent). The associated RE indicators may offer added value in assessments and evaluations of African scientific research, especially within a global or national comparative context and especially where international research cooperation is concerned. Our micro-level units of analysis in these case studies, either individual researchers or their published outputs, can also be used in meso-level assessments and evaluations of research programmes funded by African SGCs.

Adopting good practices
Whether or not such additional performance indicators are truly able to capture African RE in a convincing way depends on the degree to which the data and the indicator meet a series of quality criteria related to 'user acceptability':
• information value (reduce complexity and extract meaningful information);
• operational value (based on acceptable concepts, definitions, and criteria);
• analytical value (produce accurate data, measurements, and performance indicators);
• assessment value (present relevant information and knowledge for users);
• stakeholder value (create credibility among stakeholders and public confidence).
Given this multitude of interrelated criteria, there is no single best way of judging the usefulness of an indicator; it will always be context-and goal dependent.
Of course, many key characteristics of scientific research are not amenable to this kind of large-scale comparative data collection. Many dimensions of research quality and RE are difficult to disentangle and are not measurable in any convincing, systematic fashion. These methodological limitations are not unique to Africa; they apply equally to research worldwide. Nonetheless, a certain degree of measurement, and the associated quantitative indicators, would be extremely helpful to bring about greater standardisation and precision in research assessment and evaluation processes. The Leiden Manifesto for research metrics introduces general principles to guide the design and implementation of this transparency process (Hicks et al. 2015).
It is important to realise that expert opinions should always be the prime source of information for value judgements on research quality and excellence. Neither a predominantly peer-review-based evaluation system, nor one based mainly on quantitative metrics will ever be the best solution, as both have their inherent problems and their advantages. Acknowledging this opens up possibilities for mixing qualitative opinions with quantitative statistics ('narratives with numbers') where experts complement their assessments with bibliometric data, for example.
Applying a mix of qualitative information and quantitative data requires dealing with the lack of information, interpretative inconsistencies, and informational trade-offs. In this delicate balancing act between oversimplification and undue complexity, there is a clear need to consider and incorporate contextual factors. Peer review provides an avenue to address these factors, since subject experts who are (or were) active in the same research area are adept at accurately judging the quality and relevance of a given piece of research: excellence indicators cannot replace expert judgment. Such 'informed peer-review' methods do not necessarily help young researchers (without a publication track record), minorities working outside mainstream science, or those who work on problems that are very difficult to fully comprehend and assess by others.
The accumulating good practices across Africa's numerous RE initiatives (Section 2.1) may also serve as an information source for establishing quality assurance mechanisms, assessment practices, and performance benchmarks. However, understanding and operationalising the multifaceted notion of RE in Africa, from an evidence-based perspective, is mostly uncharted territory. Our survey findings suggest that a quality-driven research culture has yet to be developed, accompanied by an increase in the remuneration of researchers, gender transformation within the research landscape, and an ethical base that guides research activities. Generally held beliefs and common notions about research quality and excellence are very often dominated by the specific ways in which opinion leaders in science policy and academic disciplines tend to perceive 'good quality' research. These views, usually embedded in implicit scientific norms regarding quality standards or driven by selected showcases of successful research, may not be shared by African SGCs or be applicable in day-to-day assessment and evaluation processes.