Abstract

There is an increasing interest in demonstrating the outcomes of research for the purposes of learning, accountability, or demonstrating the value of research investments. However, assessing the impact of social science research on policy and practice is challenging. The ways in which research is taken up, used, and reused in policy and practice settings mean that linking research processes or outputs to wider changes is difficult, and timescales are hard to predict. This article proposes an empirically grounded framework for assessing the impact of research—the Research Contribution Framework. A case study approach was adopted to explore the nature of research impact and how it might be assessed. Findings were used to design, develop, and test a framework to assess the contribution of research to relevant areas of policy and practice and to articulate wider benefits. The framework has been adapted from contribution analysis, using the idea of ‘contribution’ to help explain the ways research is taken up and used to influence policy and practice. The framework allows for a focus on the roles of research users, and examines both processes and outcomes. It is argued that this approach gets round some of the common problems in assessing impact. It provides a method of linking research and knowledge exchange to wider outcomes whilst acknowledging and including the contextual factors that help or hinder research impact. It is practical, balancing robustness with feasibility. It is adaptable for a wide range of content, types of impact assessment, and purposes.

1. Introduction

There is a growing interest in methods to assess the impact of research beyond the academy, fuelled in 2009–11 by a more explicit interest in this area from funders of research in the UK (Higher Education Funding Council 2011), but also growing in Europe (League of European Research Universities 2013), the USA (Hicks 2004), and Australia (Jones et al. 2004). This reflects a concern with the roles university research plays in wider society in terms of economic growth and societal well-being (REF 2014). However, research utilization sits within the messy and complex worlds of policymaking and practice, and this presents challenges for any approach aiming to assess its effectiveness. The indirect nature of impact, with research being modified or partially used, or influencing the terms of debate over a long period, adds to these challenges: ‘the ways in which research affects society are based on complex, iterative, self-reinforcing processes, distributed unequally across research initiatives’ (Molas-Gallart 2000: 172).

This article seeks to address some of these issues by presenting an empirically based framework for assessing research impact, which develops the current literature in this area. Drawing on an impact case study which both defined and set out a framework for assessing research impact, using an evaluation approach based on contribution analysis (Mayne 2008), the article throws light on the ways that this approach can help to overcome the main challenges in assessing impact.

2. Understanding research impact

2.1 Definitions

In order to create shared understanding about the processes of research impact, some definitions are necessary.

The concepts of ‘research uptake, use, and impact’ were defined in the following way in this study:

  • Research uptake: research users have engaged with research: they have read a briefing, attended a conference or seminar, been research partners, been involved in advising and shaping the research project in some way, or engaged in some other kind of activity which means they know the research exists.

  • Research use: research users act upon research, discuss it, pass it on to others, adapt it to context, present findings, or use it to inform policy or practice developments.

  • Research impact: changes in awareness, knowledge and understanding, ideas, attitudes and perceptions, and policy and practice as a result of research (Morton 2015).

The phrase ‘research uptake, use, and impact’ sets out a process-orientated definition of research utilization and implies a pathway of engagement, activity, and change that creates impact; it is more fully described in another paper (Morton 2015). The categories help to unpick the linked processes from research to action and, as illustrated below, different types of impact can occur at the different levels of the model. They also require further definition and refinement for impact assessment, and the model below interrogates them and the links and overlaps between the categories. However, setting out these categories helps to push thinking from engagement, through use, to the key questions about what difference research has made in impact settings. This helpfully links research-related activity to wider outcomes, as is required by outcome agreements, the UK Research Excellence Framework, and other outcome-focused evaluation drivers. In addition, the following definitions are used in this article:

  • Research users: members of the public, policy, or practice communities who use research in conceptual or instrumental ways.

  • Knowledge exchange (KE): activities to increase the uptake of research.

These understandings of process are underpinned by models of research use in policy and practice drawn from a variety of fields and disciplines. KE, here, is based on understandings of research use that emphasize the importance of interactions between people and ideas, whether in policy or practice. This draws on an interactive model (Weiss 1979), where relationships and networks are the most important way in which research is shared, used, and reused (Haas 1992; Sabatier and Jenkins-Smith 1993; Kingdon 1995; Best and Holmes 2010). It also emphasizes the fact that research interacts with existing knowledge (Daley 2001), and acknowledges the role of organizational constraints and enablers of change (Williams 2011).

2.2 Framing impact studies

When conducting research impact studies, it is important to consider the purpose and scale of the evaluation, and to have a theoretical and conceptual underpinning to the approach.

Impact assessments may be conducted for various purposes (Nutley et al. 2007; Penfield et al. 2013), such as accountability, assessing value for money for the public purse, auditing evidence-based policy and practice, or, more recently, as part of measures to determine the public funding of universities (Donovan 2008) and understanding the impact of research on the environment (Bennett et al. 2012).

Different approaches to impact assessment are appropriate for programme, project, research centre, or other units of analysis. Indeed Lavis et al. suggest that, ‘Impact measures need to be fine-tuned for each target audience and both the types of decisions they face and the types of decision making environments in which they live or work’ (Lavis et al. 2003: 166).

The work reported here is based on a model of research utilization as a complex interactive process. Nutley and colleagues (2007) suggest that it is increasingly common for research impact to be seen not just as a handoff of research findings but as a process of engagement with research users ‘around multiple stages, for example, developing research questions, clarifying the research design, interpreting the research data and communicating the research implications’ (Nutley et al. 2007: 286).

In this interactive approach, the ways in which research is conducted, communicated, and taken up are as important to understanding and assessing impact as wider utilization. An interactive model also acknowledges the importance of networks and of research impact as a process involving many actors interacting and communicating over time.

This complex and interactive model of the research utilization process creates many challenges for assessing research impact. However, there is a small and growing body of work that seeks to address these challenges (Bell et al. 2011; Donovan 2011). These studies provide useful pointers for impact assessment in two ways: firstly, they help frame the core considerations behind any impact study, and secondly, they set out core approaches and methods. However, as discussed below, many existing approaches are expensive, large scale, focus on types of impact rather than processes, and are not useful for the evaluation of KE, nor for planning and reflecting.

2.3 Main methods for research impact assessment

There are three main approaches to impact assessment: forward tracking, backward tracking, and evaluation of mechanisms to increase research use. All of these can be incorporated into the framework presented here. Forward tracking studies start with research and trace forward into policy or practice settings to investigate impact (Molas-Gallart et al. 2000; Nason et al. 2007). They are more common, but rely heavily on researchers’ and research users’ own recollections of research use (Nutley et al. 2007; Donovan 2011). Backward tracking approaches analyse a policy or practice setting to explore the use and impact of research (Gabbay and le May 2004; Smith 2007; Jung and Nutley 2008). In backward tracking studies, behaviour can be examined and tracked back to research, and specific interventions such as KE activity can be assessed. However, this type of assessment raises questions such as whether it will be possible to show the impact of specific research projects or programmes (Buxton 2011) and, if so, whether this will be generalizable. Evaluations of KE initiatives aim to investigate the success of activities aiming to increase the impact of research (van Eerd et al. 2011). Focusing on KE activities themselves may only demonstrate immediate uptake and use of research, and makes it harder to identify impact over any longer time period. There are some other studies that do not take these approaches but instead focus on user communities (Molas-Gallart et al. 2000; Gabbay and le May 2004; Molas-Gallart and Tang 2011).

There is general agreement on the utility of a case study approach for assessing impact, in order to capture the context-specific and variable nature of impact (Boaz 2009). Whilst interviews were often the most useful source of information (Molas-Gallart and Tang 2011), mixed methods have been useful, particularly in dealing with different timescales and as a way of identifying research users for further follow-up (Bell et al. 2011; Phipps 2012). Sampling here was informed by other studies suggesting that impact evaluation will be more successful when based on cases where there has been some KE effort and where research has been used, rather than on a broad sampling approach (Grant et al. 2000; Donovan 2008; Bell et al. 2011).

2.4 Key challenges in assessing research impact

The key challenges to assessing the impact of research on policy and practice are issues of timing, attribution, and difficulties in addressing context (Adam et al. 2012; Graham et al. 2012; Penfield et al. 2013), explained in more detail below.

The timing of research impact studies involves a trade-off between the reliability of participants’ shorter-term recall and the longer-term nature of research impacts, which emerge over time. Proposed ways of addressing this include a combination of early documentary analysis with workshop-based follow-up after a time lag (Bell et al. 2011); immediate studies to capture short-term and local impacts, with a time lag needed in order to understand long-term and wider impacts (Meagher et al. 2008); or a gap of 1–2 years between research outputs and evaluation (Molas-Gallart and Tang 2011).

The issue of attribution is highlighted in several studies. In an interactive model of research utilization, where research findings are incorporated with existing beliefs and understandings, can it ever be reasonable to attribute change to research? Meagher et al. (2008) suggest this is further complicated by the difficulty of separating the influence of individual research projects from the researchers who conducted them, with an academic’s views and interactions with research users drawing on their body of work rather than an individual study. Consequently, and similarly to Grant et al. (2000), Spaapen and van Drooge (2011), and Kok and Schuit (2012), the concept of contribution rather than attribution has been developed here, suggesting that research is one factor amongst many influencing outcomes.

Bell and colleagues (2011) suggest that better analysis of context can help illuminate attribution issues. Indeed, understanding the context for research use emerges as important in many of the studies outlined, and has been addressed in the work presented here through consideration of the external influences on change throughout the processes of the Research Contribution Framework (RCF). Bell suggests that complexity-informed approaches which focus on networks and relationships and take account of context will be important for future studies (Bell et al. 2011). Most studies do not offer tools to assess contextual factors, with the exception of the RAPID framework (Court and Young 2004), which perhaps offers the most developed approach to this. However, the RAPID framework recommends ‘an historical, contextual, and comparative methodology, the aim of which would be to create a narrative of policy continuity and change’ (Court and Young 2004: v)—perhaps not easily done as part of the implementation of a KE strategy, but more realistic, like the Payback framework, as a funded external evaluation.

2.5 Progressing impact assessment

In order to take forward an interactive model of the research utilization process, impact assessment needs to understand both processes and outcomes. Some of the existing frameworks for assessing impact focus on categories of impact rather than processes (Hanney et al. 2004; Kuruvilla et al. 2006). The RCF presented here allows for more process-focused understanding, as well as a process-orientated method for assessing research impact. The definition and description of research uptake, research use, and research impact (Morton 2015) as distinct but connected and overlapping processes is an important element of this, creating ways of categorizing and assessing impact. The study presented here seeks to better understand the detail missing in the current models by drilling down into processes like those described in the Payback model as interaction, dissemination, policymaking, and adoption (Hanney et al. 2004).

The key challenges of timing, attribution, and context are all addressed in the framework. The exact timing of any impact assessment can be tailored to the specific circumstances of any research utilization context, allowing enough flexibility to make sensible choices about follow-up times. As well as using a logic modelling approach to understand attribution, the framework adopts the concept of contribution, acknowledging that there are many factors influencing outcomes alongside research. A contribution approach also creates tools for understanding the role of context in influencing research outcomes.

Existing frameworks for assessing impact offer useful starting points for developing approaches to assessing research impact, particularly in paying attention to the purpose of any impact evaluation. There are similarities between the approach taken here and ‘contribution mapping’ as applied to research assessment by Kok and Schuit (2012). However, the approach explained in this article differs in several ways: it is inspired by an interactive model of KE and as such interrogates engagement as a key process in achieving change; it is based on a ‘theory of change’ approach informed by the evaluation literature (see below); it focuses on research use, rather than research production processes; and it can be used prospectively, retrospectively, or in real time as a planning and assessment tool running alongside a KE strategy. In addition, the task of undertaking an evaluation based on ‘contribution mapping’, like many other approaches discussed above, is large and expensive. The RCF, like the framework of Kuruvilla et al. (2006), could be used by researchers themselves or by KE practitioners, and, following Lyall et al. (2012) and Bell et al. (2011), aims to create opportunities for ongoing learning and evaluation rather than one-off assessment. It is only through this kind of evaluation process that we can start to improve KE practice through impact assessment, which seems an important task in the current climate of pushing researchers towards creating impact.

2.6 Building in evaluation approaches

A number of concepts and approaches from the evaluation literature have been used to help frame this study. A common method for the evaluation of programmes is theory-based evaluation, also referred to as theory of change, programme theory or programme logic, results-chain, logic modelling, or impact pathway analysis (Rogers 2008). This approach requires articulation of the intentions of a programme by those involved in delivering and planning it, that is, the setting out of a ‘theory of change’: how they expect the activities of the programme to lead to the outcomes they seek to achieve. Setting out such logic creates a series of steps in programme activities and learning against which the programme can be evaluated. There are some immediate resonances with Research Councils UK’s approach to developing work on research utilization, using a ‘pathways to impact’ terminology (Research Councils UK 2011), where researchers are required to articulate the ways they believe their research may influence wider society. These logic modelling-based approaches require the development of a picture of how resources and inputs might be linked to outputs, outcomes, or impact.

Given research utilization is a complex interactive process involving many actors and in which research is used in unanticipated ways, a logic modelling approach might be a useful way of connecting research and KE activities with wider outcomes. Many practitioners and theorists involved in evaluation have recently developed approaches which seek to address complexity (e.g. Douthwaite et al. 2003; Rogers 2008; Hawe et al. 2009; Montague 2009; Forss et al. 2011; Patton 2011). Some of these approaches have developed the idea of logic modelling in order to apply it to these more complex systems—acknowledging the many external factors at play, and using a logic model as a map through the complex world. Indeed Graham et al. (2012) have developed this into a research impact assessment framework.

Setting out the key considerations for evaluation of complex systems draws attention to the need to recognize the many elements of a system interacting with each other in unpredictable ways (Patton 2011). Patton suggested a more developmental approach to evaluation where the evaluator is part of the team, helping organizations to learn. This chimes with ideas about third-generation knowledge to action developed by Best and Holmes (2010), where researchers are embedded within organizations to aid learning.

Montague (2008) argues that evaluation should acknowledge the context for programmes, and include an emphasis on the actors within a system and their capacity for change. He particularly emphasizes a planning and evaluating cycle for programme evaluation, which addresses some of the concerns outlined above by allowing learning to be built into evaluation, and performance indicators to be adapted as a programme unfolds. Montague, in particular, builds on work by Mayne (2011) in developing an approach called ‘contribution analysis’, which has also been developed by Wimbush and Beeston (2010) and others (Delahais and Toulemonde 2012; Lemire et al. 2012; Biggs et al. 2014). This contribution analysis approach has been adapted in this project as a basis for evaluating research impact: a new iteration of the approach, applied to the processes of research uptake, use, and impact. There are a number of reasons for reworking the contribution analysis approach in this way. It is adaptable and usable for different approaches to evaluation, both retrospective evaluation and evaluation as research impact unfolds. In addition, contribution analysis approaches developed by Montague (2011) point to interrogation of the processes of engagement, which resonates with the uptake element of research impact that is so essential to KE success. It allows for both process and outcome evaluation. Finally, the resonance between the processes of research use as set out by Nutley and colleagues (2007), namely changes in awareness, knowledge and understanding, and behaviours and practices, and the categories of contribution analysis bore further exploration. More recently, others have acknowledged the potential of contribution analysis for developing effective knowledge mobilization approaches (Bannister and O’Sullivan 2013).

However, contribution analysis has been developed to evaluate social programmes, and there are some clear differences between this and using it to evaluate research. Social programmes by their nature aim to effect change. Social scientists might have wide societal aims in mind when conducting research or engaging with policy or practice, but this is only one of the roles they might play. Van de Ven and Johnson (2006) and Best and Holmes (2010) argue for an engaged scholar model where social scientists are partners with other actors within the system, and where aims might be clearly agreed or might evolve as learning from research is integrated within the system. Alternatively, researchers might aim to help policy or practice but see themselves as improving the efficiency and effectiveness of policy or services, taking a neutral stance on specific changes (Weiss 1995). They might place themselves on the sidelines of public policy in order to maintain a critical stance in relation to developing agendas (Rein 1976), or work specifically to challenge or change dominant agendas, either by championing voices seen as outside policymaking or by creating debate around the nature of policy trends (Rein 1976; Weiss 1995). These different orientations to the nature of social science research have different effects on the ability to identify links between research and wider outcomes. Some researchers will struggle to identify what changes might be linked to their research, whilst others may have very specific aims in mind.

Whilst activities to engage research users usually aim to increase the utilization of research, they do not always have a specific societal or end outcome in mind either. However, it would be difficult to identify outcomes where there have been no activities to increase the uptake of research (Meagher et al. 2008). A sensible starting point may therefore be to look at KE activities, rather than the research itself, as the basis for an assessment of research impact, as these have a clearer link to ideas of research uptake and use. KE activities might not aim to create change, but they do at least aim to create audiences for research, and it is then less difficult to start to articulate why those audiences are important and what consequences result from engaging with them. Ideally, impact assessment is built in from the planning stage, when the links between how research is done, who is engaged, and its potential impact are all considered at the start. The RCF can be used in that way. However, impact assessment is often carried out retrospectively, and the example in this article looks back and tracks impact.

So it is with some reservations about links between research and KE and wider social change that the contribution analysis framework has been used and adapted as a framework for assessing research impact. However, the resonances between it and the process of research use and impact were strong enough to warrant its further exploration as a tool to assess impact.

3. Methods

The research impact assessment model described in this article was devised as part of a research study investigating the impacts of research conducted in partnership between an academic research centre (The Centre for Research on Families and Relationships) and a voluntary organization (ChildLine Scotland). The study (Morton 2012) sought to investigate both how research impact occurred and how it might be assessed. The main findings about how research impact occurred are reported elsewhere (Morton 2015). This article focuses on the questions about how research impact can be assessed. The RCF was constructed from the empirical findings about how impact occurs, by operationalizing these alongside concepts from the literature. The study itself was the basis of a four-star REF impact case study (HEFCE 2014). Since the development of the framework, it has been used to assess the impact of several ESRC investments (Genomics Network), the impact of participatory research (Morton and Flemming 2013), and the Knowledge to Action work of NHS Education for Scotland. This article uses the original study in order to present the method, and uses an example from that study to illustrate how research impact can be assessed using the RCF.

The following research questions and sub-questions in relation to research impact were investigated in the study.

Research questions:

How can research impact be assessed?

  • A: Can research impact be captured in robust ways?

  • B: What are the appropriate methods for assessing impact in local and devolved policy contexts?

  • C: What data should be collected to assess research impact?

  • D: How and when should data be collected?

  • E: What is the effect of assessing impact at different times?

  • F: Are different methods required for assessing short-term and long-term impact?

  • G: Who might be the appropriate person to assess impact?

The impact assessment focussed on the impacts of research carried out by a partnership. This partnership linked the Centre for Research on Families and Relationships (CRFR) and ChildLine Scotland (CLS), a voluntary organization running a children’s telephone helpline that receives thousands of calls a year. Two main projects were conducted: the first sought to understand the concerns of children who called the helpline about other members of their families, mainly parents, and covered topics such as drugs and alcohol, mental and physical health, and domestic abuse (Project One: ‘significant others’)—with the main impact being on alcohol policy. The second project looked at children’s calls about sexual health issues (Project Two: ‘sexual health’)—where the main impacts were on sex education practice. Extensive KE activities were carried out to ensure findings from both projects engaged with relevant non-academic stakeholders.

The impact assessment study, taking a case study approach, investigated the processes that led to research having an impact on policy or practice. Studies were selected where there was a good chance of finding impact in order to understand these processes of impact rather than whether or not research had an impact, reflecting sampling approaches across the literature (Grant et al. 2000; Donovan 2008; Meagher et al. 2008). Methods included both forward tracking from research users to impact, and backward tracking from policy and practice contexts to understand how research had been used. A fuller description of these methods is published in a previous paper (Morton 2015).

Through tracking research users and investigating policy, many ways in which the research had been used and reused were identified. The impact assessment found many examples of research from the CRFR/CLS partnership being used in several sectors. However, there were only three examples of clear links between the research and wider change. These were used to build the RCF set out in this article. The example of impact on sex education practice is used here as a worked example to illustrate the framework. In this example, the framework was used to understand, in retrospect and over different timescales, the contribution the research had made to policy and practice change. The framework can also be used at other times, prospectively and in real time.

Building on findings from previous studies of research impact (Crewe and Young 2002; Hanney et al. 2003; Boaz et al. 2009; Molas-Gallart and Tang 2011), the case study combined both forward (from research to impact) and backward (from policy/practice to research) tracking approaches to uncover research use and impact.

A main topic guide was developed for all data collection methods, and a purposive or theoretical sample was necessary to address the research questions within this design (Silverman 2001; Bryman 2004). In order to identify impacts wherever they occurred, a semi-structured approach to interviews was taken (Knight 2002), in three phases: starting with key interviewees (the project partners); moving to others who had taken part in activities related to the research (conferences, seminars, etc.) or were potential research users identified through backward tracking, e.g. practitioners involved in KE and dissemination activities, and voluntary sector and local authority personnel who had used the research; and then returning to key interviewees to check validity and explore the accounts from other research users.

3.1 Analytical methods and building an impact framework

As an exploratory study seeking better understandings of the processes of research impact on policy and practice, it was necessary to learn from and build on each element of data collection in order to trace and understand impact. Trialling methods and rethinking the processes of impact were carried out in an iterative way, allowing learning from the data and diversion to areas that emerged as significant, as is common in qualitative approaches (Eisenhardt 2002; Richards 2005). The framework to assess impact was built once the main analytical work on how impact occurred had been undertaken.

Initial, broad, conceptual coding was carried out on three bases: the research questions, key impact concepts from the literature, and other issues as they emerged from the data. As ideas were developed and a framework was built, further coding reflected this and allowed for linkage between different data sources. As themes emerged, they were tested against the data across sources to identify inconsistencies and counterarguments. This led to the differentiation of research uptake, use, and impact as outlined above, and the development of categories across the impact framework.

4. Findings

The ways in which research from the CRFR/CLS partnership had come to have an impact were mapped onto adapted contribution analysis categories in order to set out the processes and learning that led to impact. As described earlier, the contribution analysis categories had resonance with the concepts set out on the conceptual-instrumental research use spectrum (Nutley et al. 2007). This was used as the basis for constructing the RCF, by mapping the different processes of research uptake, use, and impact onto this framework. In this section, the example of impacts on sex education practice is set out. (A further example is available in Morton and Flemming (2013).)

In this example, the ways that the research had been taken up by relevant research users, their reactions, and changes in awareness, knowledge, and skills are explained through the RCF. How this contributes to policy and practice change leading to wider outcomes can also be set out in this model. To create categories for impact assessment, it was necessary to further define the specific processes underpinning research uptake, use, and impact, and how they are linked (Fig. 1).

Figure 1. Basic pathway to impact.

Figure 2. Pathway to impact on sexual health.

The process for applying the RCF is: (1) conduct contextual analysis; (2) develop a logic model for the unit of assessment identified by the participants (project, programme, or centre); (3) assess assumptions and risks; (4) identify possible evidence and evidence gaps; and (5) assemble a research contribution story or report based on the work. In this case, it was carried out retrospectively, but the aim was to develop a pragmatic tool for planning and carrying out impact-generating activities, and for assessing their success. Questions to guide the development of a pathway are set out in Table 1.

Table 1.

Questions to guide the development of a RCF model

Questions to guide pathway creation for RCF
Final outcomes
From contextual analysis, where might research be/have been used? Are/were there clear needs for evidence? What change might it/has it contribute/d to? What other factors were influencing the agenda (social, political, environmental, and economic)?
Behaviours and practices
What were the practices and behaviours of individuals and groups? How might/did research influence these?
Capacity/knowledge/skills
What are/were the policy/practice implications of the research and how do/did these relate to the potential for change? Can clear needs be identified?
What capacity do the target audiences have for using research?
Awareness/reaction
What was the aim in terms of the user’s awareness of the issues addressed? How will/did they react to the work?
Engagement/involvement
Are/were there problems or gaps in the participation, engagement, or involvement of research users who are key to the area of interest?
How will this be assessed?
Activities/outputs
What KE activities will be/were carried out and how do these address the issues identified in the research and contextual analysis?
Inputs
What level of financial, human, and technical resources are/was available? What is/was achievable within these?

Using these questions works best when they are addressed by a team, preferably with input from non-academic advisors or partners with a sound knowledge of the context for research impact. Whilst it is easy to identify the broad domains in which research may contribute, and research activities and inputs may be well defined, the process of identifying the steps in between takes time and is usually not linear, but requires working back and forth across the categories until everyone is satisfied with the picture created. It is important to note that a framework can be created at the start of a project and used to guide impact-generating activities and tracking (as is currently the case for several large research council investments the author is working with), but it should remain a working document, adapted as the context changes, feedback from early activities is collected and analysed, and evidence starts to assemble. This process of setting out how research might link to change is a theory-based approach, drawing on evaluation methodology to make research impact-generating activities more focused and more clearly addressed to the system in which they are most likely to influence change.
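To illustrate this working-document character, the following sketch (in Python, purely illustrative; the category labels follow Table 1, while all class and field names are assumptions introduced here rather than part of the published framework) shows one way a team might record a draft pathway so that steps can be revisited and revised as the context changes and feedback arrives.

    from dataclasses import dataclass, field
    from typing import List

    # RCF pathway categories, ordered from inputs through to final outcomes (after Table 1).
    RCF_CATEGORIES = [
        "Inputs",
        "Activities/outputs",
        "Engagement/involvement",
        "Awareness/reaction",
        "Capacity/knowledge/skills",
        "Behaviours and practices",
        "Final outcomes",
    ]

    @dataclass
    class PathwayStep:
        category: str                                        # one of RCF_CATEGORIES
        description: str                                     # what is expected to happen at this step
        revisions: List[str] = field(default_factory=list)   # notes added as the pathway is reworked

    @dataclass
    class RCFPathway:
        unit_of_assessment: str                              # project, programme, or centre
        context: str                                         # summary of the contextual analysis
        steps: List[PathwayStep] = field(default_factory=list)

        def step(self, category: str) -> PathwayStep:
            """Return the step for a category, creating a blank one if not yet drafted."""
            for s in self.steps:
                if s.category == category:
                    return s
            new_step = PathwayStep(category=category, description="")
            self.steps.append(new_step)
            return new_step

    # Example: drafting and later revising the engagement step for the sexual health project.
    pathway = RCFPathway(unit_of_assessment="Project Two: sexual health",
                         context="Sexual health policy and practice agenda")
    engagement = pathway.step("Engagement/involvement")
    engagement.description = "Relevant policy, practice, and public audiences engage with briefings and events"
    engagement.revisions.append("Revised after feedback from the launch conference")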

Having developed a pathway using the questions above as a guide, the risks and assumptions for each step in the chain can be analysed in order to start to identify suitable indicators for the processes and outcomes set out in the model. Common risks and assumptions draw on the existing research on what helps and hinders research utilization to inform the analysis (Table 2) (Mitton et al. 2007; Nutley et al. 2007; Ward et al. 2009). For example, research is more likely to be used if it is timely and relevant to users’ needs, and if it fits with their current thinking. Analysing and discussing risks and assumptions helps to identify indicators and potential ways to demonstrate impact.

Table 2.

Common assumptions and risks with potential indicators

[Table presented graphically in the original and not reproduced here.]

Note: assumptions and risks relate to the move from one step in the model to the next, so there is no entry for the final step.

Indicators for assessing the impact of research have been generalized from the same literature on what helps and hinders research uptake (Mitton et al. 2007; Nutley et al. 2007; Ward et al. 2009). This provides a starting point for indicator development, to which context-specific indicators can be added. Table 3 develops these typical indicators, building on more recent work by Montague (2011).

Table 3.

Indicators for KE processes

[Table presented graphically in the original and not reproduced here.]
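Continuing the illustrative sketch above (again in Python, with all names assumptions rather than part of the published framework), assumptions, risks, and indicators of the kind set out in Tables 2 and 3 can be attached to each step of a pathway so that this analysis stays linked to the model.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class StepAssessment:
        """Risks, assumptions, and indicators for one RCF pathway step (illustrative only)."""
        category: str
        assumptions: List[str] = field(default_factory=list)  # what must hold for this step to lead to the next
        risks: List[str] = field(default_factory=list)         # what could prevent the step leading to the next
        indicators: List[str] = field(default_factory=list)    # how progress at this step could be evidenced

    # Example drawing on the kinds of assumptions discussed in the text
    # (research is more likely to be used if it is timely, relevant, and fits current thinking).
    engagement_assessment = StepAssessment(
        category="Engagement/involvement",
        assumptions=["Findings are timely and relevant to practitioners' needs"],
        risks=["Key practitioner networks are not reached by the planned activities"],
        indicators=["Attendance and feedback at the launch conference",
                    "Requests for briefings from practice networks"],
    )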

Once a pathway to impact has been set out across the RCF, assessed for risks and assumptions, and indicators have been set, evidence can be sought and assembled along the model. As stated earlier, using this approach requires evidence gathering and an assessment of gaps, followed by further evidence gathering to fill those gaps. In the case of the sexual health example, Table 4 shows the data gathered to support the impact claims made in the model.

The categories in the framework create a clear data collection matrix through which to tell a convincing story of the impact of the work. This can be used to report in different ways for different purposes—for example, a plain language ‘story’ was created to provide feedback to CLS staff and volunteers, in contrast to a REF 2014 impact case study which utilized the same information.

Table 4.

Evidence of impact on sexual health

Impact: final outcomes and contribution
Pathway: Children’s concerns about sexual health issues are addressed, e.g. more information at earlier ages, more discussion-based sex education, better support from parents, teachers, and youth workers.
Evidence: Parents’ behaviour change as evidenced by service-level evaluation (n = 150); research users’ views on teachers’ behaviour change (n = 800).

Impact: changes in behaviour and practices
Pathway: Parents, teachers, and youth workers deliver better support and education on sexual health matters to children and young people.
Evidence: Fit with the sexual health policy and practice agenda (policy analysis); training courses delivered differently; use of research in development of a sexual health strategy at health board level; citation in a policy document.

Use: capacity, knowledge, and understanding
Pathway: Parents, teachers, and other workers change their views about children’s sexual health issues; knowledge, ability, and skill levels developed.
Evidence: Evidence of research use by health improvement professionals, the partner agency, and sexual health practitioners.

Use: awareness/reaction
Pathway: Practitioners recognize the usefulness of the research for practice, discuss it, and rework it for use in training; picked up by other practitioners.
Evidence: Research users’ self-reported uptake of the research; evaluation from the launch conference; media interest (three press articles, one television item).

Uptake: engagement/involvement
Pathway: Relevant policy, practice, and public audiences are engaged in activities and read the briefings.
Evidence: Launch conference (114 attendees); letters to directors of social work in local authorities (32); letters to directors of education in local authorities (32); presentation to network conference (wish) (over 100 attendees, exact number unknown).

Uptake: activities and outputs
Pathway: Research findings, briefing, seminar, press release, activities with young people, presentation to the sexual health strategy group.
Evidence: Briefing given to 6,800 teachers, 500 printed copies, web downloads unknown; attendance at seminar high and cross-sector; uptake in media.

Inputs
Pathway: Conduct joint research and related activities to communicate the findings to a range of relevant audiences who might use them.
Evidence: Research report, planned activities.

In this example, data were assembled retrospectively. If using the RCF to plan, monitor, and evaluate, then short-, medium-, and longer-term goals could be set.
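As a final illustration, and continuing the same hypothetical sketch (in Python; the report format and function name are assumptions, not part of the published framework), evidence items logged against each pathway category can be pulled together into a simple contribution report in the spirit of Table 4.

    from collections import defaultdict
    from typing import Dict, List

    def contribution_report(evidence: Dict[str, List[str]], order: List[str]) -> str:
        """Assemble a plain-text contribution story from evidence logged per RCF category."""
        lines = []
        for category in order:
            lines.append(category)
            for item in evidence.get(category, ["(evidence gap - further data collection needed)"]):
                lines.append("  - " + item)
        return "\n".join(lines)

    # Illustrative evidence matrix for the sexual health example (items paraphrased from Table 4).
    evidence = defaultdict(list)
    evidence["Engagement/involvement"].append("Launch conference (114 attendees)")
    evidence["Awareness/reaction"].append("Research users' self-reported uptake of the research")
    evidence["Behaviours and practices"].append("Use of research in a health board sexual health strategy")
    evidence["Final outcomes"].append("Service-level evaluation of parents' behaviour change (n = 150)")

    order = ["Inputs", "Activities/outputs", "Engagement/involvement", "Awareness/reaction",
             "Capacity/knowledge/skills", "Behaviours and practices", "Final outcomes"]
    print(contribution_report(evidence, order))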


5. Discussion

This article sets out a new theory-based approach to evidencing the impact of research on policy/practice or society by operationalizing concepts from both evaluation and research utilization literature, and the findings of an empirical study of how research impact occurs, into a pragmatic framework. It emphasizes key drivers of research uptake and use, such as the key role played by networks of research users and the importance of successful engagement with them. The framework requires an examination of the learning that underpins changes in practice or behaviour which are essential for research impact to occur. It allows for contextual analysis over different time frames and creates a logical argument for the contribution of research to policy or practice in a way that acknowledges all of the other factors influencing wider outcomes.

This discussion section sets out how this new framework addresses some of the key issues and challenges of research impact assessment: those of attribution, complexity, process assessment, and timing. It also considers the limitations of the framework.

The approach of contribution analysis, suggesting that we can only contribute to outcomes rather than cause them, goes some way towards addressing the challenges of attribution raised in impact assessment (Grant et al. 2000; Boaz et al. 2009; Spaapen and van Drooge 2011). Mayne (2008), in his description and development of contribution analysis, suggests that in assessing the challenges to a contribution story, some acknowledgement of other factors that might have caused the change is required, along with some strengthening of the contribution story to address the issue of the counterfactual—what would have happened without the contribution of the programme. In practice, he gives this little attention, and through researching the literature and conversations with other practitioners utilizing a contribution analysis approach, I have not found a robust way of carrying out this task, although the recent work by CGIAR as analysed by Bell and colleagues (2011) is promising. Whilst the idea of contribution is helpful, and the RCF allows for a clear description and analysis of how research has contributed to outcomes, what might have happened if the research had not been included in the change process remains, to some extent, open to speculation. The analysis of risks and assumptions can help to include external factors in the model, and to test to what extent they have been influential, allowing clearer contribution claims to be made.

Patton (2011) argues that complexity-sensitive developmental evaluation approaches make ideas about the counterfactual meaningless, because there are far too many variables in a complex system, and the nature of dynamic interactions emerging into various patterns of activity means that it is difficult to conceptualize counterfactuals in a useful way. Indeed, the previous discussion about the ways in which research is taken up by interested users, used, and reworked within specific contexts, leading to previously unforeseen outcomes, means that the idea of a counterfactual becomes difficult. Approaches that have been developed rely on asking actors for their assessment of what would have happened without the contribution of research. If research is one factor which leads to specific actions, but only within contexts where other drivers mean it is useful and relevant, then the idea of being able to assess what would have happened without the research becomes less meaningful and more speculative. Research is also produced within the system, with funders and drivers of research topics coming from government, research users, and academics within that system, meaning that the topics funded will often be linked to existing defined issues and problems. In the example on which this framework is based, a close relationship with research users meant that the research was funded on the basis of its links with the system and with identified problems and issues. Untangling this from the ways in which research is used is also difficult and complex.

The complexity arguments suggest that we can sidestep issues of the counterfactual by arguing that it is irrelevant in a complex system. However, that does not mean that the issue disappears. Mayne’s (2001) approach is to utilize logic models to create a reasonable claim about the influence or contribution of an initiative, with the robustness of the evidence for the steps in each logic model being used to judge the validity of the claim. Certainly the language of contribution is helpful in that it acknowledges research as having a role rather than a causal effect. However, the starting point is very much to show how research contributed rather than to assess whether it contributed or not.

However, this approach to research assessment could also usefully illustrate why research had not achieved expected uptake, use, or impact. If research users found research challenging, or if it was counter to current policy trends, this approach could utilize contextual analysis and feedback from research users to show that a lack of impact was not related to the research itself but to the context for research use. This approach might also be used to suggest impact over a longer time frame, or to realign activities to address the contextual factors, e.g. through working with the media to raise debate about an issue or to create a challenge to a dominant policy direction.

Both Montague (2011) and Mayne (2008) have been interested in developing contribution analysis as a systems tool for evaluation. In seeking to integrate systems thinking into a logic modelling approach, Montague acknowledged that developing complex versions of logic models, which take account of the wider environment and include representations of the multiple flows and feedback loops, means that such models become too complex to be helpful for evaluators (Montague 2011). Figure 3 shows this more complex picture for the pathway to impact on sex education based on the example set out earlier. The RCF offers a simplified version of the processes through which research was used, with feedback loops and the ways in which different actors were involved at different times removed. Figure 3 is an annotated version of this pathway, illustrating the more complex picture behind the impact story.

Figure 3. Annotated pathway to impact.

Figure 3 shows how research might be used by different actors at different times and that the processes set out on the pathway are often cyclical. For example, various different timescales of engagement, learning, and change in practice are inherent in this model. Engagement with research users during the research process shaped the analysis. Practitioners talk to others about the research, and it is included in training packs meaning re-engagement with research outputs. During these processes, research might be used and reused and the context may change, creating new opportunities for research users to engage with it.

The RCF overcomes some of the problems with the categorization of types of impact, often built into other approaches, by emphasizing processes instead. Rather than looking, for example, at types of outputs or benefits to specific sectors, it focuses on the ways research is taken up and used, and allows these to define the contributions research has made. The language of contribution overcomes some of the problems of attribution and provides a more practical way of looking at how research interacts with other drivers to create change. Although it cannot solve all of the challenges of assessing research impact, it goes some way to addressing the main ones in a practical way that can be taken up by researchers or KE professionals. Importantly, although it could be used by external evaluators, it is adaptable enough to be used by those involved in research production or KE activities to plan, reflect, learn, and evaluate the impact of research. When used in this way, it is better able to address issues of timing between research production and impact.

The framework sidesteps many of the timing issues inherent in impact assessment discussed earlier (Bell et al. 2011). It can be adapted to many timescales and revisited on an annual or longer basis to capture new impact as it happens. Setting out expected impacts at the start of a project, and reviewing and adapting these as evidence is assembled, provides the best chance of being able to make a reasonable impact claim over whatever timescales are relevant to the specific programme or interest area. The framework can also be used to assess previous impact; indeed, in the impact case study reported here, impacts were traced over 5- and 9-year time periods. Whilst there was some loss of data because some participants in engagement activities could not recall them, there were sufficient numbers of research users with detailed accounts of the ways research had been used to create a clear picture of impact over both of these timescales.

The RCF has been a useful tool in assessing the contribution of the research from the CRFR/CLS partnership. The approach has used the basic ideas from contribution analysis, separating out the ideas of uptake, use, and impact, as well as demonstrating pathways from activities to outcomes. Contribution analysis, as adapted here, helps to give shape and form to logic models, with useful categories for thinking about routes to impact (engagement, reaction, capacity, policy and practice changes, and eventual impacts).

5.1 Limitations

There are, of course, limitations to the approach presented. It is based on a logic modelling approach and so presupposes an understanding of, and willingness to engage with, such an approach. Asking researchers or KE professionals to articulate a pathway to impact in order to create a logic model can be overwhelming, and there are questions about where to draw the boundaries around what should be included or excluded. How the relevant context is defined and analysed remains a challenge. The approach can be used prospectively or retrospectively, although there is a danger of retrospectively claiming impact or over-claiming outcomes, especially if alternative explanations are not adequately explored.

This is a small-scale study looking at specific research use in the Scottish context. The focus was on understanding the processes leading to impact. It does not claim to capture all of the impacts from the research, nor to offer generalizable findings. It used a purposive sample to understand how impact occurs and to start building empirically based work on the processes leading to research impact.

As with other impact studies (Boaz et al. 2009; Molas-Gallart and Tang 2011), timing was a key challenge for this study: enough time needs to have passed for impact to have occurred, balanced against respondents’ abilities to recall their interaction with the research in question. The projects being investigated went back to 2005, with publication of research findings in 2005 and 2007, and impacts continuing up to the data collection period (2009–11). This timing issue led to unsuccessful attempts to interview policymakers involved in The Scottish Parliament’s sexual health strategy group, where Project Two: ‘sexual health’ had been presented. Those contacted did not initially respond to emails, and it became clear that they could not recall much about the research and so did not want to be interviewed. In other interviews, the ability to recall was also sometimes obviously hampered by the time lapse. However, many respondents recalled very specific events and uses of research with clarity: their interaction with the findings provided a rich enough seam of data to draw conclusions.

6. Conclusions

The framework presented in this article has been built from the literature on evaluation and research utilization and from an empirical study of how research impact occurs. It adapts a contribution analysis approach, linking it with core ideas about what research impact is, and aims to address some of the key challenges in assessing impact, in particular the complex nature of research impact processes, the need to capture both processes and outcomes, and the challenges of timing, attribution, and additionality. It also addresses the need to incorporate models and theories of the research impact process into assessment approaches.

The RCF is particularly useful for developing more thoughtful KE activities that carefully consider the needs of research users and the potential for wider outcomes. It can be used as the basis for partnership and advisory group discussions, helping to explore potential impact and providing a framework for assessing actual impact over a variety of time frames.

It offers a learning/reflecting cycle that supports the development of strong, effective practices to increase the uptake of research, and can help research producers think more deeply about what impact processes might look like. Research use is a complex process, and careful systems approaches are needed rather than simple handoff procedures from research to research users that fail to acknowledge this complexity. The RCF offers promising potential to do this and to help develop impact practice to ensure it is the best it can be.

References

Adam, P., et al. (2012) ‘Assessment of the Impact of a Clinical and Health Services Research Call in Catalonia’, Research Evaluation, 21/4: 319–28.

Bannister, J. and O’Sullivan, A. (2013) ‘Knowledge Mobilisation and the Civic Academy: The Nature of Evidence, the Roles of Narrative and the Potential of Contribution Analysis’, Contemporary Social Science, 8/3: 249–62.

Bell, S., et al. (2011) ‘Real-world Approaches to Assessing the Impact of Environmental Research on Policy’, Research Evaluation, 20/3: 227–37.

Bennett, J. W., et al. (2012) ‘Integration of Environmental Impacts into Ex-post Assessments of International Agricultural Research: Conceptual Issues, Applications, and the Way Forward’, Research Evaluation, 21/3: 216–28.

Best, A. and Holmes, B. (2010) ‘Systems Thinking, Knowledge and Action: Towards Better Models and Methods’, Evidence and Policy: A Journal of Research, Debate and Practice, 6: 145–59.

Biggs, J. S., et al. (2014) ‘A Practical Example of Contribution Analysis to a Public Health Intervention’, Evaluation, 20/2: 214–29.

Boaz, A., et al. (2009) ‘Assessing the Impact of Research on Policy: A Literature Review’, Science and Public Policy, 36/4: 255–70.

Bryman, A. (2004) Social Research Methods. Oxford; New York: Oxford University Press.

Buxton, M. (2011) ‘The Payback of “Payback”: Challenges in Assessing Research Impact’, Research Evaluation, 20/3: 259–60.

Court, J. and Young, J. (2004) ‘Bridging Research and Policy in International Development: An Analytical and Practical Framework’, Research and Policy in Development Programme Briefing Paper. London: Overseas Development Institute.

Crew, E. and Young, J. (2002) Bridging Research and Policy: Context, Evidence, Links. London: Overseas Development Institute.

Delahais, T. and Toulemonde, J. (2012) ‘Applying Contribution Analysis: Lessons from Five Years of Practice’, Evaluation, 18/3: 281–93.

Donovan, C. (2008) ‘The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research’, New Directions for Evaluation, 118: 47–60.

Donovan, C. (2011) ‘State of the Art in Assessing Research Impact: Introduction to a Special Issue’, Research Evaluation, 20/3: 175–9.

Douthwaite, B., et al. (2003) ‘Impact Pathway Evaluation: An Approach for Achieving and Attributing Impact in Complex Systems’, Agricultural Systems, 78/2: 243–65.

Eisenhardt, K. M. (2002) ‘Building Theories from Case Study Research’, in Huberman, A. M. and Miles, M. B. (eds), The Qualitative Researcher's Companion. London: Sage.

Forss, K., et al. (2011) Evaluating the Complex: Attribution, Contribution, and Beyond. New Brunswick: Transaction Publishers.

Gabbay, J. and le May, A. (2004) ‘Evidence Based Guidelines or Collectively Constructed “Mindlines”? Ethnographic Study of Knowledge Management in Primary Care’, BMJ, 329: 1013.

Graham, K. E. R., et al. (2012) ‘Evaluating Health Research Impact: Development and Implementation of the Alberta Innovates – Health Solutions Impact Framework’, Research Evaluation, 21/5: 354–67.

Grant, J., et al. (2000) ‘Evaluating “Payback” on Biomedical Research from Papers Cited in Clinical Guidelines: Applied Bibliometric Study’, BMJ, 320: 1107–11.

Hanney, S., et al. (2004) ‘Proposed Methods for Reviewing the Outcomes of Health Research: The Impact of Funding by the UK's “Arthritis Research Campaign”’, Health Research Policy and Systems, 2/1: 4.

Hanney, S., et al. (2003) ‘The Utilisation of Health Research in Policy-making: Concepts, Examples and Methods of Assessment’, Health Research Policy and Systems, 1/1: 2.

Hawe, P., et al. (2009) ‘Knowledge Theories can Inform Evaluation Practice: What can a Complexity Lens Add?’, New Directions for Evaluation, 2009/124: 89–100.

Hicks, D. M. (2004) ‘Bibliometric Evaluation of Federally Funded Research in the United States’, Research Evaluation, 13/2: 76–86.

Higher Education Funding Council (2011) REF Assessment Framework and Guidance on Submissions. Bristol: HEFC.

Jones, L. R., et al. (2004) Strategies for Public Management Reform. Amsterdam; London: Elsevier.

Jung, T. and Nutley, S. M. (2008) ‘Evidence and Policy Networks’, Evidence and Policy, 4/2: 187–207.

Knight, P. (2002) Small Scale Research. London: Sage.

Kuruvilla, S., et al. (2006) ‘Describing the Impact of Health Research: A Research Impact Framework’, BMC Health Services Research, 6: 134.

Lavis, J., et al. (2003) ‘Measuring the Impact of Health Research’, Journal of Health Services Research and Policy, 8/3: 166–70.

Lemire, S. T., et al. (2012) ‘Making Contribution Analysis Work: A Practical Framework for Handling Influencing Factors and Alternative Explanations’, Evaluation, 18/3: 294–309.

LERU (League of European Research Universities) (2013) Research Universities and Research Assessment. Position Papers. Leuven: LERU.

Mayne, J. (2001) ‘Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly’, Canadian Journal of Program Evaluation, 16/1: 1–24.

Mayne, J. (2008) ‘Contribution Analysis: An Approach to Exploring Cause and Effect’, ILAC Brief. Institutional Learning and Change (ILAC) Initiative.

Mayne, J. (2011) ‘Contribution Analysis: Addressing Cause and Effect’, in Forss, K., Marra, M. and Schwartz, R. (eds), Evaluating the Complex. New Brunswick: Transaction Publishers.

Meagher, L., et al. (2008) ‘Flows of Knowledge, Expertise and Influence: A Method for Assessing Policy and Practice Impacts from Social Science Research’, Research Evaluation, 17/3: 163–73.

Mitton, C., et al. (2007) ‘Knowledge Transfer and Exchange: Review and Synthesis of the Literature’, Milbank Quarterly, 85/4: 729–68.

Molas-Gallart, J. and Tang, P. (2011) ‘Tracing “Productive Interactions” to Identify Social Impacts: An Example from the Social Sciences’, Research Evaluation, 20/3: 219–26.

Molas-Gallart, J., et al. (2000) ‘Assessing the Non-academic Impact of Grant-funded Socio-economic Research: Results from a Pilot Study’, Research Evaluation, 9/3: 171–82.

Montague, S. (2009) ‘Structured Contribution Analysis’, Presentation to Evaluation Summer School, May. Stirling: NHS Health Scotland.

Montague, S. (2011) ‘Practical (Progress) Measurement and (Impact) Evaluation for Initiatives in Complex Environments’. Performance Management Network.

Morton, S. (2012) ‘Exploring and Assessing Research Impact’, Social Policy. Edinburgh: University of Edinburgh.

Morton, S. (2015) ‘Creating Research Impact: The Roles of Research Users in Interactive Research Mobilisation’, Evidence and Policy: A Journal of Research, Debate and Practice, 11/1: 35–55.

Morton, S. and Flemming, J. (2013) Assessing Research Impact: A Case Study of Participatory Research, Research Briefing 66. Edinburgh: Centre for Research on Families and Relationships.

Nason, E., et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council: A Case Study of the Future of Work Programme, Supporting Data. Santa Monica, CA: RAND Corporation.

Nutley, S., et al. (2007) Using Evidence: How Research can Inform Public Services. Bristol: Policy Press.

Patton, M. Q. (2011) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.

Penfield, T., et al. (2013) ‘Assessment, Evaluations, and Definitions of Research Impact: A Review’, Research Evaluation.

Phipps, D. (2012) ‘A Report Detailing the Development of a University-Based Knowledge Mobilization Unit that Enhances Research Outreach and Engagement’, Scholarly and Research Communication, 2/2.

Rein, M. (1976) Social Science and Public Policy. Harmondsworth: Penguin Education.

Research Councils UK (2011) ‘Pathways to Impact’.

Richards, L. (2005) Handling Qualitative Data: A Practical Guide. London; Thousand Oaks, CA: SAGE Publications.

Rogers, P. J. (2008) ‘Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions’, Evaluation, 14/1: 29–48.

Silverman, D. (2001) Interpreting Qualitative Data: Methods for Analyzing Talk, Text and Interaction. London: Sage.

Smith, K. E. (2007) ‘Health Inequalities in Scotland and England: The Contrasting Journeys of Ideas From Research Into Policy’, Social Science and Medicine, 64/7: 1438–49.

Spaapen, J. and van Drooge, L. (2011) ‘Introducing “Productive Interactions” in Social Impact Assessment’, Research Evaluation, 20/3: 211–18.

Van De Ven, A. H. and Johnson, P. E. (2006) ‘Knowledge For Theory and Practice’, Academy of Management Review, 31/4: 802–21.

van Eerd, D., et al. (2011) A Systematic Review of the Quality and Types of Instruments Used to Assess KTE Implementation and Impact. Toronto: Institute for Work and Health.

Ward, V., et al. (2009) ‘Developing a Framework for Transferring Knowledge Into Action: A Thematic Analysis of the Literature’, Journal of Health Services Research and Policy, 14/3: 156–64.

Weiss, C. H. (1995) ‘The Haphazard Connection: Social Science and Public Policy’, International Journal of Educational Research, 23/2: 137–50.

Wimbush, E. and Beeston, C. (2010) ‘Contribution Analysis: What is it and What Does it Offer Impact Evaluation?’, The Evaluator, Spring: 19–24.