Assessing improvement capability in healthcare organisations: a qualitative study of healthcare regulatory agencies in the UK

Abstract

Objectives Healthcare regulatory agencies are increasingly concerned not just with assessing the current performance of the organisations they regulate, but with assessing their improvement capability in order to predict their future performance trajectory. This study examines how improvement capability is conceptualised and assessed by UK healthcare regulatory agencies.

Design Qualitative analysis of data from six UK healthcare regulatory agencies. Three data sources were analysed using an a priori framework of eight dimensions of improvement capability identified from an extensive literature review.

Setting The regulation of hospital-based care, which accounts for the majority of UK healthcare expenditure. Six UK regulatory agencies that review hospital care participated.

Participants Data sources included interviews with regulatory staff (n = 48), policy documents (n = 90) and assessment reports (n = 30).

Intervention None; this was a qualitative, observational study.

Results Of the eight dimensions of improvement capability, two dominate regulatory assessment practices: process improvement and learning, and strategy and governance. The dimension of service-user focus is used least frequently. Dimensions that are relatively easy to 'measure', such as documents evidencing strategy and governance, may dominate assessment processes; alternatively, there may be gaps in regulatory agencies' assessment instruments, deficits of expertise in improvement capability, or practical difficulties in operationalising agencies' intentions to assess improvement capability reliably.

Conclusions The UK regulatory agencies seek to assess improvement capability to predict performance trajectories, but two of the eight dimensions of improvement capability dominate assessment, and the definition and meaning of assessment instruments require development. Such development would strengthen the validity and reliability of agencies' assessment, diagnosis and prediction of performance trajectories, and support the development of more appropriate regulatory performance interventions.


Introduction
Unexplained variations in healthcare performance continue to be a significant focus of public and political attention [1,2]. In response to such variations, widespread concerns about patient safety [3,4], and high-profile instances of failures in healthcare [5,6], many governments have introduced or strengthened systems for formal oversight, accountability and regulation in healthcare [7,8]. However, regulatory agencies themselves have often faced criticisms that their regulatory methods or regimes are not able to assess performance or quality accurately, or to diagnose and intervene to improve performance and quality effectively [9,10]. The costs and benefits of regulation have also been questioned [11].
In response to such criticisms, some regulatory agencies have sought to move beyond directly assuring organisational performance or quality of care through mechanisms such as inspection and assessment, and are implementing programmes to strengthen the underlying organisational characteristics that enable organisations to develop and sustain their own improvement programmes. In the wider academic literature, these characteristics are termed 'improvement capability', defined as the 'organisational ability to intentionally and systematically use improvement approaches, methods and practices, to change processes and products/services to generate improved performance' [12, p3]. This builds on a dynamic capabilities view, which suggests that organisational performance is driven through bundles of routines, described as distinctive capabilities, that are used to purposefully create and modify resources and routines contingent on local circumstances [13,14]. However, the tacit nature of capabilities creates significant barriers to imitation, substitution or assessment [15]. A comprehensive literature review identified eight dimensions of improvement capability to support its assessment and development (Table 1).
For regulatory agencies, assessing improvement capability may be important for two reasons. First, it may provide greater assurance about current performance; regulatory agencies can only undertake limited direct assessments of the quality of care, but may take some assurance that organisations with relatively higher improvement capability can monitor and assure quality for themselves. Second, improvement capability may have some value in predicting future performance, especially if problems with the quality of care are found. Organisations with extensive improvement capability may be more able to deal with such problems and bring about improvement for themselves, while those with limited improvement capability may need external support and intervention. More fundamentally, by focusing on assessing improvement capability, regulatory and improvement agencies are likely to encourage healthcare organisations themselves to pay greater attention to how they build and sustain improvement capability.
There are six regulatory agencies overseeing the healthcare system across the four countries of the UK (since this research was completed, Monitor and the TDA have been brought together under the operational name NHS Improvement), several of which incorporate some assessment of improvement capability into their inspection, assessment or oversight regimes. However, there is little published literature examining how regulatory and improvement agencies can assess improvement capability within organisations. This research study examines how improvement capability is conceptualised and assessed in practice by healthcare regulatory agencies in the UK.

Methods
The focus of the research study was the regulation of hospital-based care, which accounts for the majority of UK healthcare expenditure. Thus, the six UK healthcare regulatory agencies with responsibility for the oversight of hospital care were selected for the research study, and all agreed to participate. These are the Care Quality Commission (CQC), Monitor and the Trust Development Authority (TDA) in England; Healthcare Improvement Scotland (HIS); Healthcare Inspectorate Wales (HIW); and the Regulatory and Quality Improvement Authority (RQIA) in Northern Ireland.
Qualitative methods offer a flexible and well-established approach to data gathering. Three data sources from the agencies were used: policy documents, interviews and assessment reports (Table 2).
To understand how regulatory agencies define and conceptualise improvement capability, and their expressed intentions, published policy documents were identified, including agency strategies, operational plans and annual reports (n = 90). Following ethical approval, directors of policy, strategy or regulation within the agencies were invited to take part in the research study, and they aided the identification of suitable interview participants. Seven to nine interviews were held per agency (n = 48), representing a cross-section of clinical and non-clinical employees, including board-level roles, back-office support and inspectors. Interviews were conducted face-to-face or by telephone between October 2014 and April 2015; participation was voluntary and confidential. A semi-structured interview framework was used to examine agency purpose, intent, roles, methods, and participants' understanding and assessment of improvement capability. The questions were tested through pilot interviews.
Table 1 Dimensions of improvement capability [12]

Organisational culture: the core values, attitudes and norms, and underlying ideologies and assumptions, within an organisation
Data and performance: the use of data and analysis methods to support improvement activity
Employee commitment: the level of commitment and motivation of employees for improvement
Leadership commitment: the support by formal organisational leaders for improvement and performance
Service-user focus: the identification and meeting of current and emergent needs and expectations of service users
Process improvement and learning: systematic methods and processes used within an organisation to make improvements through ongoing experimentation and reflection
Stakeholder and supplier focus: the extent of the relationships, integration and goal alignment between the organisation and stakeholders such as public interest groups, suppliers and regulatory agencies
Strategy and governance: the process by which organisational aims are implemented and managed through policies, plans and objectives

Interviews were recorded, anonymised and transcribed, and verbatim transcriptions were shared with participants to clarify any inaccuracies in the recordings. The few amendments requested were largely limited to grammatical improvements and clarifications of recording problems. Finally, five assessment reports per agency were selected (n = 30). The selection criteria required that reports were publicly available, represented a range of organisational performance, or were referred to by interview participants as specific examples related to improvement capability. Two agencies in the sample do not routinely publish the results of their assessment processes; assessment reports for these agencies were instead identified through a detailed review of board reports from the agencies and regulated organisations.
This may have influenced the extent to which the sample collected was representative of the range of organisational performance assessed.
Policy documents, interview transcripts and assessment report texts were loaded into qualitative analysis software (NVivo 10), and the eight dimensions of improvement capability (Table 1) were used as an a priori coding template to support content and thematic analysis of the three data sources [16,17]. Combining sources allowed agencies' expressed intent to be compared with their practice. Coding consistency was reviewed using NVivo 10 functionality to compare coding density and frequency across data sources [18].
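The frequency comparison underlying this analysis can be illustrated in outline. The study itself used NVivo 10, not custom code; the following Python sketch is a hypothetical illustration of the kind of per-source frequency tabulation involved. The dimension names follow Table 1, but the coded segments are invented for demonstration.

```python
from collections import Counter

# The eight a priori coding dimensions from Table 1
DIMENSIONS = [
    "organisational culture", "data and performance", "employee commitment",
    "leadership commitment", "service-user focus",
    "process improvement and learning", "stakeholder and supplier focus",
    "strategy and governance",
]

def coding_frequencies(coded_segments):
    """Given (source, dimension) pairs produced by applying the coding
    template, return each dimension's relative coding frequency per source."""
    by_source = {}
    for source, dimension in coded_segments:
        by_source.setdefault(source, Counter())[dimension] += 1
    result = {}
    for source, counts in by_source.items():
        total = sum(counts.values())
        # Dimensions never coded in a source appear with frequency 0.0
        result[source] = {d: counts[d] / total for d in DIMENSIONS}
    return result

# Invented example segments (not the study's data)
segments = [
    ("policy", "strategy and governance"),
    ("policy", "process improvement and learning"),
    ("policy", "strategy and governance"),
    ("interview", "organisational culture"),
    ("interview", "strategy and governance"),
]
freqs = coding_frequencies(segments)
```

Comparing such per-source relative frequencies side by side is what allows a skew towards particular dimensions, and a gap between expressed intent (policy documents) and practice (assessment reports), to be seen.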

Results
This section begins by describing the UK agencies' aims for improvement capability. It then discusses the content analysis of the data sources using the identified improvement capability dimensions, comparing agencies where appropriate. Following this, analysis themes are discussed.

Aims of regulatory and improvement agencies
Analysis of agency policy documents identifies that agencies express intentions to strengthen improvement capability, with some agencies more explicit in their aims than others (Table 3). HIS and Monitor have specific strategic aims linked to developing improvement and associated capability in the National Health Service (NHS) in Scotland and England, respectively. Other agencies were less explicit: in Northern Ireland and Wales, the aim to build capability is expressed at governmental level rather than as a specific aim within RQIA and HIW themselves. Figure 1 presents the content analysis of the data sources, comparing the coding frequency of the improvement capability dimensions across agency policy documents, interviews and assessment reports.

Content analysis
Overall, the analysis revealed that the dimensions of process improvement and learning, and strategy and governance, were most frequently found. Other dimensions were found less frequently, with service-user focus the least frequent, and this skewed pattern was relatively consistent across agencies and data sources. Table 4 shows representative quotations, selected to illustrate how the agencies conceptualise the dimensions across the three data sources. Table 4 highlights how each dimension is used by the different regulatory agencies, adding depth to the initial content analysis in Fig. 1. This also enabled examination of coding consistency across the data sample and of why dimensions were coded with different frequencies. For example, 'leadership' is a named assessment criterion for three of the regulatory agencies (Monitor, TDA and CQC); however, the content analysis indicated a low frequency of coding to the leadership commitment dimension across the data sources compared with other dimensions. The coding was reviewed for consistency, indicating that whilst assessment reports used leadership commitment as a high-level criterion, specific leadership activities, such as developing plans or communicating widely, fall within other dimensions in this analysis and were coded as such. Perhaps this is to be expected, as leadership commitment can cover many different aspects, can be used as a 'catch-all', and does not exist in a vacuum.
Three themes emerge from the analysis of the assessment of improvement capability: conceptualisation, assessment data and assessment practices.

Conceptualisations
The first theme identified is that it was problematic to define, conceptualise and operationalise improvement capability. For example, policy documents and interviews stressed the importance of developing improvement capability but faced definitional difficulties when articulating precisely, and consistently, what was meant by it. Furthermore, it was evident that the term was used inconsistently, that boundaries between dimensions were blurred, and that improvement capability is a nebulous, ambiguous and subjective concept. For example, whilst interview participants were keen to stress the importance of organisational culture, they acknowledged that it was a difficult concept to grasp and assess, and for organisations to influence (see Table 4).

Assessment data
The second theme highlights the challenges resulting from regulatory access to data, and identifies that existing, readily available data are used as proxies in the absence of more appropriate data sources. For example, Table 4 highlights three data sources from Wales within the dimension of employee commitment, demonstrating the differences between assessment intentions and the data used during assessments. These examples, together with the other quotations in this dimension, also show how annually collected NHS staff survey data, or locally produced staff turnover and vacancy data, are used as indicators, even though policy intentions stress employee contribution, ownership and engagement rather than staffing numbers and proxy measures for employee commitment, such as vacancy rates and resilience.

Discussion
This research set out to explore how improvement capability is conceptualised and assessed by the UK healthcare regulatory agencies. The research study found that agencies aim to assess and develop improvement capability, but that two dimensions of improvement capability from a framework of eight dominate assessment: process improvement and learning, and strategy and governance. Other dimensions identified from the literature, such as employee commitment or organisational culture, are used less frequently during assessment, with some variation between agencies. Finally, in contrast to agency strategic messages to place the patient at the centre of their work, this research identifies that the area of lowest content frequency within policy documents and interviews was service-user focus, with only a small increase in frequency of use in assessment documents. Three themes emerge from this analysis of the assessment of improvement capability: conceptualisation, assessment data and assessment practices. A limited conceptualisation of improvement capability is operationalised by agencies when compared with the literature [12]. In line with other healthcare studies, for example Brennan et al. [25], this research study finds that the assessment data used by agencies need further development to ensure that the evidence collected does measure dimensions of improvement capability, which would strengthen the validity of the assessments. Furthermore, there are concerns about measurement consistency, validity and reliability [9,10], and about the dependence on the value judgements of inspectors and surveyors [26]. Finally, assessors need further skills, knowledge and guidance to assess across the broad range of improvement capability dimensions, in order to strengthen assessment practice and reliability.
These findings suggest that current assessments focus on dimensions that are easier to measure with tangible evidence, such as the existence of a strategic plan, in contrast to dimensions that are more ambiguous and difficult to assess. A number of existing validated models could be used to strengthen assessments in these dimensions; for organisational culture, for example, several such models exist [27,28], which could strengthen assessment effectiveness and the resulting regulatory judgements. These findings also suggest a regulatory intent that is still emerging, that has been more difficult to implement in practice than anticipated, or that agency policies are not being implemented.
Table 4 (extract) Representative quotations by improvement capability dimension

Organisational culture:
'The culture of the team working within the department was one of cohesiveness, with staff displaying a very high level of professionalism and enthusiasm for the work they did.' (Report B, CQC)
'The planning guidance also covered a range of other areas in relation to building a safety culture.' (Board Papers, Trust Development Authority, March 2015)
'We perhaps talk in random terms about the culture of the organisation but we really don't get to grips with the culture of the organisation … I suppose there are some sources of information that we would look at but whether you really understand the culture of the organisation from that.' (Interview participant C, Monitor)
'The vast majority of staff we spoke with said that they were unable to understand how decisions were made and were also unable to consistently describe to us the lines of accountability. There was a strong and consistent reference to a dysfunctional management structure and a "reactive culture".' (Report A, HIS)

Employee commitment:
'They want to see how change will bring value and benefits to the people they care for. They also need to see how they can contribute to the changes, how their voice will be heard and, importantly, how they will be enabled to work differently in a way they know will bring about better, more quality-focused services to their patients and clients.' (Together for Health. A Five-Year Vision for the NHS in Wales, Welsh Assembly Government, 2011)
'I would want to see that there's a good connection between the board and the operational side of the organisation, that there's strong clinical engagement, that they're starting to just take ownership of some of the issues and get a grip of some of them.' (HIW, Interview participant H)
'We found that staff were committed to delivering good quality care and they were kind and caring, in many cases, we found issues with staff numbers, vacancies, resilience and skill mix.' (HIW, Report B)

Leadership commitment:
'There has been a lack of leadership within the organisation which has resulted in the failure to unite staff behind a common purpose.' (Report A, HIS)

The themes provide suggestions for developing agencies' assessments of improvement capability within organisations. Regulatory agencies may use their assessments to determine and design their enforcement approach with organisations. Without a broader conceptualisation of improvement capability, enforcement approaches may not be designed to meet organisational needs adequately and may be less effective. For example, agencies may inadequately or inaccurately assess an organisation's capability to improve, instead indicating that external support is required, leading to the poor use of resources by both agencies and organisations and negatively affecting morale.
Finally, inaccurate assessments may undermine local populations' and stakeholders' confidence in an organisation. Building on these suggestions would assist agencies in meeting their aims of developing improvement capability through more effective assessment, enabling them to ensure that organisations focus holistically across all dimensions. Furthermore, a broader conceptualisation would support increased attention to patient care across care pathways and between organisations, supporting the development of service integration through the stakeholder and supplier focus and service-user focus dimensions; it would also strengthen the reliability and validity of regulatory assessments.

Further research is required to support assessment and the subsequent tailoring of improvement support to organisations, based on an understanding of their existing improvement capability, and to strengthen understanding of how improvement capability emerges or dissipates within organisations. Nevertheless, it is important to acknowledge the limitations of this research study, which relies largely on cross-sectional data from a regulatory agency perspective. Perspectives from assessed organisations would provide richer data and help to build further understanding of improvement capability.

Conclusions
This research study set out to consider how regulatory and improvement agencies assess improvement capability. Its analysis of policy documents, interviews and assessment reports shows that whilst all these agencies aim to assess and develop improvement capability within healthcare organisations, two of the eight dimensions of improvement capability dominate assessment. This may reflect the difficulty of operationalising the dimensions that comprise improvement capability, arising from measurement, knowledge and practice gaps.
Empirically, this research study has addressed a gap in knowledge regarding the assessment of improvement capability, and the results provide a starting point for identifying which factors could be considered in such assessment. Better understanding and assessment of improvement capability will allow more tailored development approaches by regulatory and improvement agencies. This research study has highlighted the need for regulatory agencies to further conceptualise improvement capability in order to inform their assessment and subsequent development work. This will strengthen agencies' assessment, diagnosis and prediction of organisational performance trajectories, and support the development of more appropriate and effective regulatory interventions.