Development and validation of an index to assess hospital quality management systems

Objective The aim of this study was to develop and validate an index to assess the implementation of quality management systems (QMSs) in European countries. Design Questionnaire development was facilitated through expert opinion, literature review and earlier empirical research. A cross-sectional online survey utilizing the questionnaire was undertaken between May 2011 and February 2012. We used psychometric methods to explore the factor structure, reliability and validity of the instrument. Setting and participants As part of the Deepening our Understanding of Quality improvement in Europe (DUQuE) project, we invited a random sample of 188 hospitals in 7 countries. The quality managers of these hospitals were the main respondents. Main Outcome Measure The extent of implementation of QMSs. Results Factor analysis yielded nine scales, which were combined to build the Quality Management Systems Index. Cronbach's reliability coefficients were satisfactory (ranging from 0.72 to 0.82) for eight scales and low for one scale (0.48). Corrected item-total correlations provided adequate evidence of factor homogeneity. Inter-scale correlations showed that every factor was related, but also distinct, and added to the index. Construct validity testing showed that the index was related to recent measures of quality. Participating hospitals attained a mean value of 19.7 (standard deviation of 4.7) on the index that theoretically ranged from 0 to 27. Conclusion Assessing QMSs across Europe has the potential to help policy-makers and other stakeholders to compare hospitals and focus on the most important areas for improvement.


Introduction
In a recent review of instruments assessing the implementation of quality management systems (QMSs) in hospitals, the authors conclude that hospital managers and purchasers would benefit from a measure to assess the implementation of QMSs in Europe. The results of the review show that there is currently no well-established measure that has also been used to assess the link between quality management at hospital level, quality management activities at departmental level and patient outcomes [1]. In the context of the European cross-border directive and the Council Recommendation on patient safety, it is all the more important to measure and compare the implementation of QMSs across countries to gain insight into existing prerequisites for safe patient care and possible gaps in quality management within or between countries. In this article, a QMS is defined as 'a set of interacting activities, methods and procedures used to monitor, control and improve the quality of care' [2].
In the recent review, 18 studies assessing the implementation of QMSs were described. Only nine of these studies reported methodological criteria in sufficient detail and were rated as good [1]. Only two of them have been used in several European countries: the European Research Network on Quality Management in Health Care (ENQuaL) questionnaire for the evaluation of quality management in hospitals [3,4], and the Methods of Assessing Response to Quality Improvement Strategies (MARQuIS) questionnaire and classification model for quality improvement systems [5]. Despite their good evaluation, both instruments have important limitations. The ENQuaL questionnaire was developed in 1995 and does not cover more recent quality management topics, such as 'use of indicator data' and 'learning from adverse events' [2]. The MARQuIS questionnaire is very long (113 items) and focuses mainly on leadership (36 items), policy, planning and documents (20 items), quality strategies (laboratory) (20 items) and structure (19 items), but less on the evaluation of care processes through indicator data. The latter is an important step in the quality improvement cycle [5].
The objective of this study was to develop and validate an up-to-date and more concise survey instrument to assess hospital QMSs in European countries, and to compute an index, the Quality Management Systems Index (QMSI), representing their developmental stage. Specifically, we report on the structure, reliability, validity and descriptive statistics of the QMSI and its scales.

Conceptual considerations
A broad range of activities can be used by an organization to maintain and improve the quality of care it delivers. These activities may change over time because of new evidence, changing expectations of the public or new (national) regulations regarding accountability. When developing a multi-item measurement instrument, we need to know the underlying relationship between the items (quality activities) and the construct to be measured, i.e. the QMS. Like the earlier ENQuaL and MARQuIS questionnaires, the new instrument was partly based on the nine enabler and result themes of the existing theoretical framework of the European Foundation for Quality Management (EFQM) model [6], but some of the questionnaire items had to be changed to reflect recent developments in quality management practice.

Development of the instrument
The questionnaire was a web-based multi-item and multidimensional instrument to assess the development of QMSs in hospitals. The aim of the questionnaire was to focus on the managerial aspects of quality management, such as policy documents, formal protocols, analyzing performance and evaluating results, and not on leadership, professional and patient involvement or organizational culture, as these are different theoretical concepts within the Deepening our Understanding of Quality improvement in Europe (DUQuE) framework (Fig. 1) that are assumed to influence the implementation of a QMS. The literature review revealed that earlier studies have distinguished six domains of quality management: procedures and process management, human resource management, leadership commitment, analysis and monitoring, structures and responsibilities, and patient involvement. Most of the instruments do not cover all the domains, and those that do contain a large number of items (up to 179) [1].

Figure 1 Conceptual model of DUQuE.
Several steps were taken to develop a more concise instrument that still covers the most important domains of QMSs presented in the literature.
To select items from the earlier questionnaires (ENQuaL and MARQuIS) [2,5] or develop new items for the DUQuE questionnaire, we first used the expert opinion of other DUQuE project members (n = 10), who considered which activities were most relevant and most likely to influence patient-related outcomes. These experts have long experience in healthcare, especially in quality management, in the various countries. For a concise instrument, only the most relevant activities matter. Second, items related to the more managerial focal areas of the theoretical EFQM framework were selected (policy documents, human resources, processes and feedback of results such as patient and professional experience, and comparison of clinical and societal performance). The wording of the items and the answer categories was compared with accreditation manuals and the review of existing instruments. In the end, most questionnaire items for the new instrument came from the ENQuaL and MARQuIS questionnaires, but because of our focus on the managerial aspects of quality management, not all focal areas of these instruments were selected. Finally, the answer categories of all items were standardized, focusing on the extent of implementation, with four answer categories.
The questionnaire was first developed in English and was translated into seven languages using a forward-backward translation process for validation. Respondents could rate each item on a four-point Likert-type scale, with answer categories ranging from 'Not available' to 'Fully implemented' and from 'Disagree' to 'Agree'.
The content validity of the final questionnaire used in the DUQuE project was judged complete and approved by the 10 experts from different quality research areas involved in the project, who were not involved in the development of the QMSI. The questionnaire consisted of 56 items divided over five dimensions: quality policy (10 items), quality resources (9 items), performance management (7 items), evidence-based medicine (13 items) and internal quality methods (17 items).

Setting and participants
The study took place within the context of the DUQuE project, which ran from 2009 to 2013 [7,8] in 7 European countries: France, Poland, Turkey, Portugal, Spain, Germany and the Czech Republic. These countries represent the diversity of Europe (e.g. countries from the East/West and North/South, regional/national healthcare systems, and systems in transition/longer established systems). In each country, 30 hospitals were randomly recruited if they had >130 beds, were treating patients with acute myocardial infarction, hip fracture and stroke, and handled child deliveries. The conditions were chosen for their high financial volume, the high prevalence of the condition and percentage of measurable complications, and the different types of patients and specialists they cover [7,8]. Of the 210 hospitals approached, 188 were able to participate (89.5% response rate). The DUQuE QMSI questionnaire was administered online to the quality managers of the 188 participating hospitals (response N = 183; 97%). The quality manager of a hospital was defined as the person responsible for the coordination of quality improvement activities, who should have a good overview of all quality improvement activities (questionnaire instruction). The quality manager was allowed to consult other people in the hospital when not sure about the right answer, but only one questionnaire per hospital was expected to be filled in. The instruction also stated that it was not necessary for a hospital to have all activities mentioned in the questionnaire and that hospitals were expected to be in different phases of implementation for different activities.
Ethical approval was obtained by the project coordinator from the Bioethics Committee of the Health Department of the Government of Catalonia (Spain).
Data collection. Respondents who participated in the DUQuE project were invited by letter and personally by the country coordinator. Questionnaires were completed anonymously and entered directly into the online data platform. The data were collected between May 2011 and February 2012. All participants were sent passwords to access the web-based questionnaire and were sent reminders.
Statistical analyses. We began by describing the hospitals and quality managers that provided responses to the main questionnaire used to develop the index. Next, we used psychometric methods to investigate the structure, reliability and validity of the QMSI instrument. We assumed that our ordinal data approximated interval data and conducted exploratory and confirmatory factor analyses, reliability coefficient, corrected item-total correlation and inter-scale correlation analyses [9-11]. These were done separately for each of the theoretical themes. We explored the factor structure of the questionnaire using split-file principal component analysis with oblique rotation and an extraction criterion of eigenvalues >1, while requiring three or more item loadings per factor. Items were grouped under the factor or scale where they displayed the highest factor loading, and only items with loadings of at least 0.3 were assigned to a factor [10]. Confirmatory factor analysis was then used on the second half of the sample to determine whether the data supported the final factor structure; a root mean square residual of <0.05 and a non-normed fit index of >0.9 indicated good fit of the scale structure to the data. We then performed reliability analysis using Cronbach's alpha, where a value of 0.70 or greater indicated acceptable internal consistency reliability of each scale [12,13]. We also examined the homogeneity of each scale using item-total correlations corrected for item overlap; coefficients of 0.4 or greater provided adequate evidence of scale homogeneity. Finally, we assessed the degree of redundancy between scales by estimating inter-scale correlations using Pearson's correlation coefficients, where a coefficient of <0.7 indicated non-redundancy [11,14].
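To make the reliability criteria concrete, the sketch below (in Python) implements Cronbach's alpha and the corrected item-total correlation as defined above. The simulated four-point Likert data and their generating parameters are illustrative assumptions, not the DUQuE responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items,
    i.e. the item-total correlation corrected for item overlap."""
    items = np.asarray(items, dtype=float)
    return np.array([
        np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])

# Simulated 1-4 responses for one hypothetical four-item scale:
# a common factor per respondent plus item-specific noise.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))
noise = rng.normal(scale=0.8, size=(200, 4))
scale = np.clip(np.round(2.5 + latent + noise), 1, 4)

print(round(cronbach_alpha(scale), 2))       # internal consistency (>= 0.70 acceptable)
print(corrected_item_total(scale).round(2))  # homogeneity check (>= 0.4 adequate)
```

With real data, each of the nine scales would be passed through these checks separately, mirroring the per-theme analyses described above.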
Once we had a final factor structure, we computed the score for each scale by taking the mean of the items used to build it. We used appropriate multiple-imputation techniques to handle missing data for hospitals with missing data on four or fewer of the scales used to build the final QMSI [15]. The scores of the extracted scales were then summed to construct the final QMSI, and the number of scales was subtracted from this sum to bring the lower bound of the index down to zero.
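The index construction just described (scale scores as item means, summed, then shifted so the minimum is zero) can be sketched as follows; the nine-scale layout matches the text, but the example hospitals are hypothetical extremes.

```python
import numpy as np

def qmsi(scales):
    """Compute the QMSI for one hospital.

    `scales` is a list of nine sequences, each holding the 1-4 item
    scores of one scale. Each scale score is the mean of its items;
    the index is the sum of the nine scale scores minus the number
    of scales, giving a theoretical range of 0 (all items scored 1)
    to 27 (all items scored 4).
    """
    scale_scores = [np.mean(items) for items in scales]
    return sum(scale_scores) - len(scale_scores)

# Hypothetical extremes: full implementation vs. nothing available
best = [[4, 4, 4]] * 9
worst = [[1, 1, 1]] * 9
print(qmsi(best))   # -> 27.0
print(qmsi(worst))  # -> 0.0
```

Subtracting the number of scales is what moves the theoretical range from 9-36 down to the 0-27 reported for the index.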
In order to validate our instrument, we further examined correlations with two other measures of quality management based on on-site visits by external auditors. These other constructs were the Quality Management Compliance Index (QMCI) and the Clinical Quality Improvement Index (CQII) [16]. The QMCI measures the compliance of healthcare professionals, managers or others responsible in the hospital with quality management strategies. The CQII measures the implementation of clinical quality strategies by healthcare professionals. Both measures are based on on-site visits of external auditors and are described separately in this supplement [16].
We used Pearson's correlation coefficients to assess the relationships between the QMSI, QMCI and CQII, deeming coefficients between 0.20 and 0.80 acceptable [10,14,17]. If the QMSI measures the implementation of QMSs in hospitals, a positive, non-collinear relationship between the QMSI and the two more independent measures of quality management (QMCI and CQII) is to be expected. Because the content of the three instruments overlaps only partly, the coefficients were not expected to be very high.
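A minimal sketch of this convergent-validity check, assuming the indices are available as hospital-level numeric vectors (the values below are made up for illustration):

```python
import numpy as np

def convergent_ok(a, b, lo=0.20, hi=0.80):
    """Pearson r between two hospital-level indices; acceptable when the
    association is positive but non-collinear (0.20 <= r <= 0.80)."""
    r = float(np.corrcoef(a, b)[0, 1])
    return lo <= r <= hi, r

# Hypothetical index values for ten hospitals
qmsi_scores = np.array([12.0, 15.5, 18.0, 20.5, 9.0, 22.0, 17.0, 14.0, 25.0, 19.5])
qmci_scores = qmsi_scores * 0.5 + np.array(
    [2.0, -1.5, 0.5, 1.0, 3.5, -2.0, -0.5, 2.5, 1.5, -3.0])

ok, r = convergent_ok(qmsi_scores, qmci_scores)
print(ok, round(r, 2))  # related but not collinear
```

The same check would be applied between the QMSI and the CQII; a coefficient near 1 would suggest the instruments measure the same thing, while one near 0 would suggest no shared construct.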

Participants
A total of 188 hospitals participated in the DUQuE project. Quality managers of all the hospitals responded to the questionnaire, but five quality managers did not provide enough data to calculate the nine scales and the QMSI. Background characteristics of the participating hospitals and of the quality managers who filled in the questionnaire are given in Table 1. Table 2 gives an overview of the factor loadings, Cronbach's alphas and corrected item-total correlations for each of the nine scales retained from factor analysis and used to build the QMSI. These nine scales were quality policy documents (three items), quality monitoring by the board (five items), training of professionals (nine items), formal protocols for infection control (five items), formal protocols for medication and patient handling (four items), analyzing performance of care processes (eight items), analyzing performance of professionals (three items), analyzing feedback and patient experiences (three items) and evaluating results (six items). We eliminated 10 of the original 56 items due to low factor loadings. As seen in Table 2, factor loadings ranged from 0.34 ('benchmarking') to 0.89 ('professional training in quality improvement methods'), with most items achieving acceptable loadings (>0.40). Confirmatory analysis supported this final structure (not reported here).

Structure, reliability and validity
Cronbach's alphas for internal consistency reliability were satisfactory for all scales (Cronbach's alpha = 0.72-0.87) except 'analyzing feedback and patient experiences' (Cronbach's alpha = 0.48). Based on the theoretical importance of feedback of patient experiences and benchmarking, we decided to keep this scale in the QMSI. The corrected item-total correlations were acceptable, within the range of 0.20-0.80, although the coefficients for items in the 'analyzing feedback and patient experiences' scale were consistently lower than those for the other scales. As shown in Table 3, the inter-scale correlations ranged from 0.11 (between 'analyzing feedback and patient experiences' and 'formal protocols for infection control') to 0.70 (between 'evaluating results' and 'analyzing performance of care processes'). No inter-scale correlation exceeded the pre-specified 0.70 threshold, and all were deemed acceptable.
The validity of the QMSI was further explored by analyzing its correlations with two other measures of quality management, namely the QMCI and the CQII. Correlation coefficients were within the acceptable range of 0.20 to 0.80 (Table 4).

Descriptive statistics for the QMSI and its scales
Descriptive statistics of the items used to build the scales and the index are provided in Table 5. All items of the questionnaire were on a Likert-type response scale from 1 to 4. The average score on the individual items was ∼3, with lower average scores for items related to analyzing the performance of professionals. Some floor and ceiling effects were found, with a high proportion of respondents scoring at the lower or upper end of the answer categories, especially for patient complaint analysis and monitoring patient opinions.
For these items, 80-90% of the hospitals had implemented the activities. The overall QMSI, based on the nine scales, ranged from 0 to 27 points; the mean score of the participating hospitals was 19.7 points (SD 4.7).

Discussion
We set out to develop and validate an index (the QMSI) to measure QMSs in European hospitals. We found the QMSI, with its 46 items, to be reliable and valid for the assessment of QMSs in European hospitals. The answers to the 46 items can be summarized in an index expressing the extent of implementation of quality management activities, such as quality policies, methods for continuous improvement and procedures for patient complaint handling or staff education. The QMSI was found to be useful for differentiating between hospitals on nine separate scales and on the index as a whole. The nine scales of the QMSI represent the managerial aspects of quality management and leave room for investigating associations of quality management with leadership, patient and professional involvement and organizational culture, concepts that are assumed to influence the extent of implementation of quality management in hospitals.

Comparison with earlier studies
The newly developed DUQuE instrument has good psychometric properties, consists of up-to-date questionnaire items, can be used in various European countries and is not too time-consuming for respondents (46 items; 9 dimensions). Earlier instruments have between 17 and 179 items and 3-13 dimensions [1]. The DUQuE instrument covers four of the six domains of QMSs found in the literature. Intentionally, the QMSI does not cover the domains of leadership and patient involvement, because in the DUQuE framework these are influencing factors for the implementation of a QMS and not part of the managerial aspects of quality management itself.
The clear sampling frame, with randomly selected hospitals across EU countries, gives this study higher external validity than existing research on QMSs. In line with previous research, no individual focal area appears to account for the entire variance associated with the implementation of a QMS. Quality management is a combination of policy, monitoring of quality improvement by the board, professional development, monitoring of the performance of processes and knowing the relevant patient-related outcomes.

Limitations of the study
This study has some limitations. The QMSI is based on the perceptions of the hospital's quality manager. Although the questionnaire data were self-reported, the on-site visits suggested that they were reasonably reliable. Despite the random selection of hospitals, selection bias among participating hospitals cannot be ruled out; in some countries in particular, the number of participating hospitals was smaller than initially planned. Furthermore, the final study sample was too small to carry out a cross-cultural validation. Therefore, further data collection and analysis will be needed before we can recommend the instrument for official use in cross-country comparisons. A positive point is that hospital and country coordinators did not report problems with the understandability or applicability of the questionnaire.

Implications for research, policy and practice
As patients and purchasers expect the best possible quality of care, healthcare providers have to prove that they constantly work on quality improvement and safer healthcare. Our study has developed an efficient instrument to measure the implementation of quality management strategies in nine focal areas. We kept some items with ceiling effects, because these can still support policy-makers at the EU level in stimulating the development of QMSs in less-developed countries. Areas with floor effects, like monitoring physician performance, are recognized as important for the years to come and for further spread in European hospitals. The instrument and resulting index could be used for future comparative studies on quality management and for baseline assessments by hospitals or purchasers. A questionnaire is less time-consuming than a site visit and seems to give a reasonably reliable overall picture of the development and implementation of a QMS. Earlier research has shown that this kind of instrument is useful for monitoring the implementation of QMSs over time [18]. More importantly, it can be used to test the assumption that enforcing certain quality management policies and strategies will lead to the desired effects, possibly linking management strategies to quality and safety outcomes. Forthcoming DUQuE work will lend additional validity to the QMSI through the investigation of its relationships with other constructs and outcomes such as hospital external assessment, quality orientation of hospital boards, social capital, organizational culture, safety culture, clinical indicators and patient-reported experience measures.

Conclusion
The newly developed and validated index, and the instrument underlying it, present an important tool for measuring, monitoring and, potentially, improving quality management in European hospitals. The QMSI is part of a broader group of instruments developed in the European research project 'Deepening our Understanding of Quality improvement in Europe' (DUQuE).