Case volume and hospital compliance with evidence-based processes of care

Background. For many complex cardiovascular procedures, the well-established link between volume and outcome has rested on the underlying assumption that experience leads to more reliable implementation of the processes of care that have been associated with better clinical outcomes. This study tested that assumption by examining the relationship between cardiovascular case volumes and the implementation of twelve basic evidence-based processes of cardiovascular care. Method and results. Observational analysis of over 3000 US hospitals submitting cardiovascular performance indicator data to The Joint Commission during 2005. Hospitals were grouped by their annual case volumes, and indicator rates were calculated for twelve standardized indicators of evidence-based processes of cardiovascular care (eight assessing evidence-based processes for patients with acute myocardial infarction and four evaluating evidence-based processes for heart failure patients). As case volume increased, so did indicator rates, up to a statistical cut-point that was unique to each indicator (ranging from 12 to 287 annual cases). t-Test analyses and generalized linear mixed effects logistic regression were used to compare the performance of hospitals with case volumes above or below the statistical cut-point. Hospitals with case volumes above the cut-point had indicator rates that were, on average, 10 percentage points higher than hospitals with case volumes below the cut-point (P < 0.05). Conclusion. Hospitals treating fewer cardiovascular cases were significantly less likely to apply evidence-based processes of care than hospitals with larger case volumes, but only up to a statistically identifiable cut-point unique to each indicator.


Introduction
It has been well established that for many complex medical and surgical procedures, patients receiving services in high-volume hospitals have reduced risks of mortality and complications compared to patients receiving these same services in lower-volume hospitals [1-5]. While this relationship between volume and outcome has been consistently documented in the medical literature, the underlying causal factors have been less well understood [6]. Despite this relative lack of understanding, many health care purchasers, led by the Leapfrog Group, have sought to take advantage of the presumed correlation between volume and outcome by initiating 'evidence-based referral' processes. Through evidence-based referral, patients are directed towards high-volume centers (and away from low-volume centers) using case volume thresholds as an indicator of quality for certain cardiovascular procedures [7]. Recent analyses, however, suggest that this approach may be of questionable benefit, is difficult to implement, and comes with its own potential risks to patients [8].
This should not be surprising. It has been observed that 'volume' is not an indicator of quality, but rather a structural characteristic that has been associated with quality [9]. Surgical volume, for example, is generally assumed to reflect institutional and surgeon experience with a procedure, which is further assumed to be related to better surgical technique and more reliable implementation of the processes of care which have been associated with superior clinical outcomes. While the relationship between certain processes of care and outcomes has been well established through randomized controlled trials and qualitative research, the assumed relationship between hospital volume and the reliable implementation of specific evidence-based processes of care has never been confirmed. Building upon the well established link between volume and outcome for cardiovascular procedures, we hypothesized that hospitals with higher cardiovascular case volumes (i.e. greater experience) would also be more likely to implement evidence-based cardiovascular processes of care.

Indicators
Since July 2002, US hospitals have been collecting data on standardized indicators of quality developed by The Joint Commission (a US-based accrediting body that evaluates over 4000 hospitals representing over 90% of the hospital beds in the US) and the US government's Centers for Medicare and Medicaid Services. These indicators have been endorsed by the National Quality Forum [10], adopted by the Hospital Quality Alliance, effectively used to track hospital performance [11,12] and used to demonstrate health care disparities [13,14]. Twelve of these quality indicators address evidence-based processes of care recommended by the American College of Cardiology/American Heart Association treatment guidelines for patients with acute myocardial infarction (AMI) and heart failure [15-18], and they are reported to The Joint Commission on a quarterly basis as individual rates.
Indicator rates reflect the number of times a hospital treated a patient in a manner consistent with specific evidence-based clinical practice guidelines (numerator cases), divided by the number of patients who were eligible to receive such care (denominator cases). Indicator specifications exclude patients that are not eligible to receive certain types of care where appropriate (e.g. patients with contraindications to a specific medication, patients transferred to another hospital). Therefore, ideal performance should be characterized by indicator rates that approach 100% (although rare or unpredictable situations, and the reality that no indicator is perfect in its design, make consistent performance at 100% improbable). Similarly, indicators that address the appropriate timing of a procedure (thrombolysis within 30 min of arrival, percutaneous coronary intervention (PCI) within 120 min of arrival) include only those patients who actually received the procedure. Reliability of indicator data, which are self-reported by hospitals, has been reported to exceed 90% [19].
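As a minimal sketch of this calculation (using invented patient records; the real indicator specifications are far more detailed), an indicator rate divides the count of guideline-consistent care by the count of eligible cases after exclusions:

```python
# Minimal sketch of an indicator-rate calculation on hypothetical patient
# records. Real Joint Commission specifications define eligibility and
# exclusions in much greater detail.
def indicator_rate(patients):
    """Percentage of eligible (denominator) patients who received
    guideline-consistent care (numerator), after applying exclusions."""
    eligible = [p for p in patients
                if not p.get("contraindicated") and not p.get("transferred")]
    if not eligible:
        return None  # no denominator cases for this period
    received = sum(1 for p in eligible if p["received_care"])
    return 100.0 * received / len(eligible)

patients = [
    {"received_care": True},
    {"received_care": False},
    {"received_care": True, "contraindicated": True},  # excluded from denominator
    {"received_care": True},
]
print(indicator_rate(patients))  # 2 of 3 eligible patients: 66.67%
```

A perfectly performing hospital would approach 100% on such a rate, which is why the indicators are interpreted directly as compliance measures.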

Participants
Data used for this study were extracted from The Joint Commission's performance indicator database. To be eligible for inclusion, hospitals had to submit data to The Joint Commission on AMI or heart failure indicators from January 2005 through December 2005. The total number of participating hospitals varied by indicator, ranging from 1260 hospitals (submitting data on PCI within 120 min of arrival) to 3138 hospitals (submitting data on both discharge instructions and left ventricular function assessment for heart failure patients). All hospitals included in the analyses were accredited by The Joint Commission.

Statistical analysis
To evaluate the relationship between case volume and performance, hospitals were divided into groups based upon their annual case volume for each indicator. The collective numerator and denominator counts for each indicator volume group were then aggregated to calculate an indicator rate for that group. For example, all hospitals with an annual count of 34 cases eligible for the aspirin prescribed at discharge indicator were grouped and analyzed together. An aggregate indicator rate was calculated from the total number of numerator and denominator cases for the group. This process was repeated for each discrete grouping of case volumes and the results were plotted for each of the 12 quality indicators in a bubble plot [i.e. a scatter plot where the area of each bubble in the plot is proportional to the number of hospitals that were aggregated at each data point ( Fig. 1)].
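The grouping step described above can be sketched as follows, using hypothetical hospital counts; each hospital contributes its numerator and denominator to the pool for its annual case volume:

```python
from collections import defaultdict

# Sketch of the volume-group aggregation, with made-up hospital counts.
# Hospitals with the same annual eligible-case count are pooled, and one
# aggregate indicator rate is computed per volume group.
def aggregate_by_volume(hospitals):
    """hospitals: list of (numerator, denominator) pairs, one per hospital.
    Returns {annual volume: (aggregate rate %, number of hospitals)}."""
    groups = defaultdict(lambda: [0, 0, 0])  # volume -> [num, den, n_hospitals]
    for num, den in hospitals:
        g = groups[den]  # annual case volume = denominator count
        g[0] += num
        g[1] += den
        g[2] += 1
    return {v: (100.0 * num / den, n) for v, (num, den, n) in groups.items()}

hospitals = [(30, 34), (28, 34), (10, 12)]
print(aggregate_by_volume(hospitals))
# Two hospitals with 34 cases are pooled: (30+28)/(34+34); one hospital has 12
```

In the bubble plot, the number of hospitals in each group (the third element here) would set the bubble's area.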
To explore the relationship between the indicator rate and volume graphically, a nonparametric curve was fit to the data, weighted by the number of hospitals aggregated at each data point, using the loess scatterplot smoother with accompanying 95% confidence intervals (CI) [20]. The resulting nonparametric curves consistently suggested a pattern of an initial increase in indicator rate followed by a leveling off of the curve to a constant plateau value. To determine the inflection point (cut-point) of each of the curves, an empirical segmented regression model was fit to the data with a quadratic curve fitting the initial rise in indicator rate followed by a plateau in the indicator rate [21]. The inflection point is a function of the estimated segmented regression model parameters and is produced as a byproduct of fitting the model. The inflection point can be considered a case-volume cut-point, or the point at which the curve stops rising and reaches a plateau. At that point case volume is no longer associated with the indicator rate.
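A minimal illustration of such a quadratic-plateau fit on synthetic data (not the authors' estimation code; the model form, parameters and starting values here are assumptions) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a segmented quadratic-plateau model fit to synthetic data.
# The cut-point tau is estimated as the vertex of the quadratic, so it
# falls out of the fitted parameters as a byproduct of the fit.
def quad_plateau(v, b0, b1, tau):
    """Quadratic rise in rate that levels off smoothly at volume tau."""
    b2 = -b1 / (2.0 * tau)  # constrains the quadratic's vertex to v = tau
    rise = b0 + b1 * v + b2 * v ** 2
    plateau = b0 + b1 * tau + b2 * tau ** 2
    return np.where(v < tau, rise, plateau)

rng = np.random.default_rng(0)
v = np.linspace(1, 200, 120)                      # annual case volumes
true = quad_plateau(v, 50.0, 0.8, 90.0)           # plateau near 86%
rate = true + rng.normal(0, 1.0, v.size)          # noisy observed rates
(b0, b1, tau), _ = curve_fit(quad_plateau, v, rate, p0=(40.0, 1.0, 60.0))
print(round(tau, 1))  # estimated case-volume cut-point, near the true 90
```

In practice the fit would also be weighted by the number of hospitals at each volume, as the loess smoothing was.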
The case volume cut-points were then used to group and compare hospitals for each indicator. For each measure, a generalized linear mixed effects logistic regression model was fit to determine the impact of the case volume cut-point on the hospital measure rate using hospital as the random effect. The odds ratio (OR) of the fixed effect case volume cut-point (with hospitals less than the cut-point used as the reference group) with its 95% CI was used as a measure of the impact.
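The quantity being reported can be illustrated with a much simpler sketch: an odds ratio with a Wald 95% CI computed from hypothetical pooled 2x2 counts. This ignores the random hospital effect used in the actual model and is meant only to show how an OR and its CI are read:

```python
import math

# Illustrative odds ratio with a Wald 95% CI from pooled 2x2 counts
# (hypothetical numbers). The study's mixed-effects model accounts for
# clustering within hospitals; this sketch does not.
def odds_ratio_ci(above_yes, above_no, below_yes, below_no):
    """OR for receiving care in above-cut-point vs. below-cut-point hospitals."""
    or_ = (above_yes * below_no) / (above_no * below_yes)
    se = math.sqrt(1/above_yes + 1/above_no + 1/below_yes + 1/below_no)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(900, 100, 750, 250)
print(round(or_, 2), round(lo, 2), round(hi, 2))
# A CI that excludes 1 indicates a statistically significant difference
```

With the below-cut-point hospitals as the reference group, an OR of 2 means a patient in an above-cut-point hospital has twice the odds of receiving the evidence-based process.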
To evaluate if the effect of hospital characteristics (bed size, urban/rural, teaching/non-teaching) modulated the impact of case volume cut-point, a generalized linear mixed effects logistic regression model was fit to each indicator to determine the interactions of hospital characteristic variables with the cut-point, controlling for hospital characteristic variables and cut-point main effects. The logistic regression model used the hospital's rate as the dependent variable; cut-point, teaching status, rural/urban status and bed size as the fixed effects; and hospital as the random effect. A fit of these models revealed that the interaction of urban/rural status and bed size with the cut-point was non-significant for most of the indicators, therefore these two hospital characteristics were only included as main effects in all of the models.
The ORs (with 95% CI) were then calculated using the fitted model for the main effects urban/rural status and bed size, and for the four cut-point by teaching status categories using the non-teaching hospital with sample size less than the cut-point as the reference group.

Results
For each cardiovascular process of care indicator, it was possible to calculate a unique cut-point: the number of annual cases a hospital needs to treat before the influence of case volume appears to dissipate (Table 1). On average, hospitals with a case volume at or above the cut-point performed statistically significantly better (the 95% CI for the OR is greater than 1) than hospitals with a case volume below the cut-point (Table 2).
Once case volume rises to or above the cut-point, volume ceases to play a significant role in the indicator rates. In other words, after reaching the cut-point, increases in case volume are no longer associated with improved performance on the indicators (Fig. 1). The case volume cut-points are unique to each indicator and ranged from a low of 12 cases for the angiotensin converting enzyme inhibitor or angiotensin receptor blocker for patients with left ventricular systolic dysfunction prescribed at discharge (discharge medications for systolic dysfunction) indicator in the AMI set, up to 287 cases for left ventricular function assessment in the heart failure set. Absolute differences in the mean indicator rates between hospitals above and below the volume cut-points on individual indicators varied from 1.8 percentage points for discharge medications for systolic dysfunction in the heart failure set (83.2 at or above; 81.4 below) up to 14.5 percentage points for discharge instructions (59.2 at or above; 44.7 below).
Indicators on which overall performance was poorest (thrombolysis within 30 min of arrival; discharge instructions; PCI within 120 min of arrival) were also the indicators with the greatest difference between the performance rates of hospitals at or above the cut-points and hospitals below them. In hospitals below the case volume cut-point, only about a third of eligible patients (31.5%) receive thrombolysis within 30 min of arrival, compared with 42.5% in hospitals above the cut-point. For discharge instructions the difference is 14.5 percentage points (44.7 vs. 59.2%), and for the smoking cessation advice/counseling - AMI indicator it is 13.1 percentage points (79.6 vs. 92.7%). It would appear, therefore, that the relationship between case volume and compliance with evidence-based processes is most obvious for indicators that show the greatest variation among hospitals. Conversely, the two indicators with the highest overall compliance rates (aspirin at arrival and aspirin at discharge) are among the indicators with the smallest differences in rates above and below the cut-point (4.2 percentage points for aspirin at arrival; 6.9 for aspirin at discharge).
On 10 of the 12 indicators, more hospitals in the sample had case volumes that fell below the cut-point than above it. On two indicators (discharge medications for systolic dysfunction - AMI and discharge instructions) the majority of hospitals had case volumes above the cut-point. For several indicators the difference between the number of hospitals with case volumes above and below the cut-point is quite large (Table 1). For example, 78.7% of the hospitals reporting on the heart failure smoking cessation counseling indicator had case volumes that fell below the cut-point, as did 89.8% of hospitals on the thrombolysis within 30 min of arrival indicator and 94.2% on the PCI within 120 min of arrival indicator.
Differences between hospitals with case volumes above or below the cut-points were maintained even after considering a hospital's characteristics. Results are presented in Table 3 for teaching hospitals, rural hospitals and hospitals with 100 or fewer beds; the sample included 326 teaching hospitals, 934 rural hospitals and 820 hospitals with 100 or fewer beds. ORs for the cut-point interaction and teaching status variable are presented in Table 3, using non-teaching hospitals with case volumes below the cut-point as the reference group, with all other predictors adjusted for in the model. The OR for teaching and non-teaching hospitals above the cut-point, compared to a non-teaching hospital below the cut-point, was statistically significant for all indicators except the heart failure discharge medications for systolic dysfunction (for non-teaching hospitals) and thrombolysis within 30 min of arrival (for teaching hospitals). For one category the OR could not be calculated because there were no teaching hospitals below the cut-point. The ORs for teaching and non-teaching hospitals above the cut-point differed the most for aspirin prescribed at discharge (teaching hospitals OR = 5.31; non-teaching hospitals OR = 2.95), beta blocker prescribed at discharge (teaching OR = 4.67; non-teaching OR = 2.31), and left ventricular function assessment (teaching OR = 3.56, non-teaching OR = 1.37).

Discussion
This study explores the relationship between case volume and hospital performance on indicators of evidence-based processes of care. Analyses found that performance on the AMI and heart failure process indicators was strongly associated with case volume, but only up to a specific and statistically identifiable cut-point. Once that point is reached, the influence of case volume dissipates. This relationship is illustrated by the graphs in Fig. 1. Hospitals with a case volume at or above the cut-points were, on average, more likely to comply with evidence-based processes of care than hospitals where case volume fell below the cut-point. For example, clinical guidelines recommend that PCI be administered in a timely fashion in order to maximize its benefits [17]. Based upon the results of this study, a patient's chances of receiving timely PCI increase by up to 4.6 percentage points for every additional 10 PCI patients seen by the hospital. However, the rate of increase diminishes as the cut-point of 87 eligible patients per year is approached. A patient treated in a hospital with only 10 eligible PCI cases per year, for example, would have a 54% chance of receiving timely reperfusion. In a hospital with 30 annual cases, the patient's chances would increase to 63%. By 50 cases, the patient's chances have increased to 69%. As the cut-point is approached, however, volume appears to have less influence. At 60 cases, the rate is 71%; at 70 cases, the rate is 72%; and at 80 cases, the rate of 73% has nearly reached the indicator's volume plateau. Once the cut-point has been reached, the impact of case volume is minimized by other factors, such as those identified through studies of door-to-balloon times [22-24].
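The diminishing volume effect in this worked example can be checked directly from the quoted rates:

```python
# Check of the diminishing volume effect using the timely-PCI figures
# quoted above: (annual eligible cases, % receiving timely PCI).
points = [(10, 54), (30, 63), (50, 69), (60, 71), (70, 72), (80, 73)]

gains = []
for (v0, r0), (v1, r1) in zip(points, points[1:]):
    per_10 = 10.0 * (r1 - r0) / (v1 - v0)  # percentage points per 10 extra cases
    gains.append(round(per_10, 1))

print(gains)  # [4.5, 3.0, 2.0, 1.0, 1.0] -- gains shrink toward the cut-point
```

The per-10-case gain falls steadily from 4.5 points at low volumes to about 1 point near the 87-case cut-point, consistent with the up-to-4.6-point figure from the fitted curve.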
Interestingly, the case volume cut-point varied widely across the indicators. So, while case volume is clearly associated with the implementation of these processes, the number of cases it takes for the association to reach its peak and level off is unique to each process. For indicators such as discharge medications for systolic dysfunction - AMI (12 cases), thrombolysis within 30 min of arrival (16 cases) and smoking cessation advice/counseling - AMI (32 cases) the case volume cut-point is relatively low. Hospitals do not have to see a large number of cardiovascular patients before the impact of case volume on these processes dissipates. Conversely, for aspirin prescribed at discharge (159 cases) and left ventricular function assessment (287 cases) the cut-point is much higher. Hospitals have to see many more cases before they reach the cut-point threshold. One possible explanation for the large difference in measure cut-points may have to do with the specificity of the denominator population and the sensitivity of the indicator. The population of patients targeted by the discharge medications for systolic dysfunction - AMI indicator, for example, is very specific. A patient treated for AMI would need to have moderate to severe systolic dysfunction to be eligible for the indicator population, making the need for specific discharge medications more obvious. The prescription of aspirin at discharge, on the other hand, would be applied to a much larger, and perhaps less obvious, cohort of patients. When this is coupled with the high performance of most US hospitals on the aspirin at discharge indicator (90% of US hospitals have achieved a rate of 90% or better), it is not difficult to see why case volume might be less predictive for the indicator. Therefore, it is not enough to simply assert that patients will get better care in high case volume settings, at least as it relates to the processes of care represented by the indicators in this study. Individual processes need to be looked at independently.
The independent nature of the relationship between specific processes of care and case volume is reinforced by studies in other settings. For example, one recent study found that patients treated for cancer at high-volume hospitals were more likely to receive some processes of care, but not others. Patients in high-volume facilities, for example, were more likely than patients at low-volume hospitals to undergo stress tests (but not echocardiograms or pulmonary function tests), to see medical or radiation oncologists preoperatively (but not other specialists) and to receive invasive monitoring after surgery [6]. A study of pneumonia care processes even demonstrated a statistically significant inverse relationship between hospital case volume and antibiotic timing, but found no relationship between case volume and other indicators of recommended care (i.e. antibiotic selection, blood culture collection and immunization rates) [25].
Indicators on which overall hospital performance was the lowest were the same indicators where the gap between the performance rates of hospitals on either side of the cut-point was widest. At the same time, indicators with the highest overall performance rates demonstrated the smallest difference between organizations above and below the cut-point. This could be, in part, the function of a ceiling effect. When indicator rates top 90% and the ceiling of 100% is approached, the speed at which improvement can occur slows. This allows the performers on the negative side of the distribution to narrow the gap and catch up. Nonetheless, the evidence here indicates that case volume makes the biggest difference on those evidence-based processes of care that, for whatever reasons, are the most neglected. In this study those processes are thrombolysis within 30 min of arrival, PCI within 120 min of arrival and discharge instructions.
The comparisons of hospitals above or below the volume cut-point were repeated within subgroups of hospitals (rural, urban, non-teaching, small and large hospitals) and the overall results were replicated. Additional analysis, examining the interaction between the cut-points and the hospital characteristic variables, provided additional support for the influence of case volume. For example, an AMI patient treated at a hospital with an annual case volume exceeding the cut-point (32 or more eligible cases) would be four times more likely to receive smoking cessation counseling than a patient treated at a hospital with fewer than 32 annual cases. This same pattern holds even after accounting for hospital bed size and rural/urban location. Teaching status, however, had a mitigating effect on the influence of volume for some indicators. This mitigating effect was statistically significant for aspirin prescribed at discharge, beta blocker given within 24 h after arrival, beta blocker prescribed at discharge and left ventricular function assessment. While the reasons for the relationship between teaching status and performance are not clear, previous studies have also shown that teaching hospitals tend to perform better than non-teaching hospitals on several of these process indicators [13,26].

Limitations
Only Joint Commission accredited hospitals were included in this study. The Joint Commission accredits over 80% of US hospitals, covering more than 90% of hospital beds in the country; thus, the majority of unaccredited hospitals are small. While smaller hospitals are likely to have lower case volumes, we do not know whether unaccredited hospitals are also less likely to implement these evidence-based processes of cardiovascular care. The study also relies on hospital self-reported performance indicator data. Despite the previously reported high reliability of these data [19], self-reported data could be a source of bias.

Conclusions
In conclusion, while previous studies have demonstrated a relationship between hospital volume and outcome for some complex cardiovascular procedures, this study is the first to demonstrate a relationship between case volume and hospital performance on specific evidence-based processes of cardiovascular care, perhaps offering support for the assumptions underlying the volume-outcome relationship. Hospitals with small case volumes, therefore, may need to adopt unique solutions to ensure compliance with evidence-based processes. Importantly, it is clear from the graphs that many small hospitals do consistently achieve high indicator rates. Qualitative study of these hospitals may provide insight that could significantly benefit small hospitals. Finally, the results demonstrate that, in the absence of indicator data, volume may serve as a useful proxy for cardiovascular quality. For hospitals with an annual cardiovascular case volume that exceeds 90 cases, however, volume ceases to be a very useful tool for differentiating hospital performance on evidence-based processes of care. Given these limits, volume statistics should only be used as a proxy for hospital quality when evidence-based process indicator data are not available.