Abstract

Objectives Contemporary electronic health records (EHRs) offer a wide variety of features, creating opportunities to influence healthcare quality in different ways. This study was designed to assess the relationship between physician use of individual EHR functions and healthcare quality.

Materials and Methods Sixty-five providers eligible for “meaningful use” were included. Data were abstracted from office visit records during the study timeframe (183 095 visits with 61 977 patients). Three EHR functions were considered potential predictors: acceptance of best practice alerts, use of order sets, and viewing panel-level reports. Eighteen clinical quality measures from the “meaningful use” program were abstracted.

Results Use of condition-specific best-practice alerts and order sets was associated with better scores on clinical quality measures capturing processes in diabetes, cancer screening, tobacco cessation, and pneumonia vaccination. For example, providers above the median in use of tobacco-related alerts had higher performance on tobacco cessation intervention metrics (median 80.6% vs. 66.7%; P < .001), and providers above the median in acceptance of diabetes-related alerts had higher quality on diabetes low-density lipoprotein (LDL) testing (68.2% vs. 59.5%; P = .001). Post hoc examination of the results showed that the positive associations were with measures of healthcare processes (such as rates of LDL testing), whereas there were no positive associations with measures of healthcare outcomes (such as LDL levels).

Discussion Among primary care providers in the ambulatory setting using a single EHR, intensive use of certain EHR functions was associated with increased adherence to recommended care as measured by performance on electronically reported “meaningful use” quality measures. This study is relevant to current policy as it uses quality metrics constructed by contemporary certified EHR technology, and quantitative EHR use metrics rather than self-reported use.

Conclusion In the early stages of the “meaningful use” program, use of specific EHR functions was associated with higher performance on healthcare process metrics.

BACKGROUND AND SIGNIFICANCE

The contemporary electronic health record (EHR) is a complex piece of software with multiple functions and capabilities. It enables a healthcare provider to record patient progress in free text, place prescription orders, receive decision-support alerts and reminders, order laboratory tests, receive and review results electronically, message patients or fellow providers, and perform a variety of other documentation and clinical tasks. The potential for all of these features to improve the quality and efficiency of care is the rationale for the federal EHR incentive program (the so-called “meaningful use” [MU] program), which is promoting EHR adoption and use among physicians and hospitals.1 Today, 6 years after the legislation was passed, nearly three-quarters of ambulatory physicians and more than half of hospitals use EHRs.2 Concerns about the potential adverse effects of EHRs, particularly centered on usability and integration with clinical workflow, remain strong. Nevertheless, a growing number of studies are suggesting that the use of EHRs can in fact be associated with quality improvements, including better adherence to clinical guidelines,3 fewer medication errors,3 improved diabetes management,4 more appropriate antimicrobial prescribing,5 and higher rates of ambulatory preventive care,6 among others.7,8

The 2009 legislation establishing the “meaningful use” program also launched the EHR certification program, which has imposed a certain degree of standardization across EHR products. Nevertheless, the EHR is so complex that individuals can use it in different ways. Both quantitative and qualitative studies are demonstrating that healthcare providers develop highly personalized ways of using EHRs.9,10 For example, a recent analysis of EHR data demonstrated great variability in the rates at which providers updated problem lists, used order sets, and responded to decision support alerts, even when using the same product in the same organizational setting.9 Similarly, a qualitative study demonstrated that providers at one organization differed in how frequently they used macros and other EHR features.10 Across organizations, variability in use may be even greater, as technology use is known to be influenced by factors including usability, the match between the technology and the clinical task it is intended for, and institution-specific workflow processes and training procedures.11–14

This variation among healthcare providers in their use of the many components of the EHR leads to a novel opportunity. Capturing this variability makes it possible to identify, not only whether the EHR affects quality, but what types of EHR usage are linked with quality changes. Capturing use patterns could add explanatory power to studies of the effects of EHRs on healthcare delivery, helping to confirm whether components of the EHR intended to improve healthcare are working as expected, elucidating mechanisms to explain the effects of the EHR, and helping to point out areas for improvement in EHR development.

OBJECTIVES

The objective of the study was to determine whether provider variability in EHR use was linked to healthcare quality. We selected three groups of EHR functions with the potential to improve healthcare quality, and tested the overall hypothesis that providers who used these functions more frequently would perform better on related healthcare quality measures than providers who used them less frequently. Hypotheses focused on (a) use of electronic reminders about recommended primary and preventive care, known as best-practice alerts; (b) use of order sets; and (c) use of panel reports. Healthcare quality was assessed with “meaningful use” clinical quality measures focusing on both processes and outcomes.

MATERIALS AND METHODS

Setting: The Institute for Family Health (IFH) is a network of federally qualified health centers (FQHCs) in New York City and environs. IFH is staffed largely by family practitioners and is recognized as a level III patient-centered medical home by the National Committee for Quality Assurance.15 IFH was an early adopter of EHRs, having used the same commercial EHR system (EpicCare, Epic Systems, Inc, Verona, WI, USA) since 2003. During the first part of data collection for the current study, the institution was preparing to attest for Stage I “meaningful use,” and about half of its family practitioners attested to “meaningful use” under the Medicaid program before data collection was concluded.

Study design: This retrospective cross-sectional study was conducted on data collected from January 2010 through June 2013.

Study sample: We considered all IFH healthcare providers in family practice who met the criteria for eligible provider under the Medicaid “meaningful use” program, who saw patients between January 1, 2012 and June 30, 2013, and who saw at least one patient in each of the 2 calendar years. We excluded family practitioners who saw fewer than 100 unique patients during this 18-month timeframe as these were considered to have insufficient exposure to the EHR and were probably not working full-time in this setting.

Dependent variables (quality measures): For the clinical quality measures (dependent variables), proprietary reports from the EHR were used to compute provider-level scores for 24 “meaningful use” core and menu set clinical quality measures for all included providers (both those who did and those who did not attest for “meaningful use”). Scores are reported as percentages per provider (number of patients for whom the measure was met divided by number of patients eligible for the measure). The proprietary reports included the numerator and the denominator for each physician for each year of data collection (calendar year 2012, and the first 6 months of 2013), but did not identify the patients included. As a result, it cannot be confirmed that the clinical quality metrics computed for each physician draw from exactly the same patient set as the variables computed from the visit-level data. We excluded measures with ceiling effects (median performance 98% or above), floor effects (median performance 0%), fewer than 8 providers with qualifying patients, or a focus solely on pediatric patients, leaving 18 quality metrics (listed in Table 3). The resulting list included both metrics that focused on healthcare processes (such as recommended lipid and hemoglobin A1c testing in diabetes) as well as metrics that focused on healthcare outcomes (such as controlled hemoglobin A1c in diabetes).
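The measure-exclusion rules above can be illustrated with a short sketch. The data layout and function names here are hypothetical, not the study's actual code:

```python
# Sketch of the provider-level quality score and the measure-exclusion rules
# described above (median >= 98% ceiling, 0% floor, fewer than 8 providers).
# Function names and the list-of-scores layout are illustrative assumptions.

def provider_score(numerator, denominator):
    """Quality score as a percentage: patients meeting the measure
    divided by patients eligible for it."""
    return 100.0 * numerator / denominator

def keep_measure(provider_scores, min_providers=8):
    """Return True if the measure survives the exclusion rules:
    enough providers, and a median strictly between 0% and 98%."""
    if len(provider_scores) < min_providers:
        return False
    s = sorted(provider_scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return 0.0 < median < 98.0
```

Applied to the 24 candidate measures, rules of this shape yielded the 18 metrics analyzed in Table 3.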

Independent variables (EHR usage): The development of the EHR usage metrics (reported elsewhere9) involved a team of researchers with training in informatics, medicine, biostatistics, and health services research. In brief, we were informed by task-technology-fit theoretical models, which focus on the interplay between tasks, users, and technologies within organizational settings.13,16 This perspective led us to develop a number of task-level metrics of healthcare provider use of EHRs. For the current study, we selected EHR usage metrics that had the potential to support performance on “meaningful use” clinical quality measures (see Table 2). These were:

  • Acceptance of best practice alerts (BPAs) relevant to 5 preventive services (tobacco cessation, breast cancer screening, colorectal cancer screening, pneumonia vaccination, and body mass index screening) and 4 medical conditions (diabetes, hyperlipidemia, hypertension, and depression). BPAs are reminders about recommended care pushed to the provider, typically at entry to the patient record;

  • Use of order sets relevant to the same preventive services and medical conditions, and

  • Use of an EHR function permitting providers to create reports on results for their entire panel.

We manually classified both BPAs and order sets into categories on the basis of the condition or preventive care service they were associated with (diabetes-related, hyperlipidemia-related, tobacco cessation-related, etc.). For BPAs, we considered the alert to be accepted if the provider clicked “accept” or opened the order set highlighted in the alert. BPA results are given as percentage accepted per provider over the visits during which the alert was triggered (0–100%). Order set usage is reported as number of order sets used per 100 visits per provider. Panel-level reports are reported as number used per provider over all visits. All measures are calculated annually.
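The per-provider usage metrics defined above can be sketched as follows; the event-record layout and field names are invented for the example and are not the study's actual extraction code:

```python
# Illustrative definitions of two of the three usage metrics, assuming a
# flat list of visit-level event records per provider (hypothetical schema).

def bpa_acceptance_pct(events):
    """Percentage of fired alerts accepted, where 'accepted' means the
    provider clicked accept or opened the linked order set (0-100%)."""
    fired = [e for e in events if e["type"] == "bpa_fired"]
    accepted = [e for e in fired if e.get("accepted")]
    return 100.0 * len(accepted) / len(fired) if fired else 0.0

def order_sets_per_100_visits(n_order_sets, n_visits):
    """Order set usage rate: number of order sets used per 100 visits."""
    return 100.0 * n_order_sets / n_visits
```

Panel-report use, the third metric, is a simple count per provider over all visits.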

For descriptive purposes, the Johns Hopkins Aggregated Diagnosis Groups count of comorbidities was computed on all patients and averaged within provider.17 (The algorithm, designed to be applied to claims data, was applied to the encounter/billing diagnoses captured in the EHR but not to the EHR problem list).

Hypotheses: Because the quality measures were available at the provider level only, we computed both independent and dependent variables at the provider level for all analyses. In other words, visit-level data were aggregated into provider-level measures. The hypotheses we tested fell into three groups.

  • A. Condition-specific BPA hypotheses: Use of BPAs about specific preventive or management services would be associated with quality performance on that same preventive or management service (e.g., breast cancer screening BPAs would be associated with breast cancer screening rates).

  • B. Condition-specific order set hypotheses: Use of order sets relevant to specific preventive or management services would be associated with quality metrics linked to that preventive or management service (e.g., diabetes-related orders would be associated with diabetes quality measures, and hyperlipidemia-related orders would be associated with hyperlipidemia-related quality metrics).

  • C. Panel report hypotheses: Use of a function to generate panel-level reports would also be associated with quality performance.

The complete list of hypothesized associations appears in Table 3.

Analyses

For each EHR usage measure, we computed the annual per-provider median; the full-year values for 2010, 2011, and 2012 were weighted at 1, and the 2013 data, because only 6 months were available, were weighted at 0.5.
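The weighting scheme can be illustrated with a sketch. The study aggregated to a per-provider median; a weighted mean is shown here purely to make the 0.5 weight on the half-year of 2013 data concrete, and is not the study's exact aggregation:

```python
# Illustration of the year weights only: full calendar years (2010-2012)
# count with weight 1.0, the 6 months of 2013 data with weight 0.5.
# The study's actual aggregation was a weighted per-provider median;
# this weighted mean is a simplified stand-in.

YEAR_WEIGHTS = {2010: 1.0, 2011: 1.0, 2012: 1.0, 2013: 0.5}

def weighted_usage(annual_values):
    """annual_values: dict mapping year -> usage value for one provider."""
    num = sum(YEAR_WEIGHTS[y] * v for y, v in annual_values.items())
    den = sum(YEAR_WEIGHTS[y] for y in annual_values)
    return num / den
```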

For each of the hypothesized associations, negative binomial regression models were constructed to estimate the median quality scores for providers above and below the median usage level. The models were also used to assess the associations between EHR function use and the quality scores. The offset for each model was set equal to the denominator of the quality score (number of patients eligible for each measure) and the outcome was the numerator (number of patients for whom the measure was met). All models accounted for clustering within provider. Otherwise, the models were unadjusted for provider and patient characteristics for two reasons. First, these quality measures are not risk-adjusted in the “meaningful use” program. Second, the goal for the majority of the measures is 100% compliance, regardless of provider or population characteristics, making it inappropriate to adjust for these characteristics (e.g., appropriate LDL testing is recommended for all eligible diabetic patients).

We report the incidence rate ratios and 95% confidence intervals (CIs), which compare providers who fell above the median in usage of that EHR function (“more intensive users”) and those at or below the median in usage (“less intensive users”). An incidence rate ratio of, for example, 1.30 indicates that providers who were more intensive users of the specified EHR function had 30% higher quality scores than those who used it less intensively.
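For illustration only, the headline quantity can be approximated without a modeling library. The sketch below computes a crude incidence rate ratio from pooled numerators and denominators, with a Wald confidence interval on the log scale under a Poisson approximation; the study's actual models were negative binomial regressions with the denominator as offset and accounted for clustering within provider, which this simplification omits:

```python
import math

def irr_with_ci(num_hi, den_hi, num_lo, den_lo, z=1.96):
    """Crude incidence rate ratio comparing above-median ('hi') with
    at-or-below-median ('lo') providers: pooled rate in each group,
    then their ratio, with an approximate 95% Wald CI on the log scale
    using var(log IRR) ~ 1/num_hi + 1/num_lo (Poisson approximation;
    a simplification of the negative binomial models in the text)."""
    irr = (num_hi / den_hi) / (num_lo / den_lo)
    se = math.sqrt(1.0 / num_hi + 1.0 / num_lo)
    lower = math.exp(math.log(irr) - z * se)
    upper = math.exp(math.log(irr) + z * se)
    return irr, lower, upper
```

For example, an IRR of 1.30 from such a comparison would indicate that more intensive users had 30% higher quality scores than less intensive users.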

As shown in Table 3, 56 associations were tested. Therefore, the Benjamini–Hochberg approach18 was used to adjust for multiple comparisons, producing an effective α of 0.016 as the threshold for significance. Analyses were conducted in SAS 9.3 (SAS Institute, Cary, NC, USA) and Stata 12 (StataCorp, College Station, TX, USA).
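A standard step-up implementation of the Benjamini–Hochberg procedure is sketched below; note that the 0.016 cutoff reported above is data-dependent, determined by the observed p-values rather than fixed in advance:

```python
# Benjamini-Hochberg step-up procedure: sort the m p-values, find the
# largest rank k with p_(k) <= (k/m) * q, and declare significant every
# test whose p-value is at or below that p_(k). Returns the effective
# significance threshold (0.0 if nothing qualifies).

def benjamini_hochberg(p_values, fdr=0.05):
    m = len(p_values)
    ranked = sorted(p_values)
    threshold = 0.0
    for k, p in enumerate(ranked, start=1):
        if p <= fdr * k / m:
            threshold = p  # keep the largest qualifying p-value
    return threshold
```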

Approvals: The institutional review boards of Weill Cornell Medical College and the IFH both approved the study, and a waiver of consent was granted because providers and patients were both de-identified.

RESULTS

The inclusion criteria produced a sample of 65 primary care providers, including 183 095 visits with 61 977 patients between January 1, 2012 and June 30, 2013 (Table 1). Descriptive statistics for the EHR usage measures are presented in Table 2.

Table 1:

Characteristics of 65 providers and their patients

Characteristic Value 
N providers 65 
    Attested for stage I “meaningful use” in 2012, n (%) 40 (62) 
    Female, n (%) 40 (62) 
    MD/DO (vs. advanced practice nurse), n (%) 57 (88) 
    Annual visits, median (Q1–Q3) 1764 (978–2506) 
    Annual unique patients, median (Q1–Q3) 987 (578–1450) 
    ADG comorbidity count of panel, median (Q1–Q3) 0.4 (0.3–0.5) 
No. of patients 61 977 
    Mean age (SD) 37 (21) 
    ADG comorbidity count, median (min, max) 0 (0–6) 
    Used portal account, n (%) 4529 (7) 
    Female, n (%) 37 127 (60) 
    Race, n (%)  
        White 21 975 (35) 
        Black or African-American 15 007 (24) 
        Other 18 511 (30) 
        Unreported/refused 6484 (10) 
    Ethnicity, n (%)  
        Hispanic or Latino 19 590 (32) 
        Not Hispanic or Latino 36 282 (59) 
        Not collected/unknown 6105 (10) 
    Preferred language, n (%)  
        English 53 715 (87) 
        Spanish 5523 (9) 
        Other 322 (1) 
        Not collected/unknown 2417 (4) 
    Last known insurance, n (%)  
        Medicaid fee-for-service or managed care 13 631 (41) 
        Medicare fee-for-service or managed care 4524 (13) 
        Commercial 9653 (29) 
        Uninsured/self-pay 4275 (13) 
        Other 1448 (4) 
    Unknown 50 (<1) 
Table 2:

Use of electronic health record functions among 65 primary care providers

Preventive service or medical condition Median use of function, per provider 
 Best-practice alerts, median (min–max) % accepted Order sets, median (min–max) number used per 100 visits 
Diabetes 39.3 (0.0–64.2) 22.0 (0.2–44.3) 
Pneumonia 23.2 (0.0–87.5) 5.2 (0.0–19.7) 
Breast cancer screening 17.6 (0.0–90.0) 1.2 (0.0–7.3) 
Colorectal cancer screening 15.0 (0.0–92.0) 2.1 (0.0–9.3) 
Hyperlipidemia 11.1 (0.0–93.8) 0.6 (0.0–7.3) 
Antidepressants 6.5 (0.0–44.6) 2.4 (0.0–12.4) 
Tobacco use 5.9 (0.0–94.7) 0.7 (0.0–8.9) 
Hypertension 0.0 (0.0–33.3) 0.0 (0.0–0.4) 
Adult body-mass index 0.0 (0.0–17.2) 0.0 (0.0–1.0) 

Median panel report use per provider per year was 0.0 (min 0.0, max 374.0). Best practice alerts and order sets were first classified by their relevance to the 9 preventive services and medical conditions relevant to the MU clinical quality measures. Within each of these categories, the best practice alert rate was then calculated as number accepted divided by number fired, by provider (see text for definition of "accepted"). The order set rate was calculated as number of order sets used per 100 visits, by provider. The pneumonia BPA studied here was phased out and not used in 2013.

Table 3:

“Meaningful use” clinical quality measures and associations with EHR function use

  Mean quality performance for those above and below median (IRR; 95% CI; P-value) 
Outcome measure NQF identifier Condition-specific BPAs Condition-specific order sets Panel-level report 
Core     
    BMI screening/follow-up, adults 65+ 0421-A 53.3% vs. 52.5% (1.02; 0.96–1.07; .58) NAd 52.0% vs. 53.2% (0.98; 0.94–1.02; .26) 
    BMI screening/follow-up, adults 18–64 0421-B 30.1% vs. 29.9% (1.01; 0.93–1.09; .88) NAd 29.2% vs. 30.5% (0.96; 0.87–1.04; .32) 
    Tobacco use assessment 0028-A 97.4% vs. 98.2% (0.99; 0.98–1.01; .33) 97.7% vs. 97.9% (1.00; 0.98–1.02; .83) 97.1% vs. 98.3% (0.99; 0.97–1.01; .19) 
    Tobacco cessation intervention 0028-B 80.6% vs. 66.7% (1.21; 1.09–1.34; < .001) 81.2% vs. 65.4% (1.24; 1.12–1.38; < .001) 78.9% vs. 69.4% (1.14; 1.04–1.25; .007) 
Menu     
    Controlling high blood pressure Stage 2 0018 58.3% vs. 58.8% (0.99; 0.95–1.03; .69) NAd 58.4% vs. 58.8% (0.99; 0.96–1.03; .76) 
    Breast cancer screening 0031 49.4% vs. 44.2% (1.12; 1.03–1.21; .005) 49.1% vs. 44.4% (1.11; 1.02–1.20; .01) 47.8% vs. 46.2% (1.03; 0.95–1.12; .41) 
    Colorectal cancer screening 0034 47.4% vs. 34.4% (1.38; 1.22–1.55; < .001) 45.0% vs. 36.2% (1.24; 1.09–1.42; .002) 41.9% vs. 40.4% (1.04; 0.92–1.17; .55) 
    Diabetes: Foot exam 0056 62.9% vs. 58.0% (1.07; 1.01–1.13; .02) 60.7% vs. 59.0% (1.03; 0.97–1.09; .31) 62.8% vs. 58.0% (1.08; 1.02–1.15; .012) 
    Diabetes: HbA1c poor control (> 9%)a 0059 18.5% vs. 17.2% (1.07; 0.96–1.20; .22) 18.5% vs. 16.9% (1.09; 0.97–1.23; .13) 17.3% vs. 18.2% (0.95; 0.85–1.06; .38) 
    Diabetes: Blood pressure managementb 0061 63.3% vs. 64.6% (DM alerts) (0.98; 0.94–1.02; .37) 63.8% vs. 63.9% (HTN alerts) (1.00; 0.95–1.04; .95) 63.2% vs. 65.2% (0.97; 0.93–1.01; .14) 63.6% vs. 64.2% (0.99; 0.95–1.04; .70) 
    Diabetes: Urine screening 0062 87.1% vs. 80.0% (1.09; 1.04–1.14; < .001) 85.8% vs. 79.9% (1.07; 1.02–1.13; .004) 83.8% vs. 83.4% (1.00; 0.96–1.05; .85) 
    Diabetes: LDL test performedc 0064-A 68.2% vs. 59.5% (DM alerts) (1.15; 1.06–1.24; .001) 68.4% vs. 60.0% (lipids alerts) (1.14; 1.04–1.24; .003) 66.3% vs. 60.9% (DM orders) (1.09; 0.99–1.20; .08) 66.8% vs. 61.4% (lipids orders) (1.09; 1.00–1.19; .06) 64.7% vs. 63.5% (1.02; 0.93–1.12; .68) 
    Diabetes: LDL value <100 mg/dlc 0064-B 37.0% vs. 39.8% (DM alerts) (0.93; 0.84–1.02; .13) 36.8% vs. 39.7% (lipids alerts) (0.93; 0.85–1.02; .11) 36.7% vs. 40.5% (DM orders) (0.91; 0.83–0.99; .04) 35.4% vs. 41.0% (lipids orders) (0.86; 0.78–0.95; .003) 39.8% vs. 37.2% (1.07; 0.97–1.17; .17) 
    Diabetes: HbA1c value <8% 0575 61.5% vs. 59.7% (1.03; 0.98–1.08; .23) 60.5% vs. 60.7% (1.00; 0.95–1.04; .86) 62.1% vs. 59.4% (1.05; 1.00–1.10; .06) 
    Advising smokers to quit 0027-A 25.3% vs. 27.3% (0.93; 0.80–1.07; .28) 26.3% vs. 26.3% (1.00; 0.87–1.16; .98) 27.7% vs. 25.3% (1.10; 0.95–1.26; .21) 
    Discussing smoking cessation strategies 0027-B 15.0% vs. 11.7% (1.28; 1.02–1.62; .03) 15.7% vs. 11.1% (1.41; 1.11–1.79; .005) 15.8% vs. 11.6% (1.36; 1.10–1.69; .005) 
    Pneumonia vaccination status, adults 65+ 0043 70.9% vs. 60.8% (1.17; 1.05–1.30; .004) 69.9% vs. 57.8% (1.21; 1.09–1.34; < .001) 67.7% vs. 60.4% (1.12; 1.01–1.24; .03) 
    Antidepressant medication management/180 days 0105-B 85.8% vs. 88.7% (0.97; 0.92–1.01; .16) 85.2% vs. 88.9% (0.96; 0.92–1.002; .06) 87.6% vs. 88.0% (1.00; 0.94–1.06; .90) 

Bolded values are statistically significant after adjustment for multiple comparisons by the Benjamini-Hochberg procedure (i.e., P < .016).

aFor this indicator, higher rates indicate lower quality, so an IRR > 1 would be interpreted as an association with lower quality.

bFor blood pressure management in diabetes, alerts/order sets relevant to both diabetes and hypertension were examined as potential predictors.

cFor LDL testing and values in diabetes, alerts/order sets relevant to both diabetes and hyperlipidemia were examined as potential predictors.

dOrder sets associated with BMI screening and hypertension had insufficient use and were not analyzed as predictors.

NQF = National Quality Forum; IRR = incidence rate ratio; CI = confidence interval; BMI = body mass index; HbA1c = hemoglobin A1c; LDL = low-density lipoprotein; DM = diabetes; HTN = hypertension.

There were statistically significant differences in quality performance on the basis of EHR usage in 17 of 56 comparisons even after adjusting for multiple comparisons (Table 3).

A. Condition-specific BPA hypotheses: There were 9 preventive care quality metrics relevant to BMI, tobacco cessation, cancer screening, and pneumonia vaccination. Providers who accepted BPAs at a rate higher than the median had higher quality for 4 of the preventive services measures: tobacco cessation interventions (median quality 80.6% vs. 66.7%; P < .001); breast cancer screening (median quality 49.4% vs. 44.2%; P = .005); colorectal cancer screening (median quality 47.4% vs. 34.4%; P < .001); and pneumonia vaccination (70.9% vs. 60.8%; P = .004).

There were 9 quality metrics pertaining to management (hypertension, diabetes, and depression). Use of condition-specific BPAs was associated with higher quality for: diabetes urine screening (87.1% vs. 80.0%; P < .001) and LDL testing in diabetes (68.2% vs. 59.5%; P = .001 for association with diabetes-related BPAs, and 68.4% vs. 60.0%; P = .003 for hyperlipidemia-related BPAs).

In addition, one negative association was statistically significant. Use of hyperlipidemia order sets was associated with lower rates of LDL control (35.4% vs. 41.0%; P = .003).

B. Condition-specific order set hypotheses: Providers whose use of preventive care order sets was higher than the median had higher quality on 5 preventive services metrics: tobacco cessation interventions (81.2% vs. 65.4%; P < .001); breast cancer screening (49.1% vs. 44.4%; P = .012); colorectal cancer screening (45.0% vs. 36.2%; P = .002); tobacco cessation medications (15.7% vs. 11.1%; P = .005); and pneumonia vaccination (69.9% vs. 57.8%; P < .001). Use of condition-specific order sets was associated with better performance on one management metric, urine screening in diabetes (85.8% vs. 79.9%; P = .004).

C. Panel report hypotheses: Viewing panel-level reports was associated with higher quality for 3 measures: tobacco cessation intervention (78.9% vs. 69.4%; P = .007); foot exam in diabetes (62.8% vs. 58.0%; P = .01); and tobacco cessation medications and strategies (15.8% vs. 11.6%; P = .005).

DISCUSSION

During a period when providers were preparing for and in the early stages of “meaningful use,” primary care providers who made the most use of certain EHR functions performed differently on clinical quality measures than providers who used them less frequently. First, providers who accepted reminders about services appropriate to specific conditions had higher adherence to recommended care for those conditions, as measured by meaningful use quality metrics. Second, providers who chose to use order sets specific to a particular condition had higher adherence to recommended care pertaining to that condition, measured by meaningful use quality metrics. Third, providers who reviewed their entire patient panel using a panel-view function performed better on several quality metrics. Effect sizes were in some cases modest (e.g., the difference between 58% and 63% compliance with diabetic foot exam guidelines) but in others were substantial (a change from 34% to 47% compliance with colorectal cancer screening guidelines, representing a nearly 40% relative increase). These findings suggest that the EHR functions we studied are associated with higher compliance with measures of healthcare quality, including higher rates of cancer screening, diabetes examinations and laboratory testing, tobacco cessation counseling, and vaccinations.

To place this study in context, EHR functions have been evaluated separately before “meaningful use.” Strong evidence supports the impact of decision support alerts and reminders combined with order entry on measures such as appropriate use of preventive care and medication safety.19–22 Order sets also serve as a passive form of decision support that reminds physicians about recommended actions and makes it easier to perform those actions,23 and pre-MU studies have shown that order sets can be associated with better guideline adherence and quality performance.24,25 More limited evidence also suggests that panel-level views or dashboards can be associated with quality improvement.26 In contemporary EHRs, all of these components and many more are routinely incorporated into more comprehensive systems. In several pre-“meaningful use” studies, these complete EHR systems have been examined at the component level, but methods have relied on self-report to capture use of EHR components. Poon et al.7 employed a survey approach to demonstrate that EHR use as a binary factor was not associated with ambulatory physician quality performance on Healthcare Effectiveness Data and Information Set (HEDIS) measures, whereas use of specific EHR functions was associated with significantly better quality scores. The strongest quality associations were those with self-reported use of the EHR problem list, the visit note, and radiology test results. Associations varied by quality metric, with the most associations involving women’s health, colon cancer screening, and cancer prevention measures.7 Also in the ambulatory context, Keyhani et al.27 used national survey data and found no association between availability of several EHR components and quality performance. 
In the inpatient setting, Amarasingham and colleagues used a survey instrument to identify specific elements of the hospital EHR associated with quality and costs, finding effects associated with higher degrees of computerization in notes, order entry, test results, and decision support.8 A survey by Singh et al.28 demonstrated that practitioners varied in how often they missed the electronic delivery of test results, depending on EHR usability, alert volume, patient-provider communication practices, and other factors. However, these previous EHR studies focused on the availability of features or functions and relied upon survey methods. By contrast, we used EHR data directly to measure provider-level use of individual EHR functions, and similarly used EHR-generated measures of healthcare quality.

We did not a priori hypothesize a distinction between process measures and outcome measures. However, post hoc inspection of the findings shows that many of the metrics measuring healthcare processes (such as rates of laboratory testing in diabetes) had a statistically significant positive association with EHR use, whereas the three metrics that measured healthcare outcomes (blood pressure control, LDL levels in diabetes, and HbA1c levels in diabetes) did not show this association. Several explanations are possible. One possibility is that healthcare processes (such as LDL testing) are influenced more rapidly than outcomes (such as LDL control). It is also possible that providers worked more diligently to improve processes for those patients whose outcomes were worse, leading to an inverse association between process and outcome metrics. A third alternative is that quality improvement initiatives increased the rate of LDL testing among patients not previously tested, and these newly tested patients had worse LDL levels. The apparent increase in elevated LDL would then be an example of selection or information bias.29,30 A similar pattern occurred in a UK pay-for-performance program; initially, the program led to an increased rate of cholesterol testing accompanied by a decreased rate of patients achieving cholesterol targets.31,32
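The selection-bias explanation above can be illustrated with a small worked example. The numbers below are hypothetical, chosen only to show the mechanism; they are not data from the study:

```python
# Hypothetical illustration: expanding testing to previously untested patients
# can lower the measured "at goal" rate even if no individual patient's
# control changes, because the newly tested patients are sicker on average.
tested_at_goal, tested_n = 480, 800        # assume 60% of already-tested patients at LDL goal
newly_tested_at_goal, newly_n = 60, 200    # assume 30% of newly tested patients at goal

before = tested_at_goal / tested_n
after = (tested_at_goal + newly_tested_at_goal) / (tested_n + newly_n)

print(f"outcome metric before testing expansion: {before:.0%}")  # 60%
print(f"outcome metric after testing expansion:  {after:.0%}")   # 54%
```

Here the process metric (testing rate) improves while the outcome metric (fraction of tested patients at goal) falls from 60% to 54%, purely through the change in who is measured.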

The study is cross-sectional, so definitive conclusions cannot be drawn about causal relationships. Using data from the same institution, we have previously shown substantial variability in which functions different providers choose to use, ranging from updating problem lists to responding to alerts and reminders.9 Of relevance to the current study, that work found a negative association between alert frequency and alert acceptance across a variety of alerts and reminders.9 We have no independent measure of quality orientation among the physicians, and it is possible that physicians with a stronger quality orientation made different choices about which functions to use. Nevertheless, it is also possible that more intensive use of the functions offered by the EHR caused the observed quality differences. The types of intensive use in this study were the proportion of best practice alerts (BPAs) accepted, the use of order sets, and the use of panel-level reporting tools. Each of these components has a plausible relationship with quality performance. On the other hand, it is also possible that more intense use of the EHR was associated with better documentation practices in general, which would increase the availability of structured data for the EHR-generated quality measures. Clinical workflow changes that support documenting in structured fields substantially improve the accuracy of electronic reporting, altering quality indicators through improved documentation alone.33

This study does not directly measure the effects of the “meaningful use” program because it does not compare providers who have achieved “meaningful use” with others who have not, or with providers who use paper records. Rather, this study seeks to open the “black box” of EHR usage to more precisely determine which components of the EHR may be associated with quality improvement. As EHRs become nearly ubiquitous in American healthcare as a result of the federal “meaningful use” initiative, these analyses suggest ways in which EHRs may be affecting healthcare. This study is relevant to current health information technology policy because the quality metrics were produced by certified EHR technology in compliance with “meaningful use” guidance. In early stages of the “meaningful use” program, eligible providers must report these metrics, and in the future, they will be expected to demonstrate improved performance on them. This study also employed EHR usage metrics derived from EHR data rather than self-report or the availability of features at the organizational level. Although previous studies have examined the effectiveness of specific clinical decision support and other EHR components, none to our knowledge has considered a whole set of disparate usage patterns and quality measures in contemporary, multifunctional EHRs in the context of current national policy. This kind of real-world study is essential for understanding how policy and technology are likely to affect clinical care on a day-to-day, visit-to-visit basis. It also has the potential to help identify more and less effective components of EHRs, which could be useful to guide continued improvements in EHR design as well as end-user training.

Limitations

The usage metrics were developed on the current data set and have not been validated elsewhere, and the study was conducted at a single network using a single commercial EHR, so findings may not be generalizable to other EHR products. Findings from a federally qualified health center (FQHC) may not be fully generalizable to other sorts of healthcare organizations. FQHCs are public or nonprofit private organizations that receive public funding to provide comprehensive primary care services to medically underserved populations.34 FQHCs thus have patient populations that are more likely to be uninsured and of minority race than those of other healthcare organizations. They also often rely more heavily than other primary care organizations on nurse practitioners and physician assistants34 (in our data, nurse practitioners made up 12% of primary care providers).

Another limitation is that the “meaningful use” clinical quality measures included here are a limited set of potential measures of healthcare quality. Other measures of healthcare quality might produce different findings: electronically reported quality measures can in some cases differ substantively from quality assessed through manual chart review.35,36 In addition, patient-level data were available for the usage measures but not the quality measures, so for the current analysis, both predictor and outcome variables were computed at the provider level. The ability to link quality metrics to patient-level laboratory values would have helped to distinguish among the alternative explanations mentioned above. The EHR usage measures should therefore be interpreted as an indicator of provider EHR behavior with all of their patients, not solely the ones eligible for each of the quality measures. The study was cross-sectional rather than longitudinal because the clinical quality measure data were available for only 18 months, and thus caution is warranted before drawing causal inferences. Specifically, as mentioned above, we are unable to distinguish between the explanation that using the EHR functions produced quality improvement and the explanation that providers who attended more closely to quality performance selected EHR functions that enabled them to provide services covered by the quality measures.

The increased adherence to recommended care observed in the study may have short- or long-term implications for the cost of healthcare, but we considered this complex area of study to be outside of the scope of the current project.

CONCLUSION

Primary care providers in the ambulatory setting showed variable patterns of EHR use, and intensive use of certain EHR functions was associated with better performance on several of the electronically reported “meaningful use” quality measures. These functions may help providers deliver higher quality healthcare, and may represent some of the mechanisms through which EHRs are associated with improved quality. As more providers adopt EHRs in the era of “meaningful use,” it will be increasingly important to identify which elements of these complex technologies are associated with quality performance. The methods piloted in this study may provide a framework for future, longer-term work in this area.

FUNDING

This study was supported by the New York State Department of Health (contract C025877).

COMPETING INTERESTS

None.

CONTRIBUTORS

J.S.A. conceived and designed the study, supervised and contributed to the data analysis, interpreted results, and drafted and revised the paper. L.M.K. contributed to study design, data analysis, and data interpretation, and made substantive revisions to the paper. A.E. conducted data analysis and contributed to interpreting results. S.N. contributed to data acquisition and results interpretation, and revised the paper. D.M.S. contributed to study design and data interpretation. D.H. contributed to data acquisition and results interpretation. R.K. helped to conceive the study question and contributed to study design, data interpretation, and revisions of the paper.

ACKNOWLEDGEMENTS

Preliminary findings from this study were presented at the AcademyHealth Research Meeting, San Diego, June 8–10, 2014. We also thank Dr Neil Calman for constructive feedback on the study design and results interpretation.

REFERENCES

1. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. New Engl J Med. 2010;363:501–504.
2. Hsiao C-J, Hing E, Ashman J. Trends in electronic health record system use among office-based physicians: United States, 2007-2013. Natl Health Stat Reports. 2014;75:1–17.
3. Chaudhry B, Jerome W, Shinyi W, et al. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Int Med. 2006;144(10):742–752.
4. Reed M, Huang J, Graetz I, et al. Outpatient electronic health records and the clinical care and outcomes of patients with diabetes mellitus. Ann Int Med. 2012;157(7):482–489.
5. Samore MH, Bateman K, Alder SC, et al. Clinical decision support and appropriateness of antimicrobial prescribing: a randomized trial. JAMA. 2005;294(18):2305–2314.
6. Kern LM, Barron Y, Dhopeshwarkar RV, Edwards A, Kaushal R; HITEC investigators. Electronic health records and ambulatory quality of care. J Gen Intern Med. 2013;28(4):496–503.
7. Poon EG, Wright A, Simon SR, et al. Relationship between use of electronic health record features and health care quality: results of a statewide survey. Med Care. 2010;48(3):203–209.
8. Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Int Med. 2009;169(2):108–114.
9. Ancker JS, Kern LM, Edwards A, et al. How is the electronic health record being used? Use of EHR data to assess physician-level variability in technology use. J Am Med Inform Assoc. 2014;21(6):1001–1008.
10. Lanham HJ, Sittig DF, Leykum LK, Parchman ML, Pugh JA, McDaniel RR. Understanding differences in electronic health record (EHR) use: linking individual physicians’ perceptions of uncertainty and EHR use patterns in ambulatory care. J Am Med Inform Assoc. 2014;21(1):73–81.
11. Berg M, Aarts J, van der Lei J. ICT in health care: sociotechnical approaches. Meth Inform Med. 2003;42(4):297–301.
12. Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Interface Design Considerations. Agency for Healthcare Research and Quality. Rockville, MD; 2009.
13. Ammenwerth E, Iller C, Mahler C. IT adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. 2006;6:3.
14. Ancker JS, Kern LM, Abramson E, Kaushal R. The triangle model for evaluating the effect of health information technology on healthcare quality and safety. JAMIA. 2011;18:749–753.
15. National Committee for Quality Assurance. Standards and Guidelines for Physician Practice Connections - Patient-Centered Medical Home™. http://www.ncqa.org/Portals/0/Programs/Recognition/PCMH_Overview_Apr01.pdf. Accessed March 27, 2015.
16. Burton-Jones A, Straub DW Jr. Reconceptualizing system usage: an approach and empirical test. Inform Sys Res. 2006;17(3):228–246.
17. Weiner J, Starfield B, Steinwachs D, Mumford L. Development and application of a population-oriented measure of ambulatory care case-mix. Med Care. 1991;29(5):452–472.
18. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc, Series B. 1995;57(1):289–300.
19. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. New Engl J Med. 2001;345(13):965–970.
20. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163(12):1409–1416.
21. Kaushal R, Barrón Y, Abramson EL. The comparative effectiveness of 2 electronic prescribing systems. Am J Manag Care. 2011;17(SP):SP88–SP94.
22. Kaushal R, Kern LM, Barron Y, Quaresimo J, Abramson EL. Electronic prescribing improves medication safety in community-based office practices. J Gen Int Med. 2010;25(6):530–536.
23. Bobb AM, Payne TH, Gross PA. Viewpoint: controversies surrounding use of order sets for clinical decision support in computerized provider order entry. JAMIA. 2007;14(1):41–47.
24. Thiel SW, Asghar MF, Micek ST, Reichley RM, Doherty JA, Kollef MH. Hospital-wide impact of a standardized order set for the management of bacteremic severe sepsis. Crit Care Med. 2009;37(3):819–824.
25. Ballard DJ, Ogola G, Fleming NS, et al. Impact of a standardized heart failure order set on mortality, readmission, and quality and costs of care. Intern J Qual Health Care. 2010;22(6):437–444.
26. Feldstein AC, Perrin NA, Unitan RA, et al. Effect of a patient panel-support tool on care delivery. Am J Manag Care. 2010;16(10):256–266.
27. Keyhani S, Hebert PL, Ross JS, Federman A, Zhu CW, Siu AL. Electronic health record components and the quality of care. Med Care. 2008;46(12):1267–1272.
28. Singh H, Spitzmueller C, Petersen N, Sawhney M, Sittig D. Information overload and missed test results in electronic health record–based settings. JAMA Int Med. 2013;173(8):702–704.
29. Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–252.
30. Delgado-Rodríguez M, Llorca J. Bias. J Epidemiol Community Health. 2004;58(8):635–641.
31. McGovern M, Williams D, Hannaford P, et al. Introduction of a new incentive and target-based contract for family physicians in the UK: good for older patients with diabetes but less good for women? Diabetic Med. 2008;25:1083–1089.
32. McGovern MP, Boroujerdi M, Taylor MW, et al. The effect of the UK incentive-based contract on the management of patients with coronary heart disease in primary care. Fam Pract. 2008;25:33–39.
33. Baron RJ. Quality improvement with an electronic health record: Achievable, but not automatic. Ann Int Med. 2007;147:549–552.
34. Health Resources and Services Administration (HRSA). Health Center Data Statistics. http://bphc.hrsa.gov/healthcenterdatastatistics/index.html. Accessed December 19, 2014.
35. Kern LM, Malhotra S, Barrón Y, et al. Accuracy of electronically reported “meaningful use” clinical quality measures: A cross-sectional study. Ann Int Med. 2013;158(2):77–83.
36. Parsons A, McCullough C, Wang J, Shih S. Validity of electronic health record-quality measurement for performance monitoring. JAMIA. 2012;19(4):604–609.
