Abstract

Objective

Poor electronic health record (EHR) usability contributes to clinician burnout and poses patient safety risks. Because EHRs are customized and configured at each site, usability and safety testing of each implemented system is required, which is resource intensive. We developed and pilot-tested a self-administered EHR usability and safety assessment tool, focused on computerized provider order entry (CPOE), which can be used by any facility to identify specific issues. In addition, the tool provides recommendations for improvement.

Materials and Methods

An assessment tool consisting of 104 questions was developed and pilot-tested at 2 hospitals, one using a Cerner EHR and the other using Epic. Five physicians at each site participated in and completed the assessment. Participant response accuracy compared to actual EHR interactions, consistency across participants, and usability issues identified through the tool were measured at each site.

Results

Across sites, participants answered an average of 46 questions in 23 min with 89.9% of responses either correct or partially correct. The tool identified 8 usability and safety issues at one site and 7 at the other site across medication, laboratory, and radiology CPOE functions.

Discussion

The tool shows promise as a method to rapidly evaluate EHR usability and safety and provide guidance on specific areas for improvement. Important improvements to the evaluation tool were identified including the need to clarify certain questions and provide definitions for usability terminology.

Conclusion

A self-administered usability and safety assessment tool can serve to identify specific usability and safety issues in the EHR and provide guidance for improvements.

Lay Summary

Electronic health records (EHRs) can improve care delivery by making it easier to access important patient information and by expediting the ordering of medications, laboratory tests, and diagnostic images, among other benefits. However, the usability of these software systems, which is defined as how efficiently, effectively, and satisfactorily the technology can be used, has been a longstanding challenge for most healthcare facilities. These usability challenges have direct patient safety consequences and can lead to patient harm. To help healthcare facilities identify specific EHR usability and safety challenges we developed a self-assessment tool. The self-assessment tool can be used by any healthcare facility, regardless of level of usability knowledge, to identify specific usability and safety challenges. The tool provides specific guidelines for how to address identified issues. We pilot-tested the self-assessment tool at 2 different healthcare systems and identified numerous usability issues at each site. Opportunities to improve the tool were identified. This type of self-assessment tool shows tremendous promise as a low-cost and scalable method to identify and address usability and safety challenges.

INTRODUCTION

Poor electronic health record (EHR) usability, which is the extent to which the technology can be used efficiently, effectively, and satisfactorily, continues to contribute to healthcare provider burnout and poses patient safety risks.1–8 Usability challenges, such as too many medications listed on a menu (a visual display issue), can lead to selection of the wrong medication from the list, resulting in a patient receiving a medication that was never intended for them or receiving the wrong dose.9 These types of usability issues have been shown to lead to patient harm in both adults and children.2,10

Identifying specific usability issues with implemented EHRs and making improvements has proven to be difficult, as it generally requires significant investment by any organization that undertakes this task. There are 2 primary reasons for this. First, because EHR systems from the same vendor are configured and customized at each healthcare facility, a “one size fits all” EHR usability solution will not address the unique usability challenges at each healthcare facility.11 For example, one study comparing time, clicks, and errors across different implementations of products from the same EHR vendor found 3-fold differences in these metrics for certain functions such as ordering medications or imaging.11 Because of these differences, each healthcare facility must assess the usability of its implemented EHR to identify areas for optimization. Second, usability optimization can be expensive: many healthcare facilities do not have in-house usability expertise and must pay for external evaluation and optimization, and site-specific usability testing of the implemented EHR is time-consuming.

Different tools and frameworks have been developed to assess EHR usability and to classify the types of usability issues. Examples include the Usability Problem Taxonomy, a model for classifying detected usability issues to support analysis and eventual remediation; the Classification of Usability Problems (CUP) scheme, which provides a classification structure; and the User Action Framework, a hierarchy of usability classification categories.13–15 These tools and frameworks provide important structure for how to identify, classify, and define usability issues. However, they are not specific to healthcare and do not provide specific guidance on how to address the identified usability issues.

We developed an EHR usability and safety assessment tool that can be self-administered by any healthcare facility without usability experts. The tool supports the identification of EHR usability issues that may have safety implications and provides specific recommendations for how to address the identified issues. We pilot-tested the assessment tool at 2 different healthcare facilities, each of which uses a different EHR vendor product, to evaluate whether the tool can be used by healthcare facilities to accurately capture EHR usability and safety issues.

METHODS

Assessment tool design

The goal of the assessment tool is to enable healthcare facility self-identification of usability and safety issues in the EHR with an initial focus on medication, laboratory, and radiology aspects of computerized provider order entry (CPOE). The tool was developed by an interdisciplinary team of clinicians, human factors specialists, and health information technology (IT) experts. The assessment tool consists of clinical scenarios that are presented to a practicing healthcare provider who completes the scenarios using their facility’s EHR. The scenarios and questions were designed to elicit information about prominent usability and safety issues identified in the literature. The decision to focus on these usability issues stemmed from past research analyzing thousands of patient safety event reports that identified these issues as prominent and associated with patient safety.10,16 The usability issues were:

  • Visual display: EHR display of information is confusing, cluttered, or inaccurate resulting in clinician difficulty interpreting information.

  • Availability of information: EHR availability of clinically relevant information is hindered because information is entered or stored in the wrong location or is otherwise inaccessible.

  • System automation and defaults: The EHR automates or defaults information that is unexpected, unpredictable, or not transparent to the clinician.

  • Alerting: EHR alerts or other feedback are inadequate because they are absent, incorrect, or ambiguous.

  • Data entry: EHR data entry is difficult or not possible given the clinicians’ work process preventing the clinician from appropriately entering the desired information.

The evaluation tool consists of a total of 104 questions hosted in REDCap, an open access and Health Insurance Portability and Accountability Act compliant data collection software. There is a set of 40 core questions and, depending on the responses, additional questions may be asked; with branching logic, there is the potential for an additional 64 questions. Table 1 shows a breakdown of the questions by CPOE function and usability issue. The core questions were based on the prominent usability issues associated with each of the CPOE functions and whether there are clear usability guidelines to address each issue. About half of the questions (17 core, 45 total) related to medication ordering functionality given the prevalence and severity of medication errors. Laboratory and radiology ordering functionality each accounted for approximately a quarter of the questions (11 core, 29 total), and there was 1 question related to demographics. From a usability perspective, most questions focused on identifying visual display usability issues (14 core, 44 total), followed by data entry (14 core, 31 total), system automation and defaults (5 core, 14 total), availability of information (6 core, 8 total), and alerting (0 core, 6 total). Each usability question was created to elicit information about one specific usability issue (eg, visual display) so there was no overlap in the usability issues identified from each question. There was a different scenario for each aspect of CPOE: medication ordering, laboratory ordering, and radiology ordering.
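To make the branching structure concrete, the sketch below shows one way core questions and response-dependent follow-ups could be represented programmatically. This is a minimal illustration only; the question IDs, wording, and branching conditions are hypothetical and do not reproduce the actual REDCap instrument.

```python
# Minimal sketch of core questions with response-dependent follow-ups,
# similar in spirit to REDCap branching logic. Question IDs, wording, and
# conditions are hypothetical and do not reproduce the actual instrument.
QUESTIONS = [
    {"id": "med_search_count", "core": True,
     "text": "How many results are displayed when you search for the medication?",
     "show_if": None},
    {"id": "med_search_order", "core": False,
     "text": "Are the search results displayed in a logical order?",
     # Follow-up shown only when more than 10 results were reported.
     "show_if": lambda answers: answers.get("med_search_count") == ">10"},
]

def applicable_questions(answers):
    """Return IDs of the questions to present given the responses so far."""
    return [q["id"] for q in QUESTIONS
            if q["show_if"] is None or q["show_if"](answers)]

# A participant reporting more than 10 results triggers the follow-up question.
print(applicable_questions({"med_search_count": ">10"}))
```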

Table 1. Number of assessment questions by EHR CPOE functionality and usability category

CPOE function | Visual display | Data entry | System automation and defaults | Availability of information | Alerting | Demographics | Total
Medication | 23 (8) | 10 (4) | 6 (3) | 4 (2) | 2 (0) | | 45 (17)
Laboratory | 14 (4) | 11 (6) | 4 (1) | | | | 29 (11)
Radiology | 7 (2) | 10 (4) | 4 (1) | 4 (4) | 4 (0) | | 29 (11)
Demographics | | | | | | 1 (1) | 1 (1)
Total | 44 (14) | 31 (14) | 14 (5) | 8 (6) | 6 (0) | 1 (1) | 104 (40)

Note: Number of core questions is shown in parentheses.

Of the 104 questions, 91 (87.5%) were objective and could be graded for accuracy while 13 (12.5%) were subjective in nature with no discernable correct answer. For example, subjective questions asked whether any lab fields were irrelevant or if any medication fields were redundant. Subjective questions were removed from the analyses of accuracy and consensus but were included in the analysis of usability.

Each question was aligned with a specific usability recommendation based on several different usability resources, including the Nielsen usability heuristics and publications from the National Institute of Standards and Technology (NIST), the Office of the National Coordinator for Health Information Technology (ONC), The Joint Commission, and the Institute for Safe Medication Practices (ISMP).17–22 After completing the assessment tool, the facility can review the responses and the specific recommendations, which provide actionable information for improving the usability and safety of the EHR's CPOE functions.

An example scenario asks a physician to place an order for prednisone and to indicate whether part of the medication name appears in tall man lettering during the ordering process. This scenario and question relate to the visual display usability issue, and the corresponding ISMP safety guidance is to use tall man lettering. A sample of the medication-related usability and safety recommendations embedded in the tool is provided in the Supplementary Appendix.
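As a concrete illustration of how such a check maps a displayed name to the tall man guidance, the sketch below compares a displayed medication name against an expected tall man rendering. The name pair shown is one ISMP-style example; the lookup table and function are illustrative assumptions, not logic embedded in the tool.

```python
# Illustrative sketch: flag whether a displayed medication name uses the
# expected tall man lettering. The lookup table is an example ISMP-style pair,
# not the tool's embedded logic.
TALL_MAN = {
    "prednisone": "predniSONE",
    "prednisolone": "prednisoLONE",
}

def uses_tall_man(displayed_name):
    """Return True if the displayed name matches the expected tall man form."""
    expected = TALL_MAN.get(displayed_name.lower())
    return expected is not None and displayed_name == expected

print(uses_tall_man("predniSONE"))  # True: tall man lettering present
print(uses_tall_man("prednisone"))  # False: would be flagged as a visual display issue
```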

Intended use of the assessment tool

The tool is designed to be used within a healthcare facility’s production EHR or a training environment that closely mimics the implemented EHR. To streamline the testing process, the tool is designed to be used with any existing “test” patient in the production environment or any existing patient in the training environment. To perform an evaluation, a site would send the REDCap evaluation tool link and the production/training environment patient ID to the participating provider. The provider would log in to the EHR and complete the evaluation tool by performing the test case actions in the EHR and answering the corresponding assessment tool questions in REDCap. The tool is intended for physician use.

Methodology for testing the evaluation tool

To understand the efficacy of a question-and-answer-based tool for assessing EHR usability, the tool was piloted at 2 different hospitals that use different EHR vendors (Epic and Cerner). The goal of testing was to (1) validate whether participants could use the tool to accurately capture information about aspects of the site’s EHR usability and (2) understand the types of usability issues associated with the implemented EHRs at the 2 different hospitals. While the tool is intended to be completed by a single participant at each healthcare facility, for the purposes of our pilot evaluation of the tool, we had multiple participants from the same site complete the self-assessment to support validation.

Sites and participants: Participants were recruited from 2 hospital systems, one of which used Cerner and the other Epic. This study was approved by the IRB at each participating institution. Ten total participants, 5 from each site, were recruited via convenience and snowball sampling using department-wide emails. Participants were inpatient physicians (residents and attendings), each with more than 6 months of experience with their site’s EHR (Table 2). Participants were compensated for their participation.

Table 2. Participant demographics by site

Site | Physician status | Specialty | Average years of experience with site's EHR [min, max] | Average years of experience with EHRs [min, max]
Site A | 4 Residents (80%), 1 Attending (20%) | 5 Internal Medicine (100%) | 3.1 [1, 4] | 5.5 [3, 6]
Site B | 3 Residents (60%), 2 Attendings (40%) | 5 Emergency Medicine (100%) | 3.8 [2, 7] | 6.2 [5, 8]

Procedure: Study sessions were conducted via video conferencing. Using their own computers, participants filled out the evaluation tool while completing scenarios in their site’s test EHR environment. Because participants were familiar with their EHR, no orientation was required. For pilot testing, a moderator and note-taker watched the participant complete the evaluation tool via screen share. Participants were not asked to conduct a verbal protocol; however, they were instructed to let the moderator know if a question was unclear, and any verbalized comments were documented via audio recording and note taking.

Analysis: There were 3 aspects to our analysis. First, we analyzed participant response accuracy by comparing the participant’s response to the evaluation tool questions to what was documented by the expert moderator and identified through a review of the screen share. Participants’ responses could be marked as “Correct”, “Partially Correct”, or “Incorrect”. Participants’ answers were assessed as follows:

  • Correct if they were aligned with the moderator’s review;

  • Partially correct if they included some similar elements to the moderator’s review but missed or added contrary elements when answering free-text or multiple-choice questions;

  • Incorrect if the participant’s answer did not match the moderator’s review in any way.

Partially correct and incorrect responses were further categorized based on the type of error (ie, misunderstanding EHR functionality, misunderstanding the question, misclick).

The second analysis focused on consensus across the 5 participants from each site which could be impacted by the accuracy of each participant’s response and/or participant workflow variability. For example, in response to a question about the number of medications displayed from a search query, 2 participants from the same site could have different correct responses depending on whether they viewed their personally customized “favorites” search list versus the standard facility search list. These workflow variations could lead to different responses and therefore, different usability and safety assessment results and guidance. To address these differences, the 5 participants’ responses from each site were compared and the response that accounted for the majority was noted as the definitive answer. For example, if 3 participants responded “yes” and 2 responded “no,” the definitive answer was determined to be “yes.” A similar strategy was used to determine final responses for multiple-choice and free-text response answers. Responses that the majority of the participants agreed upon were included as the definitive answer. Questions in which there was no consensus are reported in the results section.
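The majority-rules step described above can be summarized with a short sketch: the most common response across a site's participants is taken as the definitive answer, and questions without a majority are flagged as having no consensus. This is a minimal illustration of the approach, not the study's analysis code.

```python
# Minimal sketch of the majority-rules consensus step: the most common response
# across participants becomes the definitive answer; questions with no majority
# are flagged as "no consensus". Illustration only, not the study's analysis code.
from collections import Counter

def definitive_answer(responses):
    """Return the majority response, or None when no response has a majority."""
    if not responses:
        return None
    answer, count = Counter(responses).most_common(1)[0]
    return answer if count > len(responses) / 2 else None

# Example from the text: 3 of 5 participants answer "yes", so "yes" is definitive.
print(definitive_answer(["yes", "yes", "no", "yes", "no"]))  # "yes"
print(definitive_answer(["A", "B"]))                         # None (no consensus)
```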

Finally, the third analysis focused on determining whether EHR usability and safety issues could be identified at each site based on the assessment tool. The number and types of usability and safety issues were quantified based on correct responses from participants; a correct response could be one on which all participants agreed or one for which the majority of participants gave the correct answer.

RESULTS

Participant response accuracy

On average, participants answered 46 questions (range: 41–57) in 23 minutes (range: 12–39). Response accuracy by site is displayed in Table 3. Across the 2 sites, 80.6% of responses were correct, 9.3% were partially correct, and 10.1% were incorrect.

Table 3. Average distribution of correct, partially correct, and incorrect answers per participant

Site | Correct | Partially correct | Incorrect | Total
Site A | 43 (82.1%) | 4 (7.6%) | 5.4 (10.3%) | 52.4 (100%)
Site B | 36.4 (78.8%) | 5.2 (11.3%) | 4.6 (10.0%) | 46.2 (100%)
Total | 39.7 (80.6%) | 4.6 (9.3%) | 5 (10.1%) | 49.3 (100%)

To further analyze the partially correct and incorrect responses, we examined all of these responses across participants and sites. There were a total of 96 responses from the 10 participants that were either partially correct or incorrect. Of these 96, the most common reason a participant answered a question partially correctly or incorrectly was misinterpretation of information in the EHR (n = 74 of 96, 77.1%). For example, the most common misinterpretation was participants thinking certain fields in the EHR were not required to be populated when they were in fact required. Twenty-one of 96 (21.9%) partially correct or incorrect responses were due to misunderstanding the questions in the assessment tool. Questions were commonly misunderstood due to unfamiliar terminology such as “patient identifier” or “default.” Lastly, for 1 response (1.0%), the participant verbally answered the question correctly during the session but then selected the wrong option in the evaluation tool.

Consensus across participants

At Site A, there were a total of 61 unique questions answered across all participants. Of these questions, 1 (1.6%) had only 1 respondent. Of the 60 unique questions with more than 1 respondent, 35 questions (58.3%) had consensus from all participants and all of these were accurate responses. There were 18 questions (30.0%) in which the majority rules approach was applied and there was a correct response. There were 7 questions (11.7%) in which the majority rules approach was applied and the majority response was either partially correct or incorrect. Most of the questions answered partially correctly or incorrectly were multiple-choice questions that related to defaulted and required fields. In 3 questions (5.0%), users demonstrated different workflows which resulted in different answers; however, all answers were correct.

At Site B, there were a total of 49 unique questions answered across all participants. Of these questions, 1 (2.0%) had only 1 respondent. Of the 48 unique questions with more than 1 respondent, 25 questions (52.1%) had consensus from all participants and were accurate responses. Using the majority rules approach, there were 15 questions (31.3%) that were accurate. In 1 instance (2.1%), there was no consensus based on majority rules. There were 8 questions (16.7%) that were partially correct or incorrect using the majority rules approach. In 4 questions (8.3%), users demonstrated different workflows which resulted in different answers; however, all answers were correct.

Usability issues identified by site

The assessment identified 8 usability and safety issues at Site A and 7 issues at Site B. See Table 4 for the usability and safety issues by CPOE function for each site.

Table 4. CPOE function usability and safety issues identified at each site based on the evaluation tool results

Usability issue | Usability definition | Scenario | Site A usability issue | Site B usability issue
Visual display | EHR display of information is confusing, cluttered, or inaccurate resulting in clinician difficulty interpreting information. | Medication | More than 10 search results are displayed. Search results only display generic name and not brand name. | More than 10 search results are displayed. Search results are not displayed in a logical order.
 | | Laboratory | None identified. | None identified.
 | | Radiology | More than 10 search results are displayed. | More than 10 search results are displayed.
Data entry | EHR data entry is difficult or not possible given the clinicians' work process, preventing the clinician from appropriately entering desired information. | Medication | Redundant order information is requested from the user. | None identified.
 | | Laboratory | Irrelevant order information is requested from the user. | None identified.
 | | Radiology | None identified. | Irrelevant order information is requested from the user.
System automation and defaults | EHR automates or defaults to information that is unexpected, unpredictable, or not transparent to the clinician. | Medication | Search bar fails to automate or default information as expected. | Search bar fails to automate or default information as expected. Search results fail to update as more letters are typed in the search bar.
 | | Laboratory | None identified. | None identified.
 | | Radiology | None identified. | None identified.
Availability of information | EHR availability of clinically relevant information is hindered because information is entered or stored in the wrong location or is otherwise inaccessible. | Medication | None identified. | Search results cannot be filtered.
 | | Laboratory | None identified. | None identified.
 | | Radiology | The search term "Ankle XR" does not produce results. | None identified.
Alerting | EHR alerts or other feedback are inadequate because they are absent, incorrect, or ambiguous. | Medication | None identified. | None identified.
 | | Laboratory | None identified. | None identified.
 | | Radiology | No error message when search does not produce results. | None identified.

Visual display: The assessment tool identified 3 visual display issues at each of Sites A and B. In the medication scenario, both Site A’s (2 issues) and Site B’s (2 issues) EHRs produced more than 10 medication search results. Excessive search results are inefficient because they require the user to wade through unnecessary data to find an appropriate order, and they increase the likelihood of selecting the wrong medication, resulting in a patient safety issue.17,18 EHRs can limit the number of results by removing old and unused medication orders or by providing filtering options to narrow search results. Site A’s second medication visual display issue involved displaying only generic names in medication searches. It is safer to display both the brand and generic name in the search results because doing so can help prevent wrong-medication errors.22 Site B’s second medication visual display issue involved the illogical organization of medication search results. Users should be able to understand the logic behind the organization of search results so they can find their desired order efficiently and safely.19 In the radiology scenario, the same usability issue was identified at Site A (1 issue) and Site B (1 issue): both sites’ EHRs produced more than 10 radiology search results. While excessive search lists cannot always be avoided, as vague searches such as “XR” may naturally produce many results, EHRs can organize information or provide feedback to the user to narrow the search results and increase safety and efficiency.19

Data entry: Two data entry issues were identified at Site A and one at Site B. In the medication scenario, Site A had one issue related to participants identifying medication order fields that they considered redundant, such as separate text boxes for inputting PRN (ie, “as needed”) reason and indication. This redundant information contributes to a cluttered visual display.17 In the laboratory scenario, Site A’s EHR had one issue in which the EHR contained irrelevant lab fields, such as text boxes to input the results of prior labs. In the radiology scenario, Site B’s EHR had one issue related to radiology orders containing irrelevant fields, such as dropdowns for indicating the radiology order was to be performed in the operating room. Unnecessary data entry tasks create inefficiencies in the EHR as well as visual clutter on the screen. Removing the redundant or irrelevant fields creates a faster, simpler, and less error-prone experience for the provider.17,18

System automation and defaults: There was 1 system automation and defaults issue at Site A and 2 at Site B. During the medication scenario, Site A (1 issue) and Site B (2 issues) both lacked an “auto-complete” feature, in which the search bar attempts to pre-populate the user’s search term as additional characters are typed. Auto-complete features increase efficiency and safety because users do not need to type the whole word before searching and can rely on recognition rather than recall when identifying their search term.19 Users of Site B’s EHR also needed to finish typing their query and then hit submit before the search results would populate. Requiring users to hit submit before populating results is inefficient; instead, results should begin to populate beneath the search bar as the user types.19
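As an illustration of the type-ahead behavior recommended here, the sketch below filters a hypothetical order catalog as each character is typed, so results populate without a separate submit step. The catalog contents and matching rule are assumptions for illustration, not the behavior of either site's EHR.

```python
# Illustrative sketch of incremental (type-ahead) search: candidate orders are
# filtered as each character is typed, so no separate submit step is needed.
# The catalog and matching rule are hypothetical examples.
CATALOG = [
    "predniSONE 20 mg tablet",
    "prednisoLONE 15 mg/5 mL oral solution",
    "pravastatin 40 mg tablet",
]

def type_ahead(partial_query, catalog, limit=10):
    """Return up to `limit` orders whose names contain the partial query."""
    q = partial_query.lower()
    return [item for item in catalog if q in item.lower()][:limit]

# Results update with each additional keystroke rather than after a submit.
print(type_ahead("pr", CATALOG))    # all three products match
print(type_ahead("pred", CATALOG))  # narrows to the two prednisone products
```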

Availability of information: Site A and Site B each had one availability of information usability issue. In the medication scenario, Site B’s participants were unable to filter their search results. Filtering is a valuable function for searching large databases because it allows users to remove extraneous information and narrow search results to the most relevant information.19 In the radiology scenario, Site A’s participants needed to input specific keywords in a specific order (ie, “XR Ankle” and not “Ankle XR”) for the search to produce results. Requiring specific search terms makes the system difficult to learn, as users must memorize applicable terminology. Also, not knowing or forgetting terminology may prevent users from finding their target order entirely and delay patient care. Searches should allow for flexible search terminology.19
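One way to support the flexible terminology recommended here is token-based matching, so that "Ankle XR" and "XR Ankle" return the same orders. The sketch below is a minimal illustration under assumed order names; it is not how either vendor implements search.

```python
# Illustrative sketch of flexible (order-insensitive) search terminology:
# an order matches if it contains every query token, regardless of token order.
# The order names are hypothetical examples.
ORDERS = ["XR Ankle 3 Views Left", "XR Ankle 3 Views Right", "CT Ankle Left"]

def flexible_search(query, orders):
    """Return orders containing every query token, in any order."""
    tokens = query.lower().split()
    return [o for o in orders if all(t in o.lower().split() for t in tokens)]

# Both token orders produce the same results, unlike the behavior observed at Site A.
print(flexible_search("Ankle XR", ORDERS))
print(flexible_search("XR Ankle", ORDERS))
```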

Alerting: One usability issue related to alerting was found during the radiology scenario at Site A. When a search does not produce any results, the EHR does not inform the user that no results are available and instead shows a blank screen. Without feedback from the software, users may not understand why their search has not produced results or how to proceed. When no search results are available, the EHR should tell the user that their search has not produced results and provide recommendations for next steps.17

DISCUSSION

Suboptimal EHR usability continues to contribute to clinician burnout and patient safety risks. With most EHRs configured and customized for a specific healthcare facility, improving usability across the board is challenging and site-specific solutions are generally required. To address this challenge, we developed and pilot-tested an EHR usability and safety evaluation tool, focused on CPOE, which can be self-administered by any healthcare facility. The evaluation tool questions were weighted toward medication orders as well as visual display and data entry usability issues given the prominence of these in the health IT and safety literature. This work extends beyond current usability frameworks by focusing on specific usability and safety issues identified with CPOE systems and by providing specific recommendations to address identified issues. The assessment tool moves beyond categorizing usability issues to provide evidence-based solutions filling a major gap.

Our pilot test demonstrated that the tool can be used by different healthcare facilities to identify EHR usability and safety issues on their respective systems. With an average time to completion of 23 min and an average of 46 applicable questions, the tool served to identify 8 usability issues at one site and 7 usability issues at another with little cost to the institution. Importantly, the tool was applicable to 2 different EHRs showing that it can be used across vendor products. In addition, for each usability issue identified, specific recommendations to improve usability were provided so that the EHR can be improved to address the identified issues.

The EHR usability and safety evaluation tool has several implications for informatics and patient safety. After additional testing and validation, the tool can serve to identify usability and safety issues in a more efficient and cost-effective way. Healthcare facilities may not need to invest in tailored usability and safety expertise, and facilities that do not have resources for usability and safety assessment can use this tool to improve their EHRs. Across the 2 sites that were tested there were common issues, including too many options being displayed (visual display issues with medication and radiology ordering) and failure of the search bar to auto-complete as expected (a system automation and defaults issue with medication ordering). These issues should be addressed by healthcare facilities working closely with their EHR vendors.

There are areas for tool improvement. While overall accuracy in responding to questions was high, there are certain questions where clearer terminology and definitions should be provided which may improve accuracy. For example, a definition of what constitutes a patient identifier and defining the concept of interface default values may be helpful. One issue that will need to be addressed is the variation in workflows that may lead to different, correct responses to assessment questions from participants at the same site. Ideally, the tool would evaluate the most common workflow. If the tool is completed by multiple providers and the majority rules approach is applied, the tool could identify usability issues for the most common workflow. Alternatively, this could be achieved by analyzing audit log data to identify common workflows and then specifically testing those workflows. However, additional participants and analyzing audit logs would require additional resources from each test site. Future work will focus on addressing the workflow variation issue.
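For the audit-log approach suggested above, a simple starting point is counting how often each ordered sequence of EHR actions occurs across user sessions and then testing the most frequent sequence. The sketch below illustrates this idea with hypothetical event names and log format; real EHR audit logs differ across vendors.

```python
# Illustrative sketch of identifying the most common ordering workflow from
# audit log data. Event names and the (session_id, action) log format are
# hypothetical; real EHR audit logs vary by vendor.
from collections import Counter

def most_common_workflows(audit_log, top_n=3):
    """audit_log: chronologically ordered (session_id, action) pairs."""
    sessions = {}
    for session_id, action in audit_log:
        sessions.setdefault(session_id, []).append(action)
    counts = Counter(tuple(actions) for actions in sessions.values())
    return counts.most_common(top_n)

log = [
    ("s1", "open_order_entry"), ("s1", "search_favorites"), ("s1", "sign_order"),
    ("s2", "open_order_entry"), ("s2", "search_catalog"), ("s2", "sign_order"),
    ("s3", "open_order_entry"), ("s3", "search_favorites"), ("s3", "sign_order"),
]
print(most_common_workflows(log))  # the favorites-based workflow appears most often
```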

With additional refinement and testing, the EHR usability and safety evaluation tool could be considered by the Center for Medicare and Medicaid Services (CMS) just as the Safety Assurance Factors for EHR Resilience (SAFER) Guides have become part of CMS’ Protect Patient Health Information objective.20 The SAFER guides provide a framework for healthcare facilities to assess the safety and safe use of their EHRs. The usability and safety evaluation tool, like the Leapfrog clinical decision support (CDS) evaluation tool, can be used on a regular basis to support technology optimization and improve patient safety.23–26 The Leapfrog CDS tool has been widely adopted by hospitals to assess the safety of their implemented CDS rules.

There are limitations to our study. The EHR usability and safety assessment tool captures some EHR usability and safety issues but is not exhaustive and does not provide information on the severity of an identified issue. Several usability issues may not be captured by the tool; these issues may have safety consequences and will need to be identified through other means. The tool described here is a proof of concept and will need to be further developed. In addition, the specific usability guidance provided when a usability issue is identified is limited to what can be found in the EHR usability and safety literature, which in turn limits the level of guidance that can be provided. Generalizability is limited since the tool has only been pilot-tested at 2 large healthcare systems.

CONCLUSIONS

We have shown that it is feasible to assess EHR usability and safety with a self-administered survey tool and that this tool can serve to provide specific guidance for usability and safety optimizations. Additional work is needed to improve question clarity, to reduce variability in participant responses, and to expand the scope of the usability issues that can be tested using this methodology.

FUNDING

This work was supported by the Agency for Healthcare Research and Quality (5R01HS023701).

AUTHOR CONTRIBUTIONS

ZP, JLH, SK, and RMR conceived of the idea and developed the self-assessment tool. All authors supported application of the tool, analysis, and writing.

CONFLICT OF INTEREST STATEMENT

None declared.

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.

REFERENCES

1. Melnick E, Dyrbye L, Sinsky C, et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020;95(3):476-87. https://www.sciencedirect.com/science/article/pii/S0025619619308365. Accessed September 30, 2020.

2. Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic health record usability issues and potential contribution to patient harm. JAMA 2018;319(12):1276.

3. Khairat S, Burke G, Archambault H, Schwartz T, Larson J, Ratwani RM. Focus section on health IT usability: perceived burden of EHRs on physicians at different stages of their career. Appl Clin Inform 2018;9(2):336-47.

4. Gomes KM, Ratwani RM. Evaluating improvements and shortcomings in clinician satisfaction with electronic health record usability. JAMA Netw Open 2019;2(12):e1916651.

5. Ratwani R, Fairbanks T, Savage E, et al. A systematic review to identify usability and safety challenges and practices during electronic health record implementation. Appl Clin Inform 2016;7(4):1069-87.

6. Guo J, Irdbarren S, Kapsandoy S, Perri S, Staggers N. eMAR user interfaces: a call for ubiquitous usability evaluations and product redesign. Appl Clin Inform 2011;2:202-24.

7. Middleton B, Bloomrosen M, Dente MA, et al.; American Medical Informatics Association. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013;20(e1):e2-8. https://academic.oup.com/jamia/article-abstract/20/e1/e2/692244. Accessed September 30, 2020.

8. Fairbanks RJ, Bisantz A, Hettinger AZ, Ratwani R, Patterson E, Roth E. Usability in health IT: beyond compliance to meaningful design and assessment. In: Proceedings of the Human Factors and Ergonomics Society; 2016; Los Angeles, CA.

9. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293(10):1197-203.

10. Ratwani RM, Savage E, Will A, et al. Identifying electronic health record usability and safety challenges in pediatric settings. Health Aff 2018;37(11):1752-9.

11. Ratwani RM, Savage E, Will A, et al. A usability and safety analysis of electronic health records: a multi-center study. J Am Med Inform Assoc 2018;25(9):1197-201.

12. Shah T, Borondy Kitts A, Gold JA, et al. Electronic health record optimization and clinician well-being: a potential roadmap toward action. NAM Perspect 2020. doi:10.31478/202008a.

13. Keenan SL, Hartson HR, Kafura DG, Schulman RS. Usability problem taxonomy: a framework for classification and analysis. Empir Softw Eng 1999;4(1):71-104.

14. Hvannberg E, Law EL. Classification of Usability Problems (CUP) Scheme. INTERACT 2003. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.8928&rep=rep1&type=pdf. Accessed March 11, 2022.

15. Andre TS, Hartson HR, Williges RC, Van Rens L. The User Action Framework: A Theory-Based Foundation for Inspection and Classification of Usability Problems. 1999. https://www.researchgate.net/profile/Terence-Andre/publication/221100122_The_User_Action_Framework_A_Theory-Based_Foundation_for_Inspection_and_Classification_of_Usability_Problems/links/580f8ab508aef2ef97afe729/The-User-Action-Framework-A-Theory-Based-Foundation-for-Inspection-and-Classification-of-Usability-Problems.pdf. Accessed March 11, 2022.

16. Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic Health Record Usability Issues and Potential Contribution to Patient Harm. 2018. https://dashboard.healthit.gov. Accessed September 30, 2020.

17. Nielsen J. Usability inspection methods. In: Conference Companion on Human Factors in Computing Systems; 1994:413-4; Boston, MA.

18. Lowry SZ, Quinn MT, Ramaiah M, et al. (NISTIR 7865) A Human Factors Guide to Enhance EHR Usability of Critical User Interactions When Supporting Pediatric Patient Care. National Institute of Standards and Technology; 2012.

19. Wiklund ME, Kendler J, Jochber L, Winger MB. Technical Basis for User Interface Design of Health IT. Grant/Contract Reports (NISTGCR). National Institute of Standards and Technology; 2015.

20. Sittig DF, Ash JS, Singh H. SAFER Guides: Safety Assurance Factors for EHR Resilience. Office of the National Coordinator for Health Information Technology; 2015.

21. The Joint Commission. Sentinel Event Alert 54: Safe Use of Health Information Technology. 2015.

22. Institute for Safe Medication Practices (ISMP). Guidelines for Safe Electronic Communication of Medication Information. 2019.

23. Classen DC, Holmgren AJ, Co Z, et al. National trends in the safety performance of electronic health record systems from 2009 to 2018. JAMA Netw Open 2020;3(5):e205547.

24. Leung AA, Keohane C, Lipsitz S, et al. Relationship between medication event rates and the Leapfrog computerized physician order entry evaluation tool. J Am Med Inform Assoc 2013;20(e1):e85-90.

25. Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010;29(4):655-63.

26. Chaparro JD, Classen DC, Danforth M, Stockwell DC, Longhurst CA. National trends in safety performance of electronic health record systems in children's hospitals. J Am Med Inform Assoc 2017;24(2):268-74.

