Abstract

Objective

To describe types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess underlying knowledge representation.

Materials and Methods

The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded using a standard methodology, with strength of evidence ranging from grade 1 (systematic review) through grade 5 (expert opinion), and grade distributions were compared across source types using the chi-square test. Finally, variations in the underlying representation of these recommendations were identified.

Results

The library contains 742 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5 (expert opinion), vs 83% (526/636) of recommendations from professional society practice guidelines and local best practice guidelines (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were the topic areas supported by the highest quality of evidence. The 3 main variations in the underlying representation of recommendations were “single-decision,” “branching,” and “score-based.”

Discussion

Most recommendations were grade 5, largely because studies to test and validate them are lacking. Recommendation sources vary in the number and complexity of their recommendations and, accordingly, in the structure and syntax of the logic statements they generate. Nonetheless, all can be captured with single-decision, branching, or score-based representations.

Conclusion

In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.

BACKGROUND AND SIGNIFICANCE

Recent federal regulations encourage the adoption and use of clinical decision support (CDS) systems in medical practice. The Health Information Technology for Economic and Clinical Health Act of 20091 promotes meaningful use of electronic health records to achieve specified improvements in care delivery. More recently, the Protecting Access to Medicare Act of 2014 (PAMA)2 aims to promote evidence-based care by including provisions that will require ambulatory providers ordering covered advanced diagnostic imaging exams (eg, computed tomography [CT], magnetic resonance imaging, nuclear medicine) to consult evidence-based appropriate use criteria, created or endorsed by qualified provider-led entities, through certified CDS mechanisms.

The strength of the evidence underlying CDS recommendations may enhance or impede the usability of CDS alerts3 and represents an important attribute in assessing the trustworthiness of practice guidelines.4 Despite the fact that guidelines regarding the appropriate use of imaging are widely available for various clinical scenarios, an assessment of the strength of evidence supporting these guidelines using an objective, broadly accepted methodology has not been readily available. To address this need, we implemented a previously described curated online “evidence library”5 (http://libraryofevidence.med.harvard.edu/).

The curation process begins with analyzing a unit of evidence, defined as an assertion regarding the appropriateness of utilizing a diagnostic imaging procedure for certain indications, taken from a published recommendation, guideline, systematic review, or clinical decision rule. Each unit of evidence is assigned to at least 2 curators, medical librarians trained in grading quality of evidence. This entails manually reviewing clinical studies listed in the bibliography of the published recommendation and in supplementary files (eg, evidence tables). In addition, all annotation attributes for grading (eg, guideline, source, imaging modality, body region, recommendation) are fully captured for each unit of evidence. After curator reconciliation, a validating clinician (from a relevant medical specialty) reviews the evidence and grading assignments. Inter-annotator agreement was previously reported to be 97% with κ of 0.88 after grading standardization.5

In the evidence library, we use a comprehensive annotation framework to grade and store evidence-based recommendations pertaining to the ordering of diagnostic imaging exams. Each recommendation is converted into an “IF…THEN” statement, such that a single statement contains sufficient knowledge to make an independent assertion to perform or not perform an imaging procedure. The following is a sample statement in “IF…THEN” format:

IF [age >X] AND [symptom] AND [sign] AND NOT [symptom] THEN NOT [procedure]

The IF-THEN statements use logic operators similar to those in the Arden Syntax, a Health Level Seven (HL7) standard for representing medical logic modules utilized in CDS systems.6,7 The logic operators combine phrases to form compound IF-THEN logic statements. This formal representation allows us to subsequently map attributes to a standard terminology (eg, Systematized Nomenclature of Medicine – Clinical Terms [SNOMED CT]), because attributes in the “IF” and “THEN” clauses (with phrases separated by logic operators) can be mapped automatically. It also facilitates future translation into a shareable, standard representation that conforms to CDS standards (eg, Fast Healthcare Interoperability Resources [FHIR]). Using standards for CDS representation enables interoperability between electronic health record and CDS systems, which in turn enhances workflow by automating some aspects of data entry.
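As an illustration only, the sketch below shows one way such a statement could be held as structured attributes so that each phrase in the IF and THEN clauses can later be mapped to SNOMED CT; the class and field names are assumptions made for this example and do not reflect the library's internal schema.

```python
# A minimal sketch (not the library's internal schema): one IF...THEN statement
# stored as structured attributes so that each phrase can later be mapped to a
# standard terminology such as SNOMED CT.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Attribute:
    text: str                        # eg, "midline spinal tenderness"
    negated: bool = False            # True when the phrase is preceded by NOT
    snomed_ct: Optional[str] = None  # filled in once a terminology mapping exists

@dataclass
class LogicStatement:
    if_attributes: List[Attribute]   # phrases in the IF clause
    if_operator: str                 # "AND" or "OR" joining the IF phrases
    then_attribute: Attribute        # the procedure in the THEN clause

# IF [age >65] AND NOT [headache] THEN NOT [CT head]  (purely illustrative)
stmt = LogicStatement(
    if_attributes=[Attribute("age >65"), Attribute("headache", negated=True)],
    if_operator="AND",
    then_attribute=Attribute("CT head", negated=True),
)
```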

The curation process was not entirely manual; we utilized custom-designed software to facilitate authoring and curation of evidence. When a guideline is authored, required elements (eg, guideline name, source) described previously5 are entered in a structured form. An administrator enters the names of 2 curators and a validating clinician. The ungraded evidence then proceeds to both curators simultaneously for asynchronous annotation. When the curators disagree, the software supports reconciliation by allowing either curator (or the administrator) to change a grade before the evidence is directed to the validating clinician. All curators can add references and comments for other curators, and the software lets them e-mail one another (with a click of a button) to facilitate reconciliation. Finally, grading is completed and the evidence is deposited in the library.

Each logic statement is graded using a system developed by the Oxford Centre for Evidence-Based Medicine.8 The grading methodology has been described in detail previously.5 In summary, each logic statement derived from a recommendation source is graded level 1–4 (with subgrades a–c), based on the quality of a validating study cited in the recommendation source. If no study is cited, it is graded as level 5, indicating expert opinion.

Each recommendation is also given a separate grade according to a system developed by the US Preventive Services Task Force (USPSTF)9 for stratifying the quality of level 5 recommendations. A level 5 recommendation that is a synthesis of a large body of evidence is given the USPSTF grade “non-I” (non-insufficient), indicating that while it lacks direct validation, it remains an evidence-based recommendation. Any level 5 recommendation that is supported by neither direct validation nor a general body of evidence is given the USPSTF grade “I” (insufficient). All recommendations graded 1–4 are automatically given the USPSTF grade “non-I.” Finally, annotation attributes are captured for each logic statement, including guideline name, source, imaging modality, body region, and disease entity (ie, sign or symptom).
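A minimal sketch of this stratification logic is shown below; the function name and boolean flag are illustrative assumptions, not part of the curation software.

```python
def uspstf_grade(oxford_level: int, synthesizes_body_of_evidence: bool = False) -> str:
    """Illustrative sketch of the stratification described above (not the curation software)."""
    if 1 <= oxford_level <= 4:
        return "non-I"                      # graded 1-4: automatically non-insufficient
    if oxford_level == 5 and synthesizes_body_of_evidence:
        return "non-I"                      # level 5 synthesizing a larger body of evidence
    return "I"                              # level 5 with neither validation nor a body of evidence

assert uspstf_grade(2) == "non-I"
assert uspstf_grade(5, synthesizes_body_of_evidence=True) == "non-I"
assert uspstf_grade(5) == "I"
```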

OBJECTIVE

In this paper, we aim to: (1) describe the knowledge currently represented in the evidence library, (2) report the evidence quality of knowledge content in the library, and (3) assess knowledge representation for various types of clinical knowledge.

MATERIALS AND METHODS

This descriptive study was exempt from Institutional Review Board review.

Recommendation sources

The recommendations regarding appropriate use of imaging used to populate the evidence library were derived from a variety of sources and a number of publishers. Sources were identified through a literature search of Medline (1980–2016) using the text search terms “decision rule,” “decision instrument,” “clinical algorithm,” “prediction rule,” “low-risk criteria,” “high-risk criteria,” “practice guideline,” and “clinical policy,” and the Medical Subject Headings terms “radiography,” “radiology/standards,” “ultrasonography,” and “decision support techniques.” Specific techniques for identifying clinical prediction studies described by Wong et al.10 were used. This literature search was augmented by reviewing the reference lists of identified articles. Each source was then individually examined to determine its eligibility for entry into the evidence library. Approximately 85% were excluded from the final collection of evidence sources because they: (1) contained non–imaging-related recommendations, (2) duplicated recommendations published in a previous source, or (3) contained rules and scoring systems that had not been validated since 2000.

Imaging-related decision rules, prediction rules, and risk-assessment scores commonly utilized in current medical practice were also identified through medical calculator websites such as MDCalc (www.mdcalc.com) and QxMD (www.qxmd.com). Imaging-related clinical practice guidelines were identified through the National Guideline Clearinghouse11 and by searching the websites of various professional societies for published policies and statements. All relevant documentation for local best practice guidelines was also obtained from various health care institutions. Imaging-related guidelines were obtained primarily from 2 public sources, the American College of Radiology (ACR) Appropriateness Criteria database12 and the American College of Cardiology Appropriate Use Criteria (AUC) list.13 ACR is also a qualified provider-led entity and can use its own AUC to fulfill PAMA requirements for ordering advanced diagnostic imaging.

From all these recommendation sources, we prioritized grading recommendations that were included in the Centers for Medicare & Medicaid Services priority clinical areas14,15 and recommendations from the Choosing Wisely® campaign.16

Categories of clinical knowledge

Recommendations regarding the appropriate use of imaging were classified into the following categories:

  1. Clinical decision rule.17 This is a clinical tool that quantifies the individual contributions that various components of the history, physical examination, and basic laboratory results make toward the diagnosis, prognosis, or likely response to treatment of a patient. This category also encompasses clinical prediction rules and clinical scoring systems, which make implicit rather than explicit recommendations regarding the ordering of imaging based on the pretest probability of a suspected diagnosis.

  2. Professional society guideline without numerical scale.18 This describes any published policy or statement by a professional medical society in the United States. Also included in this category are recommendations by the USPSTF and any recommendations endorsed by a professional medical society on the Choosing Wisely website16 that are not independently published by that society. Recommendations are explicitly stated without a numerical scale for appropriateness.

  3. Professional society guideline with numerical scale.12,13 This describes any professional society guideline as described above, consisting of a set of recommendations that are individually rated by the degree of appropriateness with a numerical value, typically from 1 to 9.

  4. Local best practice guideline.19 This describes any decision rule or guideline developed or adapted by an institution and implemented within its own CDS system.

Quality of evidence of various types of clinical logic

For each category, we determined the evidence quality of all recommendations included in the evidence library and assessed the proportion of recommendations with evidence quality better than grade 5 (ie, given a grade of 1–4). We also identified the clinical topic areas whose recommendations had the best evidence quality on average.

Knowledge representation of various types of clinical knowledge

We initially described an IF…THEN representation for clinical recommendations.5 For a guideline with a numerical appropriateness scale, for instance, this representation identifies all clinical criteria (eg, signs, symptoms) that must be present to warrant imaging, which are included in the IF phrase; when those criteria are present, the THEN phrase recommends performing the imaging examination that the appropriateness scale rates as appropriate. Thus, representing guidelines from the ACR12 yields one IF…THEN statement for each imaging modality for a specific clinical variant. Converting local best practice guidelines and clinical prediction rules is less straightforward. We describe the transformation of various recommendations into IF…THEN statements and determine the number of IF…THEN statements generated for each category of recommendation.
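To make the conversion concrete, the sketch below turns one hypothetical clinical variant with numerical ratings into one IF…THEN statement per modality; the variant, the ratings, and the cutoff of 7 for “appropriate” are assumptions for illustration and are not taken from the evidence library.

```python
# Hypothetical clinical variant and ratings; the cutoff of 7 for "appropriate" is an
# assumption for illustration and is not taken from the evidence library.
variant_criteria = "[right lower quadrant pain] AND [age >=18]"
modality_ratings = {"CT abdomen/pelvis": 8, "US abdomen": 6, "MRI abdomen": 4}

def to_statements(criteria: str, ratings: dict, cutoff: int = 7) -> list:
    """Generate one IF...THEN statement per imaging modality for a clinical variant."""
    statements = []
    for modality, rating in ratings.items():
        verb = "" if rating >= cutoff else "NOT "
        statements.append(f"IF {criteria} THEN {verb}[{modality}]")
    return statements

for s in to_statements(variant_criteria, modality_ratings):
    print(s)
```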

RESULTS

The evidence library has grown substantially in volume and variety of content since its implementation and initial description in 20155 (Figures 1 and 2). To date, it includes a total of 189 recommendation sources identified as described above: 41 clinical decision rules, 116 professional society guidelines without numerical scale, 25 sets of professional society guidelines with a numerical scale, and 7 local best practice guidelines. Together, these sources generate a total of 742 individual imaging-related recommendations.

Figure 1. Growth of evidence library content, by number of recommendation sources, over time.

Figure 2. Distribution of sources within the evidence library by type.

Grades and quality of recommendation sources

Among all recommendations currently in the evidence library, 73% (542/742) were based on expert opinion (ie, grade 5). When assessed by type of source, 15% (16/106) of recommendations from clinical decision rules were grade 5, while 83% (526/636) of recommendations from professional society practice guidelines without numerical scale, professional society guidelines with numerical scale, and local best practice guidelines were grade 5 (Table 1; P < .0001, chi-square test). The chi-square test compared the proportion of grade 5 recommendations from clinical decision rules with that from guidelines (professional society or local).
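This comparison can be reproduced from the counts in Table 1; the short sketch below (illustrative, not the analysis code used for the study) builds the 2 × 2 table of grade 5 vs grade 1–4 recommendations for decision rules vs guidelines and runs the chi-square test.

```python
# Counts taken from Table 1: grade 5 vs grade 1-4 recommendations, for clinical
# decision rules vs all guideline sources (professional society and local).
from scipy.stats import chi2_contingency

observed = [[16, 90],     # clinical decision rules: grade 5, grade 1-4
            [526, 110]]   # guidelines: grade 5, grade 1-4
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, P = {p:.1e}")   # P is far below .0001
```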

Table 1. Evidence quality of the different recommendation sources

Recommendation source | No. of sources | No. of recommendations | Received Oxford grade 1–4 (%)
Clinical decision rules | 41 | 106 | 90 (85)
Professional society guidelines (without numerical scale) | 116 | 398 | 71
Local best practice guidelines | 7 | 12 | 8
Professional society guidelines (with numerical scale) | 25 | 226 | 31
All guidelines combined | 148 | 636 | 110 (17)
Total | 189 | 742 | 200 (27)

The 3 topic areas supported by the highest quality of evidence were minor head trauma, suspected pulmonary embolism, and suspected appendicitis (Table 2).

Table 2. Topic areas supported by the highest quality of evidence

Topic area | Imaging modalities included | No. of recommendations graded 1–4 | Mean Oxford grade
Minor head trauma | CT head | 12 | 1.5
Suspected appendicitis | CT abdomen, CT abdomen/pelvis | 11 | 1.2
Suspected pulmonary embolism | CT pulmonary angiography, pulmonary angiography, ventilation-perfusion scan | 8 | 2.0
Blunt trauma (chest) | CT chest | 8 | 2.5
Blunt trauma (spine) | X-ray cervical spine | 6 | 1.4

Representation of recommendations

We identified 3 main variations in the underlying representation of logic:

1. Single-decision statement. When a decision rule or clinical guideline comprises a number of clinical criteria, any one of which could warrant imaging if positive, it generates what we term a single-decision statement. Each recommendation from such a source produces one IF…THEN statement and its negative equivalent.

Example 1: The National Emergency X-Radiography Utilization Study (NEXUS) criteria for c-spine imaging20 is a decision rule wherein at least 1 of 5 clinical criteria must be positive in the setting of blunt trauma to warrant an X-ray of the cervical spine. If none of the 5 criteria are positive, cervical spinal fracture is considered extremely unlikely and X-ray unnecessary. This decision rule can be described by the following rule criteria and logic statements:

Rule criteria: [age ≥1] AND [blunt trauma]

IF [neurologic deficit] OR [midline spinal tenderness*] OR [altered consciousness] OR [intoxication] OR [distracting injury*] THEN [X-ray cervical spine]

IF NOT ([neurologic deficit] OR [midline spinal tenderness*] OR [altered consciousness] OR [intoxication] OR [distracting injury*]) THEN NOT [X-ray cervical spine]

*indicates term unmapped to standard ontology
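A single-decision statement of this kind reduces to a single OR over the criteria, as in the minimal sketch below; the criterion names mirror the logic statements above, and the function is illustrative rather than an implementation used by the evidence library.

```python
# The criterion names mirror the logic statements above; the function is illustrative,
# not an implementation used by the evidence library.
NEXUS_CRITERIA = ["neurologic deficit", "midline spinal tenderness",
                  "altered consciousness", "intoxication", "distracting injury"]

def nexus_cspine_xray(findings: dict) -> bool:
    """Return True when X-ray of the cervical spine is warranted (any criterion positive)."""
    return any(findings.get(criterion, False) for criterion in NEXUS_CRITERIA)

print(nexus_cspine_xray({"intoxication": True}))                 # True -> [X-ray cervical spine]
print(nexus_cspine_xray({c: False for c in NEXUS_CRITERIA}))     # False -> NOT [X-ray cervical spine]
```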

Example 2: The USPSTF guideline regarding the use of low-dose CT to screen for lung cancer states21:

“The USPSTF recommends annual screening for lung cancer with low-dose computed tomography in adults aged 55–80 years who have a 30 pack-year smoking history and currently smoke or have quit within the past 15 years. Screening should be discontinued once a person has not smoked for 15 years or develops a health problem that substantially limits life expectancy or the ability or willingness to have curative lung surgery.”

This guideline can be converted into the following rule criteria and logic statements:

Rule criteria: [age >55] AND [age <80]

IF [cigarette pack-years ≥30] AND ([smoking] OR [time since stopped smoking <15 years]) AND NOT [limited life expectancy*] AND [candidate for curative lung surgery*] THEN [low-dose CT chest]

IF NOT [cigarette pack-years ≥30] AND ([smoking] OR [time since stopped smoking <15 years]) AND NOT [limited life expectancy*] AND [candidate for curative lung surgery*] THEN NOT [low-dose CT chest]

*indicates term unmapped to standard ontology
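The same single-decision pattern applies when a prose guideline is converted, with the rule criteria gating whether the statement applies at all. The sketch below mirrors the rule criteria and logic statements exactly as written above; the function name and parameter names are assumptions for illustration.

```python
def lung_cancer_screening_ct(age, pack_years, currently_smokes, years_since_quit,
                             limited_life_expectancy, surgical_candidate):
    """Illustrative only; mirrors the rule criteria and logic statements as written above."""
    if not (55 < age < 80):       # rule criteria: [age >55] AND [age <80]
        return None               # statement does not apply to this patient
    return (pack_years >= 30
            and (currently_smokes or years_since_quit < 15)
            and not limited_life_expectancy
            and surgical_candidate)   # True -> [low-dose CT chest]

print(lung_cancer_screening_ct(62, 35, True, 0, False, True))    # True
print(lung_cancer_screening_ct(50, 35, True, 0, False, True))    # None (rule criteria not met)
```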

2. Branching statements. Certain recommendation sources are more complex, consisting of decision branches with application of different sets of criteria to unique patient subgroups. Thus, each produces more than 2 logic statements. Such recommendations exhibit an implicit order, requiring that certain criteria be met before others can be applied.

Example: The Quebec decision rule for radiography in shoulder dislocation22 determines which patients presenting with an anterior shoulder dislocation are highly likely to have a humeral fracture and warrant X-ray of the shoulder. It applies different criteria to patients under the age of 40 than to patients aged ≥40. Because this algorithm has 5 final decision points (see Figure 3), it generates 5 logic statements:

Rule criteria: [anterior dislocation of shoulder] AND [age ≥18] AND NOT [Glasgow coma scale score <13]

IF [age <40] AND ([motor vehicle accident] OR [assault] OR [injury while engaged in sports activity] OR [fall from greater than patient’s own height*]) THEN [X-ray shoulder]

IF [age <40] AND NOT ([motor vehicle accident] OR [assault] OR [injury while engaged in sports activity] OR [fall from greater than patient’s own height*]) THEN NOT [X-ray shoulder]

IF [age ≥40] AND [superficial ecchymosis of the shoulder] THEN [X-ray shoulder]

IF [age ≥40] AND NOT [superficial ecchymosis of the shoulder] AND NOT [first episode anterior dislocation of shoulder] THEN NOT [X-ray shoulder]

IF [age ≥40] AND NOT [superficial ecchymosis of the shoulder] AND [first episode anterior dislocation of shoulder] THEN [X-ray shoulder]

*indicates term unmapped to standard ontology
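Branching logic of this kind maps naturally onto nested conditionals, one branch per patient subgroup, as in the sketch below; the function and parameter names are assumptions, and the composite flag stands in for the OR of the mechanism-of-injury criteria listed above.

```python
def quebec_shoulder_xray(age, high_energy_mechanism, ecchymosis, first_episode):
    """Illustrative evaluation of the 5 decision points above. high_energy_mechanism stands
    for the OR of motor vehicle accident, assault, sports injury, or fall from greater than
    the patient's own height."""
    if age < 40:
        return high_energy_mechanism          # branch for patients under 40
    if ecchymosis:
        return True                           # age >=40 with superficial ecchymosis
    return first_episode                      # age >=40, no ecchymosis: image only a first episode

print(quebec_shoulder_xray(age=30, high_energy_mechanism=False, ecchymosis=False, first_episode=True))  # False
print(quebec_shoulder_xray(age=55, high_energy_mechanism=False, ecchymosis=False, first_episode=True))  # True
```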

3. Score-based statements. As in the recommendation types described above, score-based recommendations also rely on the presence or absence of clinical criteria. However, in this type, not all criteria are weighted equally. The weights of all criteria, represented by point values, are summed to determine a score, and a recommendation to perform an imaging exam is triggered if the score falls above a specified cutoff. As an alternative to generating individual logic statements containing each possible combination of criteria as individual attributes, we found it most practical to represent these criteria in a separate table and generate a new “score” attribute.

Example: The Wells score for stratifying risk of pulmonary embolism.23 Each criterion is assigned a point value (Table 3). The sum of the values generates either the attribute [Wells score >4] or [Wells score ≤4].

Rule criteria: [age ≥18] AND NOT ([pregnant] OR [anticoagulated in the past 48 h])

IF [Wells score ≤4] AND [D-dimer normal] THEN NOT [CT chest]

IF [Wells score ≤4] AND [D-dimer elevated] THEN NOT [CT chest]

IF [Wells score ≤4] AND [D-dimer unknown] THEN NOT [CT chest]

IF [Wells score >4] THEN [CT chest]
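The score-based pattern, shown in the sketch below, sums the criterion weights from Table 3 into a single score attribute and compares it with the cutoff; the function is illustrative and mirrors the statements above, in which a score of 4 or less leads to NOT [CT chest].

```python
# Criterion weights follow Table 3; the function is illustrative and mirrors the
# statements above, in which a score of 4 or less leads to NOT [CT chest].
WELLS_WEIGHTS = {
    "clinical signs or symptoms of DVT": 3,
    "PE is the leading diagnosis": 3,
    "heart rate >100": 1.5,
    "recent immobilization or surgery": 1.5,
    "previously diagnosed PE or DVT": 1.5,
    "hemoptysis": 1,
    "active cancer": 1,
}

def wells_ct_recommended(positive_criteria) -> bool:
    score = sum(WELLS_WEIGHTS[c] for c in positive_criteria)
    return score > 4              # IF [Wells score >4] THEN [CT chest]

print(wells_ct_recommended({"clinical signs or symptoms of DVT", "heart rate >100"}))  # 4.5 -> True
print(wells_ct_recommended({"hemoptysis"}))                                            # 1 -> False
```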

Figure 3. Quebec shoulder dislocation decision tree.

Table 3. Criteria and their weights in points for determination of Wells score

Name | Value
Clinical signs or symptoms of deep vein thrombosis | 3
Pulmonary embolism is the leading diagnosis | 3
Heart rate >100 | 1.5
Recently bedridden for at least 3 days OR major surgery (requiring general or regional anesthesia) within past 4 weeks | 1.5
Previously diagnosed pulmonary embolism or deep vein thrombosis | 1.5
Hemoptysis | 1
Active cancer (treatment or palliation within past 6 months) | 1
None of the above | 0
Table 4. Selected examples of sources producing highly graded single-decision, branching, and score-based logic

Logic type | Source category | Source name | Symptom and/or diagnosis (SNOMED CT) label
Single-decision | Clinical decision rule | Clinical decision rule for diverticulitis24 | Acute diverticulitis
Single-decision | Professional society guideline | Clinical practice guideline: adult sinusitis25 | Acute rhinosinusitis
Single-decision | Clinical decision rule | Ottawa ankle rule26 | Ankle injury
Single-decision | Clinical decision rule | Pediatric appendicitis rule27 | Acute appendicitis, pediatric
Single-decision | Clinical decision rule | Clinical decision rule for multiple blunt traumas28 | Blunt trauma
Single-decision | Clinical decision rule | Clinical decision rule for abdominal trauma29 | Blunt trauma to abdomen
Single-decision | Clinical decision rule | Pediatric Emergency Care Applied Research Network blunt abdominal trauma rule30 | Blunt trauma to abdomen, pediatric
Single-decision | Clinical decision rule | Canadian c-spine rule31 | Blunt trauma, cervical spine injury
Single-decision | Clinical decision rule | NEXUS cervical spine rule20 | Blunt trauma, cervical spine injury
Single-decision | Clinical decision rule | NEXUS blunt chest trauma rule32 | Blunt trauma, intracranial injury
Single-decision | Clinical decision rule | NEXUS minor head trauma rule33 | Blunt trauma, intracranial injury
Single-decision | Clinical decision rule | New Orleans/Charity head trauma rule34 | Blunt trauma, intracranial injury
Single-decision | Local best practice guideline | Brigham and Women’s Hospital local best practice for minor head trauma | Blunt trauma, intracranial injury
Single-decision | Clinical decision rule | Canadian head CT rule35 | Blunt trauma, intracranial injury
Single-decision | Clinical decision rule | East Riding elbow rule36 | Elbow injury
Single-decision | Clinical decision rule | Clinical prediction rule for scaphoid fractures37 | Fracture of scaphoid bone
Single-decision | Clinical decision rule | Ottawa knee rule38 | Knee injury
Single-decision | Clinical decision rule | Pulmonary embolism rule-out criteria rule39 | Pulmonary embolism
Single-decision | Professional society guideline | USPSTF lung cancer screening recommendation21 | Screening, lung cancer
Single-decision | Clinical decision rule | Clinical decision rule for subarachnoid hemorrhage40 | Subarachnoid hemorrhage
Single-decision | Professional society guideline | Clinical practice guideline: tinnitus41 | Tinnitus
Single-decision | Clinical decision rule | Facial fracture rule42 | Trauma, fracture of facial bone
Single-decision | Clinical decision rule | Mandibular fracture rule43 | Trauma, fracture of mandible
Branching | Clinical decision rule | Quebec shoulder dislocation rule22 | Anterior dislocation of shoulder
Branching | Clinical decision rule | Pediatric Emergency Care Applied Research Network head trauma rule44 | Blunt trauma, intracranial injury, pediatric
Branching | Professional society guideline | American College of Physicians low back pain guideline45 | Low back pain
Score-based | Clinical decision rule | Appendicitis inflammatory response score46 | Abdominal pain, acute appendicitis
Score-based | Clinical decision rule | Alvarado score47 | Abdominal pain, acute appendicitis
Score-based | Clinical decision rule | OPTIMAP score48 | Abdominal pain, acute appendicitis
Score-based | Clinical decision rule | Wells score49 | Deep vein thrombosis
Score-based | Clinical decision rule | STONE score50 | Flank pain, ureteral stone
Score-based | Clinical decision rule | Wells score23 | Pulmonary embolism

DISCUSSION

The evidence library has grown substantially, encompassing a range of recommendation sources for diagnostic imaging. Most of the recommendations are unvalidated, ie, based on expert opinion and lacking critical appraisal. This does not necessarily mean that these recommendations are poor clinical advice or that they deviate from the standard of care, but rather that, in most cases, research to test and validate them is simply lacking. We found that high-quality recommendations exist for only a select few topic areas and imaging modalities.20,23,26,31,33,34,46,47 Recommendations in many areas, even those involving high-cost imaging (eg, lumbar spinal magnetic resonance imaging for the assessment of low back pain), were poorly supported.45

Our approach differs from other valuable content sources for imaging CDS rules, such as those sponsored by various professional societies. Use of a single framework for grading evidence enables comparison of rules addressing similar clinical topics across multiple publishers of evidence, and collaboration among publishers may help accelerate adoption of evidence into practice. Also, in contrast to professional societies, whose content teams create new evidence, our curation team does not create evidence; rather, it grades existing evidence for its strength.

The grades from the evidence library may help to inform best practices for implementation of CDS as well as relevant federal regulations. Future rule-making on implementation of CDS, such as envisioned under PAMA,2 should consider the grade of evidence in setting reimbursement policies. The heterogeneity in the strength of evidence underlying available recommendations raises potential implications for clinical validity or trustworthiness of some of the recommendations, specifically those based on expert opinion and lacking critical appraisal. Broad implementation of imaging recommendations in CDS may thus generate a large number of alerts and disruptions in provider workflows, some of potentially questionable clinical validity. Such an approach is unlikely to reduce inappropriate use of imaging,51 as providers ignore nearly 99% of such alerts,52,53 thus potentially contributing to alert fatigue54 and physician burnout.55,56 On the other hand, transparently graded recommendations may enable providers and payers to prioritize dissemination of higher-grade evidence in their specific CDS implementations to meet PAMA legislation requirements while limiting exposure to lower-grade evidence. Centers for Medicare & Medicaid Services regulations currently require CDS coverage for only 8 priority clinical conditions15 beginning January 2018. The evidence library may help to inform these regulations as to which high-quality recommendations may be potential targets for the next set of priority clinical areas. In addition, the library can identify evidence gaps to motivate research that will further enhance available evidence.

We also found that the identity of the publisher or source of a recommendation is not a useful proxy for the quality of imaging recommendations. In addition, different publishers may issue conflicting recommendations for the same clinical presentation. Thus, a public repository of transparently graded, CDS-consumable evidence from various publishers, as intended by the evidence library, may help inform the choice of recommendations embedded in CDS.

Sources also varied in the number and complexity of recommendations they included and, accordingly, in the structure and syntax of the logic statements they generated. We characterized 3 underlying representations of recommendations: single-decision, branching, and score-based. While alternative methods to represent decision logic exist and could have been used, such as decision tables and visual decision trees,57–59 we found that individual logic statements as described above were the most practical and the clearest for quick comprehension and curation by medical librarians and clinicians. However, while the logic statements in our evidence library are easy to comprehend and grade, only 80% of attributes currently map to SNOMED CT.60 Furthermore, standardization of the representation will have to conform to current CDS standards and initiatives.61

As we continue to populate the evidence library, we plan to refine the grading system used by the curators to capture features that have not yet been taken into account but may be important in determining the quality of a recommendation based on a clinical study. These include study characteristics such as population sample size and outcome measure (eg, cost, accuracy of diagnosis, and long-term mortality). Other features of recommendations that need further study include strength and grade.62 Guidelines are developed by expert panels compiling evidence from various sources; thus, a generalized recommendation for an imaging test may not be fully supported by any single study, even when the test has been proven effective in selected populations. Distinguishing such recommendations from those with no clear supporting studies will entail more work.

Future work

New evidence will be reviewed and added annually, and version control will be used to track new recommendations from the same source. Newly graded evidence from different sources will be added and will coexist with lower-quality evidence already in the library. The goal is transparently graded evidence from various sources that helps inform stakeholders (physicians, payers, patients, medical practices, and institutions) in choosing which recommendations for a clinical topic are ultimately implemented in clinical practice.

We have recently released an app version of the library based on SMART and FHIR, currently available in the SMART App Gallery (https://apps.smarthealthit.org/app/55). The app enables users (physicians or patients) to visualize evidence in the library by entering patient-specific data, and displays relevant evidence-based decision rules to guide appropriate use of medical imaging. Future work is needed to assess the feasibility of integrating the app with health IT vendor solutions in clinical workflow. The graded evidence will ultimately be translated into a standard representation for clinical decision support.
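As an illustration of how patient-specific data might drive rule selection, the sketch below reads a FHIR Patient resource from a hypothetical server, derives the patient's age, and filters rules whose age-based rule criteria apply. The server URL, rule list, and helper names are assumptions for this example and do not describe the released app; only Patient.birthDate is a standard FHIR field.

```python
# Hypothetical FHIR endpoint and rule metadata; only Patient.birthDate is a real FHIR field.
from datetime import date
import requests

FHIR_BASE = "https://example.org/fhir"   # placeholder SMART on FHIR server

def patient_age(patient_id: str) -> int:
    patient = requests.get(f"{FHIR_BASE}/Patient/{patient_id}").json()
    birth = date.fromisoformat(patient["birthDate"])   # assumes a full YYYY-MM-DD birthDate
    today = date.today()
    return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

RULES = [  # minimal, made-up rule metadata for filtering by age-based rule criteria
    {"name": "NEXUS cervical spine rule", "min_age": 1, "max_age": None},
    {"name": "USPSTF lung cancer screening", "min_age": 55, "max_age": 80},
]

def applicable_rules(age: int) -> list:
    return [r["name"] for r in RULES
            if age >= r["min_age"] and (r["max_age"] is None or age <= r["max_age"])]
```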

CONCLUSION

We describe an evidence library, consisting of recommendations from clinical decision rules, professional society clinical practice guidelines, and local best practice guidelines. The quality of evidence varied widely among sources/publishers of imaging recommendations, with clinical decision rules providing the highest-quality imaging recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing the implementation of imaging recommendations to inform CDS implementations intended to promote evidence-based care.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sector.

Competing interests

The authors have no competing interests to declare.

Contributors

All authors contributed to the study design and data acquisition; SV, RL, NV, and RK are responsible for data analysis and interpretation. All authors contributed significant intellectual content during the manuscript preparation and revisions, approved the final version, and accept accountability for the overall integrity of the research process.

ACKNOWLEDGMENTS

The authors would like to thank Ms Laura E. Peterson for reviewing the manuscript.

References

1. Title XIII – Medicare Medicaid Health Information Technology. Medicare Medicaid Heal Inf Technol Misc Medicare Provisions. 2009:353–80.
2. Protecting Access to Medicare Act of 2014. Congressional Record. 2014;160.
3. Khorasani R, Hentel K, Darer J, et al. Ten commandments for effective clinical decision support for imaging: enabling evidence-based practice to improve quality and reduce waste. Am J Roentgenol. 2014;203(5):945–51.
4. Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA. 2013;309(2):139–40.
5. Lacson R, Raja AS, Osterbur D, et al. Assessing strength of evidence of appropriate use criteria for diagnostic imaging examinations. J Am Med Inform Assoc. 2016;23(3):649–53.
6. Hripcsak G. Writing Arden syntax medical logic modules. Comput Biol Med. 1994;24(5):331–63.
7. Samwald M, Fehre K, de Bruin J, Adlassnig KP. The Arden Syntax standard for clinical decision support: experiences and directions. J Biomed Inform. 2012;45(4):711–18.
8. Oxford Centre for Evidence Based Medicine. 2015. Accessed October 5, 2017.
9. Petitti DB, Teutsch SM, Barton MB, et al. Update on the methods of the U.S. Preventive Services Task Force: insufficient evidence. Ann Int Med. 2009;150(3):199–205.
10. Wong SS, Wilczynski NL, Haynes RB, Hedges T. Developing optimal search strategies for detecting clinically relevant qualitative studies in MEDLINE. Stud Health Technol Inform. 2004;107(Pt 1):311–16.
11. Agency for Healthcare Research and Quality. National Guideline Clearinghouse. 2016. http://www.guideline.gov. Accessed March 15, 2016.
12. American College of Radiology. Appropriateness Criteria. 2016. https://acsearch.acr.org/list. Accessed May 17, 2016.
13. American College of Cardiology. Appropriate Use Criteria. 2016.
15. Hentel K, Menard A, Khorasani R. New CMS clinical decision support regulations: a potential opportunity with major challenges. Radiology. 2017;283(1):10–13.
16. ABIM Foundation. Choosing Wisely: An Initiative of the ABIM Foundation. 2016. http://choosingwisely.org. Accessed June 1, 2017.
17. McGinn TG, Guyatt GH, Wyer PC, Naylor CD, Stiell IG, Richardson WS. Users' guides to the medical literature: XXII: how to use articles about clinical decision rules. Evidence-Based Medicine Working Group. JAMA. 2000;284(1):79–84.
18. National Guideline Clearinghouse. Inclusion Criteria. 2014.
19. Bosch M, Tavender E, Bragge P, Gruen R, Green S. How to define ‘best practice’ for use in Knowledge Translation research: a practical, stepped and interactive process. J Eval Clin Pract. 2013;19(5):763–68.
20. Hoffman JR, Mower WR, Wolfson AB, Todd KH, Zucker MI. Validity of a set of clinical criteria to rule out injury to the cervical spine in patients with blunt trauma. National Emergency X-Radiography Utilization Study Group. N Engl J Med. 2000;343(2):94–99.
21. Moyer VA. Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement. Ann Int Med. 2014;160(5):330–38.
22. Emond M, Le Sage N, Lavoie A, Moore L. Refinement of the Quebec decision rule for radiography in shoulder dislocation. CJEM. 2009;11(1):36–43.
23. Wolf SJ, McCubbin TR, Feldhaus KM, Faragher JP, Adcock DM. Prospective validation of Wells Criteria in the evaluation of patients with suspected pulmonary embolism. Ann Emerg Med. 2004;44(5):503–10.
24. Lameris W, van Randen A, van Gulik TM, et al. A clinical decision rule to establish the diagnosis of acute diverticulitis at the emergency department. Dis Colon Rectum. 2010;53(6):896–904.
25. Rosenfeld RM, Piccirillo JF, Chandrasekhar SS, et al. Clinical practice guideline (update): adult sinusitis. Otolaryngol Head Neck Surg. 2015;152(2 Suppl):S1–S39.
26. Stiell I, Wells G, Laupacis A, et al. Multicentre trial to introduce the Ottawa ankle rules for use of radiography in acute ankle injuries. Multicentre Ankle Rule Study Group. BMJ. 1995;311(7005):594–97.
27. Kharbanda AB, Dudley NC, Bajaj L, et al. Validation and refinement of a prediction rule to identify children at low risk for acute appendicitis. Arch Pediatr Adolesc Med. 2012;166(8):738–44.
28. Forouzanfar MM, Safari S, Niazazari M, et al. Clinical decision rule to prevent unnecessary chest X-ray in patients with blunt multiple traumas. Emerg Med Australas. 2014;26(6):561–66.
29. Holmes JF, Wisner DH, McGahan JP, Mower WR, Kuppermann N. Clinical prediction rules for identifying adults at very low risk for intra-abdominal injuries after blunt trauma. Ann Emerg Med. 2009;54(4):575–84.
30. Holmes JF, Lillis K, Monroe D, et al. Identifying children at very low risk of clinically important blunt abdominal injuries. Ann Emerg Med. 2013;62(2):107–16.e2.
31. Coffey F, Hewitt S, Stiell I, et al. Validation of the Canadian c-spine rule in the UK emergency department setting. Emerg Med J. 2011;28(10):873–76.
32. Rodriguez RM, Anglin D, Langdorf MI, et al. NEXUS chest: validation of a decision instrument for selective chest imaging in blunt trauma. JAMA Surg. 2013;148(10):940–46.
33. Mower WR, Hoffman JR, Herbert M, et al. Developing a decision instrument to guide computed tomographic imaging of blunt head injury patients. J Trauma. 2005;59(4):954–59.
34. Haydel MJ, Preston CA, Mills TJ, Luber S, Blaudeau E, DeBlieux PM. Indications for computed tomography in patients with minor head injury. N Engl J Med. 2000;343(2):100–05.
35. Stiell IG, Wells GA, Vandemheen K, et al. The Canadian CT Head Rule for patients with minor head injury. Lancet. 2001;357(9266):1391–96.
36. Arundel D, Williams P, Townend W. Deriving the East Riding Elbow Rule (ER2): a maximally sensitive decision tool for elbow injury. Emerg Med J. 2014;31(5):380–83.
37. Rhemrev SJ, Beeres FJ, van Leerdam RH, Hogervorst M, Ring D. Clinical prediction rule for suspected scaphoid fractures: a prospective cohort study. Injury. 2010;41(10):1026–30.
38. Stiell IG, Greenberg GH, Wells GA, et al. Prospective validation of a decision rule for the use of radiography in acute knee injuries. JAMA. 1996;275(8):611–15.
39. Kline JA, Mitchell AM, Kabrhel C, Richman PB, Courtney DM. Clinical criteria to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism. J Thromb Haemost. 2004;2(8):1247–55.
40. Perry JJ, Stiell IG, Sivilotti ML, et al. Clinical decision rules to rule out subarachnoid hemorrhage for acute headache. JAMA. 2013;310(12):1248–55.
41. Tunkel DE, Bauer CA, Sun GH, et al. Clinical practice guideline: tinnitus. Otolaryngol Head Neck Surg. 2014;151(2 Suppl):S1–S40.
42. Sitzman TJ, Hanson SE, Alsheik NH, Gentry LR, Doyle JF, Gutowski KA. Clinical criteria for obtaining maxillofacial computed tomographic scans in trauma patients. Plast Reconstr Surg. 2011;127(3):1270–78.
43. Charalambous C, Dunning J, Omorphos S, Cleanthous S, Begum P, Mackway-Jones K. A maximally sensitive clinical decision rule to reduce the need for radiography in mandibular trauma. Ann R Coll Surg Engl. 2005;87(4):259–63.
44. Kuppermann N, Holmes JF, Dayan PS, et al. Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet. 2009;374(9696):1160–70.
45. Chou R, Qaseem A, Snow V, et al. Diagnosis and treatment of low back pain: a joint clinical practice guideline from the American College of Physicians and the American Pain Society. Ann Int Med. 2007;147(7):478–91.
46. Scott AJ, Mason SE, Arunakirinathan M, Reissis Y, Kinross JM, Smith JJ. Risk stratification by the Appendicitis Inflammatory Response score to guide decision-making in patients with suspected appendicitis. Br J Surg. 2015;102(5):563–72.
47. McKay R, Shepherd J. The use of the clinical scoring system by Alvarado in the decision to perform computed tomography for acute appendicitis in the ED. Am J Emerg Med. 2007;25(5):489–93.
48. Leeuwenburgh MM, Stockmann HB, Bouma WH, et al. A simple clinical decision rule to rule out appendicitis in patients with nondiagnostic ultrasound results. Acad Emerg Med. 2014;21(5):488–96.
49. Scarvelis D, Wells PS. Diagnosis and treatment of deep-vein thrombosis. CMAJ. 2006;175(9):1087–92.
50. Moore CL, Bomann S, Daniels B, et al. Derivation and validation of a clinical prediction rule for uncomplicated ureteral stone – the STONE score: retrospective and prospective observational cohort studies. BMJ. 2014;348:g2191.
51. Centers for Medicare & Medicaid Services. Medicare Imaging Demonstration. 2015. Accessed October 5, 2017.
52. Ip IK, Lacson R, Hentel K, et al. Journal club: predictors of provider response to clinical decision support: lessons learned from the Medicare imaging demonstration. Am J Roentgenol. 2017;208(2):351–57.
53. Lacson R, Ip I, Hentel KD, et al. Medicare imaging demonstration: assessing attributes of appropriate use criteria and their influence on ordering behavior. Am J Roentgenol. 2017;208(5):1051–57.
54. Ash JS, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc. 2007;11:26–30.
55. Shanafelt TD, Noseworthy JH. Executive leadership and physician well-being: nine organizational strategies to promote engagement and reduce burnout. Mayo Clin Proc. 2017;92(1):129–46.
56. Noseworthy JH, Madara J, Cosgrove D, et al. Physician Burnout Is A Public Health Crisis: A Message To Our Fellow Health Care CEOs. 2017.
57. Shiffman RN, Karras BT, Agrawal A, Chen R, Marenco L, Nath S. GEM: a proposal for a more comprehensive guideline document model using XML. J Am Med Inform Assoc. 2000;7(5):488–98.
58. Shiffman RN. Representation of clinical practice guidelines in conventional and augmented decision tables. J Am Med Inform Assoc. 1997;4(5):382–93.
59. Peleg M, Boxwala AA, Ogunyemi O, et al. GLIF3: the evolution of a guideline representation format. AMIA Annu Symp Proc. 2000:645–49.
60. Yan Z, Lacson R, Ip I, et al. Evaluating terminologies to enable imaging-related decision rule sharing. AMIA Annu Symp Proc. 2016;2016:2082–89.