953. Online Medical Education Improves Knowledge of Monoclonal Antibody Treatment for COVID-19 Among Physicians

Background. Treatments aimed at patients with mild to moderate COVID-19 offer an opportunity to reduce rates of hospitalization and progression to severe disease. The aim of this study was to assess the educational impact of a series of continuing medical education (CME) activities on the knowledge, competence, and confidence of primary care (PCP), infectious disease (ID), and ER/critical care physicians regarding the management of COVID-19 with monoclonal antibody (mAb) therapy.
Methods. The educational series consisted of 9 online CME activities in multiple formats. At the individual activity level, educational effect was assessed with a repeated-pairs pre-/post-assessment study consisting of a 3-item, multiple-choice knowledge/competence questionnaire and one confidence assessment question, with each participant serving as his/her own control. To assess changes in knowledge, competence, and confidence, data from all clinicians who completed both pre- and post-questions were aggregated across activities and stratified by learning theme. McNemar's test (P < .05) assessed educational effect. Data were collected from December 2020 to May 2021.
Results. To date, the 9 activities have reached over 24,000 physicians. Selected improvements in knowledge and competence, measured as relative % change in correct responses pre/post education across the learning themes, are reported here. (i) 45% improvement in PCPs' and a 31% improvement in ID specialists' knowledge/competence in identifying patients who would benefit from mAbs (P < .01). (ii) 83% improvement in PCPs' and a 42% improvement in ID specialists' confidence in identifying patients who would benefit from mAbs (P < .001). (iii) 15% improvement in ID specialists' knowledge/competence regarding the clinical data on mAbs for COVID-19 (P < .001). (iv) 32% improvement in PCPs' knowledge/competence in understanding the mechanism of action (MOA) of mAbs for COVID-19 (P < .001).
Conclusion. This series of online, CME-certified educational activities delivered in multiple formats resulted in significant improvements in knowledge and competence regarding the management of patients with mild to moderate COVID-19. This analysis also uncovered remaining educational gaps: 55% of content related to identifying patients who would benefit from mAbs was not retained post-education.
Disclosures. All Authors: No reported disclosures
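The Methods above pair each participant's pre- and post-assessment responses and apply McNemar's test, reporting improvement as the relative % change in correct responses. A minimal standard-library sketch of that paired computation is shown below; the response vectors and counts are invented for illustration and are not the study data, and the chi-square (1 df) p-value is obtained via the complementary error function rather than a statistics package.

```python
from math import erfc, sqrt

def mcnemar(pre: list[bool], post: list[bool]) -> tuple[float, float]:
    """McNemar's test for paired pre/post binary (correct/incorrect) responses.

    b = participants correct pre but incorrect post,
    c = participants incorrect pre but correct post.
    Returns (chi-square statistic with 1 df, two-sided p-value).
    """
    b = sum(1 for p, q in zip(pre, post) if p and not q)
    c = sum(1 for p, q in zip(pre, post) if not p and q)
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p_value = erfc(sqrt(stat / 2))
    return stat, p_value

def relative_improvement(pre: list[bool], post: list[bool]) -> float:
    """Relative % change in correct responses, pre vs. post."""
    return 100 * (sum(post) - sum(pre)) / sum(pre)
```

For example, with hypothetical paired data in which 40 of 100 participants answer correctly pre-education and 58 post-education, `relative_improvement` returns 45.0 and `mcnemar` yields a p-value well below .001.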

Framework for classifying consults into 7 types
Methods. A randomly selected sample of 100 de-identified infectious diseases (ID) consult requests from a single academic center was independently coded as 1 of the 7 consultation types by 3 ID specialists and 3 hospitalists. Perfect concordance (6/6 coders) and partial concordance (4/6 or 5/6 coders) were calculated. Total (3/3 coders) and partial (2/3 coders) concordance by consult subtype and provider specialty were also calculated. We compared proportions between groups using a chi-square test.
Results. Perfect concordance was 30%, and partial concordance was 60% (Figure 1). Total concordance among ID specialists was 44% and among hospitalists was 54% (Table 2). In cases without perfect concordance (n=70), ID specialists had 20% total concordance and 70% partial concordance, while hospitalists had 34% total concordance and 59% partial concordance. ID specialists were less likely than hospitalists to have perfect concordance for ideal consults (52% vs 73%, P=0.01). ID specialists and hospitalists were similarly likely to classify a consult as ideal (65% vs 69%, P=0.34), but ID specialists were more likely to classify a consult as S.O.S. (25% vs 17%, P=0.02) and less likely to classify a consult as confirmatory (3% vs 7%, P=0.02) (Table 3).
Conclusion. ID consults can be classified into a novel rubric of 7 subtypes. Overall, partial or perfect concordance between hospitalists and ID consultants was 90%. ID specialists were more likely than hospitalists to classify consult requests as S.O.S., and hospitalists were more likely to classify consults as confirmatory. Opportunities exist to utilize the rubric to improve provider communication and interprofessional education.
Disclosures. All Authors: No reported disclosures
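The perfect (all coders agree) and partial (4/6 or 5/6 coders agree) concordance rates described in the Methods reduce to a simple tally over the modal label per case. The sketch below illustrates that tally; the consult-type labels and the four example cases are hypothetical, not the study data.

```python
from collections import Counter

def agreement_level(labels: list[str]) -> int:
    """Number of coders who assigned the most common consult type for one case."""
    return Counter(labels).most_common(1)[0][1]

def concordance_rates(cases: list[list[str]]) -> tuple[float, float]:
    """Perfect concordance: all k coders agree on a case.
    Partial concordance: all but one or two coders agree (4/6 or 5/6 for k=6).
    Returns (perfect rate, partial rate) across cases.
    """
    n = len(cases)
    k = len(cases[0])  # coders per case
    perfect = sum(1 for c in cases if agreement_level(c) == k)
    partial = sum(1 for c in cases if k - 2 <= agreement_level(c) < k)
    return perfect / n, partial / n
```

With 6 coders per case, a case labeled identically by all 6 counts toward perfect concordance, a case where only 4 or 5 share the modal label counts toward partial concordance, and anything below that counts toward neither.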

Session: P-54. Infectious Diseases Medical Education
Background. Critical Access Hospitals (CAHs) may face challenges with limited resources in their infection prevention and control (IPC) programs. As part of the Project Firstline collaborative, the University of Nebraska Medical Center and its clinical partner Nebraska Medicine sought to identify needs and develop resources to mitigate IPC program gaps in small and rural hospitals, including CAHs. Since little is known about the resources needed by CAHs to strengthen their IPC programs, a needs assessment survey was deployed to Federal Emergency Management Agency Region VII CAHs.

Methods. A 49-question Research Electronic Data Capture (REDCap) survey was distributed via email to infection preventionists in Region VII CAHs. The survey had 4 sections with questions focused on IPC program infrastructure, competency-based training, audit and feedback, and identification and isolation of high-risk pathogens/serious communicable diseases. An IPC practice score was assigned to each CAH by totaling "yes" responses; a "no" or "not sure" response was considered an IPC gap. Respondents who selected "no" were asked to identify resources that would assist in mitigating identified gaps. Descriptive analyses evaluated the frequency of gaps and the most cited resources. Welch's t-test was used to study differences in IPC practice scores between states.
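The scoring and between-state comparison described in the Methods can be sketched with the standard library alone. The survey responses and state score lists below are invented for illustration; the sketch computes the Welch t statistic and Welch–Satterthwaite degrees of freedom only (a p-value would additionally require a t-distribution CDF, e.g. `scipy.stats.ttest_ind(..., equal_var=False)`).

```python
from statistics import mean, variance

def ipc_score(responses: list[str]) -> int:
    """IPC practice score: one point per "yes"; "no" or "not sure" count as gaps."""
    return sum(r == "yes" for r in responses)

def welch_t(a: list[float], b: list[float]) -> tuple[float, float]:
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a) / na, variance(b) / nb  # squared standard errors
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df
```

Each hospital's 4-section questionnaire is reduced to a single score, and the two state score vectors are then compared without assuming equal variances, which is the reason Welch's variant is preferred over the pooled-variance t-test here.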
Results. Fifty CAHs (33 in NE, 16 in IA, and 1 in KS) and 1 small NE hospital (not licensed as a CAH but included in the analyses as a CAH) participated in the survey. The majority (n=38) responded to all sections, with IPC scores ranging from 13 to 48. There was no significant difference between the IPC practice scores of CAHs in NE and IA (average score 33 vs 36; P = 0.38). Specific IPC practice gaps present in > 50% of CAHs were related to audit and feedback practices (Table 1). Additional gaps included lack of a drug diversion program, absence of input from the IPC team prior to purchasing equipment, and failure to conduct a risk assessment for the laboratory. Most CAHs cited a standardized audit tool and staff training materials as much-needed resources (Table 1).