How can artificial intelligence decrease cognitive and work burden for front line practitioners?

Abstract Artificial intelligence (AI) has tremendous potential to improve the cognitive and work burden of clinicians across a range of clinical activities, which could lead to reduced burnout and better clinical care. The recent explosion of generative AI nicely illustrates this potential. Developers and organizations deploying AI have a responsibility to ensure AI is designed and implemented with end-user input, has mechanisms to identify and potentially reduce bias, and that the impact on cognitive and work burden is measured, monitored, and improved. This article focuses specifically on the role AI can play in reducing cognitive and work burden, outlines the critical issues associated with the use of AI, and serves as a call to action for vendors and users to work together to develop functionality that addresses these challenges.


Introduction
To deliver safe care, physicians and other healthcare workers need to quickly develop an understanding of the patient, their context (ie, medical and social history), and the details of their current care. The information and technological environment in which care is delivered can either facilitate or impede the rapid acquisition of this information. The vast and growing volume of electronic data confronts clinicians with overwhelming, often conflicting information that they must synthesize to make optimal care decisions.
Artificial intelligence (AI) is being aggressively and rapidly developed and implemented across US healthcare. Electronic health record (EHR) vendors are now building AI into their products, as are other vendors such as clinical data warehouse vendors and large technology companies. Often, AI deployment is focused on cost savings, reducing complications, improving care quality, or drug development. Recently, the ONC published new detailed guidance on AI and decision support interventions to improve transparency, promote trustworthiness, and incentivize the development and wider use of fair, appropriate, valid, effective, and safe decision support for patients.1 However, minimal AI development or deployment has focused on its impact on workload or cognitive burden for frontline workers,2,3 an issue that has become a primary concern due to severe staffing shortages among all types of healthcare workers. This article reviews critical issues for the use of AI to reduce cognitive and work burden for frontline healthcare workers.

Will AI hinder or help?
As with all people, clinicians have a limited mental bandwidth with which they can meet any situation. Cognitive load is the load imposed on our working memory by a particular task. When cognitively overloaded, our brain processing slows, we incur attentional blinks or blind spots, and we make more errors. Cognitive load theory identifies limitations in the working memory that humans depend on to perform cognitive tasks.4 Cognitive load is the amount of working memory used and is determined by:
• How intrinsically difficult the task is (intrinsic load);
• How skilled we are at the task; the more skilled we are, the less drain on our working memory (ie, the less the germane load);
• How much work it takes to obtain and process the critical pieces of information needed to complete the task (ie, extraneous load, often impacted by environmental and systems factors as well as the way information is presented).
Understanding these types of load, as well as their impact on working memory, is essential,5 in particular because higher cognitive load is associated with increased levels of clinician burnout.6,7 When effectively designed and implemented, AI has the potential to reduce the clinician's cognitive burden. For this discussion, AI includes the broader group of capabilities in which computer systems are developed to mimic human behaviors or enhance human capabilities such as visual perception, speech recognition, decision-making, and translation between languages. Machine learning is a subset of AI that uses statistical techniques to identify patterns in data and create new insights and workflow optimizations.
AI, when done well, usually falls into 1 of 2 categories: (1) it makes it easier to do what you already want to do (or even does it for you); or (2) it disrupts your process when you are about to do something you don't want to do or shouldn't do (eg, alerting you when you are about to administer a medication to which the patient is allergic). When combined with well-designed and well-implemented technology, AI has the potential to reduce extraneous load, making it easier to quickly review and assimilate the critical information needed to perform the tasks required for safe patient care.
Alternatively, AI and technology that fragment information, create distractions, or otherwise impede the synthesis of critical knowledge will add to extraneous load and increase the likelihood that the healthcare worker will make unsafe clinical decisions. This has been the challenge of clinical decision support, which has been associated with alert fatigue and other workflow disruptions. AI tools must be well implemented and monitored for unintended consequences to optimize their benefits.

AI's potential to decrease clinician cognitive and work burden
Decreasing the cognitive workload and work burden that our frontline healthcare workers experience on a daily basis, while maintaining or improving the quality and safety of care, requires improving time-consuming clinical processes that are often performed manually by busy healthcare workers.
There are many case examples that delineate the value of AI in facilitating clinicians' cognitive functions and clinical processes, such as data gathering, synthesis, documentation, and taking appropriate action (Table 1). For example, advances in natural language processing have enabled the ability to parse key data elements out of unstructured text; this is a major contributor to the data gathering and synthesis process and can be leveraged in applications to improve clinicians' workflows. The recent release of ChatGPT-4 and other large language models, such as Microsoft's Bing chatbot, shows that AI can play a role in reducing the burden on frontline providers.8 Indeed, a major EHR vendor recently announced the incorporation of ChatGPT-4 into its EHR product to help reduce the work burden on frontline clinicians.9

Design and implementation
AI holds the power to rapidly synthesize large datasets from clinical assessments, physiologic observations, and documentation to derive a score faster than the human brain. Much like any other assessment or clinical tool used to assess risk and guide interventions, clinicians must understand the output of such tools to inform their plan of care, in combination with other inputs such as real-time clinical assessments and patient preferences. Clinicians must also be provided appropriate product labeling and methodology disclosures by AI tool developers.
Successfully implementing AI and machine learning to reduce frontline cognitive and workflow burden requires engaging both internal and external stakeholders. AI tools can be designed by vendors working with health systems, or by health systems themselves. Frontline staff engagement is essential for user-centered design, implementation, and adaptation of the new practice, both for implementation success and for reduction of cognitive burden, particularly because in healthcare, software developers are not the system end users. These frontline stakeholders include clinical care delivery personnel and frontline technical experts in health IT, bioengineering, system redesign, and research. Staff engagement will only succeed if it is supported by frontline management and organizational leaders.
In addition, successful implementation requires organizations to design for sustainability from the start, empowering local stakeholders to be actively involved in all aspects of planning, delivering, measuring, and refining a practice so they feel ownership of, and understand the value of, the practice.This leads to more success in adapting the practice to their context and identifying the means to sustain a practice once initial funding is expended.
Frontline design must also be supported by key external stakeholders, who in this case are EHR vendors, clinical data warehouse vendors, command center vendors, patient advocates, and researchers. How might these stakeholders work together? A novel set of health IT standards may offer such a model. These standards, developed by the Association for the Advancement of Medical Instrumentation (AAMI), are called software life cycle standards because they apply to every stage of software, from development to implementation to ongoing use, maintenance, and ultimately retirement, based on a successful model in aviation software regulation.10 These standards outline the critical role of frontline users in software development, as opposed to traditional standards that focused only on software developers' roles in software design. They also address the critical role of human factors in software development, implementation, and use, defining and outlining best practices for conducting user-centered design. Ensuring frontline involvement at all stages of the software life cycle will require establishing successful collaborative partnerships among internal and key external stakeholders. As one example of data gathering, synthesis, and taking action, a computer-assisted management program for antibiotics and other anti-infective agents improved both patient and antibiotic outcomes.17

Of note, a significant challenge associated with designing and implementing AI is the lack of data standards within health systems, among vendors, and at the national level. Most automated synthesis in healthcare today requires standardization, but that standardization can significantly reduce data fidelity, a challenge that is unavoidable but must not be ignored. For example, categories of nursing units are important data that likely have not been a focus for standardization. Yet in care models, it is important to differentiate a step-down unit from an intensive care unit, or a neonatal intensive care unit from the nursery.

Monitoring performance related to cognitive burden
Whenever new technologies or treatments are deployed, there is a need to constantly assess performance and unintended consequences, and, when problems are identified, to explore the reasons for these events and proactively address their root causes. Known best practices for monitoring AI include scheduled review of AI algorithm content and performance, monitoring data inputs, identifying drift, and assessing key performance indicators.
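Drift identification, one of the monitoring practices above, can be sketched as a simple comparison between the distribution a model input had at training time and its distribution after deployment. The sketch below uses the Population Stability Index (PSI), one common drift metric; the variable names, the example input (a vital sign), and the 0.2 alert threshold are illustrative assumptions, not part of this article.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a model input's current distribution against its
    baseline distribution; a larger PSI indicates more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Proportion of observations per bin, floored to avoid log(0).
    e_pct = np.maximum(np.histogram(expected, bins=edges)[0] / len(expected), 1e-6)
    a_pct = np.maximum(np.histogram(actual, bins=edges)[0] / len(actual), 1e-6)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(120, 15, 5000)  # eg, systolic BP at model training time
current = rng.normal(135, 15, 5000)   # drifted inputs after deployment
psi = population_stability_index(baseline, current)
# A common rule of thumb: PSI > 0.2 warrants investigation.
if psi > 0.2:
    print(f"Input drift detected (PSI={psi:.2f}); review model inputs.")
```

In practice, such a check would run on a schedule against each model input, with alerts routed to the team responsible for the algorithm's upkeep.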
When implementing AI to reduce clinicians' cognitive burden, health systems and vendors should measure and monitor key performance indicators. Reliable and accessible measures of cognitive load and workload are necessary to understand the baseline state and to determine whether AI-assisted interventions have resulted in improvement (Table 2). Non-obtrusive metrics, when possible, are preferable to surveys and self-recordings of time, to avoid placing additional burden on healthcare professionals. Fortunately, multiple non-invasive workload measures are currently available, such as EHR event log data and time-motion observations.11,12 In addition, cognitive load should be measured at multiple points in the development life cycle (project inception, pre-model development, pre-deployment, deployment).3 Lastly, in order to fully realize the benefits of AI for cognitive burden, it will be essential to understand the return on investment of AI and to avoid knee-jerk reactions that leverage AI to justify having fewer staff. In some cases, reducing staff may be appropriate; in others, it will only add work for those left behind and increase their cognitive loads yet again, counteracting the potential benefits. Organizations will need to build robust measurement and monitoring methods to ensure that they are truly optimizing the impact on their staff, and not just focusing on cost savings.
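As a minimal illustration of a non-obtrusive workload measure derived from EHR event log data, the sketch below computes each clinician's share of "after-hours" EHR activity, one proxy for documentation burden. The log schema, the clinician identifiers, and the assumed 07:00-19:00 scheduled window are all hypothetical; real vendor audit logs differ in structure and would need mapping to this shape.

```python
import pandas as pd

# Hypothetical EHR event log: one row per audit event, with the
# clinician ID and timestamp. Real vendor logs use different schemas.
events = pd.DataFrame({
    "clinician_id": ["a", "a", "a", "b", "b"],
    "timestamp": pd.to_datetime([
        "2024-03-04 09:15", "2024-03-04 14:30", "2024-03-04 21:40",
        "2024-03-04 10:05", "2024-03-04 19:30",
    ]),
})

# Flag events outside assumed scheduled hours (07:00-19:00) as
# "work outside work," then compute the after-hours share per clinician.
hour = events["timestamp"].dt.hour
events["after_hours"] = (hour < 7) | (hour >= 19)
burden = events.groupby("clinician_id")["after_hours"].mean()
print(burden)
```

Tracked before and after an AI deployment, a falling after-hours share would be one signal, among the Table 2 measures, that the tool is actually reducing burden rather than shifting it.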

Addressing bias
Bias and inequities exist in healthcare. AI has the potential to reinforce, as well as help mitigate, those biases. Bias can originate from the data, from those using the data or developing algorithms, or from inappropriate usage of an algorithm's results.
AI bias is usually derived from statistical bias in existing data; for instance, biases that already exist, either in sample size or in human bias in data collection or actions, become encoded in an algorithm. AI-based algorithms are derived and validated using patient datasets that may have limited size and diversity. This places an even greater importance on active monitoring and surveillance of outcomes after AI-based technologies have been developed, validated, and deployed. Real-world data, leading to real-world evidence, can provide a feedback loop to developers, allowing them to refine and adjust AI algorithms when appropriate.13 Another critical issue is that AI algorithms often include demographic factors such as age, race, ethnicity, gender, and religion in their risk assessments. However, when used to identify eligibility for essential healthcare products or services in limited supply, such as life-saving medications or surgeries, AI algorithms can worsen health disparities. To address this, it is important to understand the underlying reasons why demographic or lifestyle factors are associated with adverse outcomes, attempt, as much as possible, to remove algorithmic biases, and ensure that the AI application is appropriate and ethical. Ultimately, it is important to have transparency about what the algorithm includes and to publish results and performance metrics. For instance, developers could build multiple models (eg, one with demographics and one without) to better understand the potential presence of bias in the algorithm. As healthcare examines and segments various sources of data to identify bias, it will be necessary to continue to monitor those metrics after AI implementation.
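The two-model idea above, fitting one model with demographic features and one without and comparing their performance, can be sketched on synthetic data. Everything here (the cohort, features, and outcome) is invented for illustration; a real analysis would also compare performance within demographic subgroups, not just overall.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
# Synthetic cohort: two clinical features plus one demographic flag.
clinical = rng.normal(size=(n, 2))
demographic = rng.integers(0, 2, size=n)
# In this toy example the outcome depends only on the clinical features.
logits = clinical @ np.array([1.0, -0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

auc = {}
for name, X in [
    ("with_demographics", np.column_stack([clinical, demographic])),
    ("without_demographics", clinical),
]:
    model = LogisticRegression().fit(X, y)
    auc[name] = roc_auc_score(y, model.predict_proba(X)[:, 1])

# If the two AUCs diverge, the demographic feature is carrying
# predictive weight and deserves scrutiny for encoded bias.
print(auc)
```

Publishing such paired metrics alongside the deployed model is one concrete form of the transparency the text calls for.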
Importantly, AI can also be part of the solution to bias. For example, when inequities exist, such as differential referral patterns for patients of different racial or ethnic groups with similar symptoms, AI can be designed to reduce those inequities and de-bias the referral patterns.

Conclusion
With recent developments in generative AI and other AI applications in healthcare, we are just at the beginning of the AI revolution, and substantial contributions to healthcare are coming. However, the use of AI to reduce cognitive and work burden must be a priority. To take advantage of AI's full potential and to minimize potential downsides, vendors and users must work together to develop AI functionality that addresses these issues. It is time to develop a pilot version of each of the 3 leading EHR vendors' software that operationalizes the many advances in reducing cognitive and work burden outlined in this article, and to conduct a rigorous evaluation by a well-established research team, with public grant funding, that coordinates cross-site evaluation. The evaluation should include both inpatient and ambulatory use cases. Both research results and functionality content should be published and widely shared. Developers and organizations deploying AI have a responsibility to ensure that it is designed and implemented with end-user input, that it has mechanisms to identify and reduce bias, and that its impact on cognitive and work burden is measured and evaluated.

Table 1.
Clinical and cognitive processes potentially impacted by AI.

Table 2.
Measures of cognitive load and workload.
• The NASA Task Load Index measures cognitive load and includes dimensions of mental, physical, and temporal demand, as well as effort, performance, and frustration level.18
• The System Usability Scale (SUS) has been used to assess technology across multiple sectors, including healthcare.19
• Direct observation
• Time-motion analysis20
• Embedded measures of eye movements
• Numbers and types of interruptions (hard stop vs interruptive vs passive)
• End-user surveys of consequences of cognitive and work overload
• Employee engagement surveys that measure resilience, decompression, and activation
• Measurement of burnout,21 work intentions,22 and career regret23
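As a small worked example of one of these measures, the raw ("RTLX") scoring variant of the NASA Task Load Index simply averages the six subscale ratings, omitting the instrument's optional pairwise-comparison weighting step. The ratings below are hypothetical.

```python
# Raw TLX ("RTLX"): the unweighted mean of the six subscale
# ratings, each on a 0-100 scale. Ratings here are hypothetical.
ratings = {
    "mental demand": 70,
    "physical demand": 20,
    "temporal demand": 60,
    "performance": 40,
    "effort": 65,
    "frustration": 55,
}
rtlx = sum(ratings.values()) / len(ratings)
print(f"RTLX workload score: {rtlx:.1f}")  # higher = greater perceived load
```

Administered before and after an AI intervention, a score like this provides one simple baseline-versus-follow-up comparison of perceived workload.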