Hilel Frankenthal, Izhar Ben Shlomo, Yael Kurzweil Segev, Ilan Bubil, Alon K., Dina Orkin, Ayala Kobo Greenhut, Perceived reliability of medical device alarms—a major determinant of medical errors driven by frozen medical thinking, International Journal for Quality in Health Care, Volume 34, Issue 1, 2022, mzac009, https://doi.org/10.1093/intqhc/mzac009
Abstract
This concept paper introduces the phenomenon of self-assigning a ‘perceived reliability’ value to medical device readings as a potential source of cognitive bias in medical decision-making. Medical errors can result from clinical decisions based on partial clinical data despite medical device readings providing data to the contrary. At times, this results from clinician distrust of medical device output. Consequently, clinicians engage in a form of ‘frozen thinking’, a fixation on a particular thought process despite data to the contrary. Many medical devices, such as intensive care unit (ICU) monitors and alarms, lack validated statistics of device output reliability and validity. In their absence, clinicians assign a self-perceived reliability value to device output data and base clinical decisions on it. When the perceived reliability value is low, clinicians distrust the device and ignore its readings, especially when other clinical data are contrary.
We explore the cognitive and theoretical underpinnings of this ‘perceived reliability’ phenomenon. The mental assignment of a perceived reliability value stems from principles of ‘script theory’ of medical decision-making. In this conceptual framework, clinicians make decisions by comparing current situations to mental ‘scripts’ of prior clinical decisions and their outcomes. As such, the clinician utilizes scripts of prior experiences to create the perceived reliability value.
Self-assigned perceived reliability is subject to multiple dangers of reliability and cognitive biases. Some of these biases are presented. Among these is the danger of dismissing device readings as ‘noise’. This is particularly true of ICU alarms that can emit frequent false alarms and contribute to clinician sensory overload. The cognitive dangers of this ‘noise dismissal’ are elaborated via its similarity to the phenomenon of ‘spatial disorientation’ among aviation pilots.
We conclude with suggestions for reducing the potential bias of ‘perceived reliability’. First presented are regulatory/legislative and industry-based interventions for increasing the study of, and end-user access to, validated device output reliability statistics. Subsequently, we propose strategies for overcoming and preventing this phenomenon. We close with suggestions for future research and development of this ‘perceived reliability’ phenomenon.
Medical errors cause nearly 10% of all hospital-related deaths [1]. Frozen thinking, a potentially preventable cause of error [2], occurs when a care provider or team remains fixed in a particular clinical thought process, at times despite extant data to the contrary. This can occur at any point in the clinical care process: the diagnostic approach, establishing a diagnosis, and developing and managing a care plan. Frozen thinking results from multiple sources of cognitive bias, including distrust of measuring device output and device alarms, which are ignored because of their incongruence with the medical staff's clinical assessment. We suggest that device distrust stems from medical staff assigning the device a poor ‘perceived reliability’ value irrespective of the device's objective reliability. Consequently, device output may be mistakenly dismissed as erroneous and unreliable. We discuss multiple factors that directly influence this distrust and perceived device unreliability. Of note, throughout this paper we use the term ‘reliability’ to refer to the reproducibility and validity of the device output rather than to the ‘probability that an item will carry out its function satisfactorily for the stated period when used according to the specified conditions’ [3].
Continuous clinical monitoring devices and alarms are scaled for high sensitivity at the expense of specificity. The result is frequent alarm activation, which obliges clinicians to distinguish true from false alarms [4]. The increased probability of false-positive alarms reduces care providers' alarm response rates: care providers match their response rates to the anticipated probability of a true-positive result [5]. When the expected probability of false alarms is high, the clinician may deem the device output suspect, and it becomes merely an additional source of data to be considered as part of a multi-factored decision [6]. Receiver operating characteristic (ROC) curves were designed to assist this decision-making and reduce the number of false positives by balancing sensitivity against specificity [7]. However, alarms attached to continuous monitoring devices infrequently undergo thorough ROC evaluation. Thus, the clinician must make clinical decisions based on a self-assigned measure of device output reliability, such as a perceived ROC value, along with additional correlating data [6]. How, in the absence of objective data, does a clinician arrive at a self-assigned reliability value? By drawing upon mental scripts of prior experiences.
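The sensitivity–specificity trade-off described above can be made concrete with Bayes' rule. The following sketch, using purely hypothetical numbers (no real device is characterized here), shows why even a highly sensitive alarm yields mostly false positives when the monitored event is rare, which is precisely the condition that erodes clinician trust:

```python
# Illustrative sketch with hypothetical numbers: the positive predictive
# value of an alarm, i.e. P(true event | alarm fired), via Bayes' rule.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that an alarm reflects a true event."""
    true_pos = sensitivity * prevalence                 # event occurs, alarm fires
    false_pos = (1.0 - specificity) * (1.0 - prevalence)  # no event, alarm fires
    return true_pos / (true_pos + false_pos)

# A hypothetical alarm tuned for 99% sensitivity at 85% specificity,
# monitoring an event present in 1% of measurement windows:
ppv = positive_predictive_value(0.99, 0.85, 0.01)
print(f"P(event | alarm) = {ppv:.2f}")  # ~0.06, i.e. ~94% of alarms are false
```

Under these assumed figures, a clinician who learns that only about 1 alarm in 16 is true has an empirical basis for the low perceived reliability the paper describes.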
Script theory posits that an individual utilizes ‘[mental] set[s] [i.e., scripts] of interconnected concepts… to make predictions about how a particular event or sequence of events is likely to play out’. Each script is developed from prior experiences. Upon encountering decision points, the individual compares the current situation to prior scripts, looking for matches and discrepancies. Experts have a greater script repertoire and make better and more rapid decisions, in part because of their ability to analyze a situation through the lens of that repertoire [8–10]. Additionally, experts use prior scripts to create a mental simulation of various courses of action and their consequences, and the expert clinician selects the course of action that seems most appropriate for the given situation [11, 12]. The use of scripts in clinical decision-making was demonstrated in a number of qualitative studies in the 1990s [13–15] and is the source of extensive curricular efforts to improve the teaching of clinical reasoning in medical education [10, 12, 16]. As such, when responding to medical device output, the clinician intuitively self-assigns a device reliability value by matching prior experiences and clinical scripts to the current clinical situation and its device output signals. This is seemingly no different from any other clinical decision. However, the clinician is unable to establish criterion-related evidence of validity by cross-checking the self-assigned reliability value against objective validated data.
The self-assigning process is further suspect because of its pervasive, inherent sources of reliability bias. Clinicians may use parallel sources of clinical data to corroborate or dismiss a device output measurement; this can provide a form of construct-related evidence. They may re-measure device output to try to establish repeatability. They may consult another clinician to try to establish an ad hoc inter-observer correlation. However, the process lacks formal oversight and correlational studies (i.e. internal validity), and its generalizability (i.e. external validity) is questionable given the lack of validated objective data. Without formal analysis, device output may also be unreliable due to errors of measurement [17]. Beyond these statistical concerns, multiple cognitive sources of bias can easily skew any derived conclusions. One such source stems from dismissing signals as ‘noise’ in an attempt to arrive at a rapid decision. Erroneous ‘noise dismissal’ is influenced by the weight assigned to scripts and their outcomes, a weight heavily shaped by associated emotions, especially negative ones. The individual's current emotional state likewise interferes with signal processing [18]. Lastly, signals emitted by devices perceived as less reliable, when incongruous with other signals, may be subconsciously ‘cancelled’ as ‘noise reduction’ even before reaching consciousness, through a process called pre-attentive gating [19]. This distrust is amplified by the high rate of false positivity associated with many clinical alarms, especially in multi-device environments such as the intensive care unit (ICU) and operating room (OR).
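The ad hoc inter-observer check mentioned above, two clinicians independently labelling the same alarms, can in principle be formalized. A minimal sketch, using invented labels for ten hypothetical alarms, computes Cohen's kappa, the standard chance-corrected agreement statistic such a check would approximate informally:

```python
# Minimal sketch with assumed data: Cohen's kappa for two raters
# independently labelling alarms as true (1) or false (0).

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two binary raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n                      # fraction A called "true alarm"
    p_b = sum(rater_b) / n                      # fraction B called "true alarm"
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical labels for ten alarms:
a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
b = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58: moderate agreement
```

The point of the sketch is the gap the paper identifies: at the bedside this statistic is never actually computed, so the "inter-observer correlation" remains an unquantified impression.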
Bedside clinicians in the ICU and OR experience sensory and cognitive overload as they factor in each alarm's relative importance and true or false value. Here, all the clinical and multi-device output data converge to one decision: ‘believe it or not?’ [20]. In situations of conflicting signals, a clinician may elect to rely on certain clinical findings despite device signals to the contrary. Aviation science offers an extreme example of this situation in a phenomenon termed spatial disorientation (SD) (‘pilots' vertigo’), in which pilots choose their vestibular sense over the overabundance of automated outputs in sophisticated aircraft [21, 22]. In flight, pilots inherently lose their ability to sense position in space from their vestibular and musculoskeletal systems and are forced to rely solely on their visual system. When this too is lost, as in situations of poor visibility, pilots must resort to instrument flying, relying on instrument data despite sensory data to the contrary. SD has plagued the aviation industry for decades; numerous attempts at improved simulation training and technological changes to aircraft have had minimal success. The problem is recognized as being multi-factorial and situation dependent, hence the challenge of finding a solution [21]. Over the last two decades, a tactile-based vest has shown much promise in helping pilots regain their sense of position through tactile stimulation [23, 24]. The multi-factorial and situational nature of SD is akin to the challenge of trusting medical device output, and ICU alarms in particular. Both situations are high stakes and require decisiveness and action despite their associated ambiguities. Both settings can involve long work hours, high stress, sensory overload and multiple simultaneous technological data sources. The sensory overload of ICU alarms has been studied extensively as one of the causes of alarm fatigue and distrust [25, 26].
Additionally, in congruence with the SD phenomenon, the authors previously presented the dangerous potential consequences of selecting clinical inputs over device data when the two are seemingly contradictory [2]. Nonetheless, Bitan et al.'s NICU study demonstrates that, at times, expert clinicians are capable of accurately gauging the degree of ICU alarm reliability and validity by incorporating other clinical inputs into their decisions [6].
In summary, many clinical environments, especially those incorporating simultaneous use of multiple clinical devices and alarms, require the clinician to rapidly sift through clinical and device data to reach a clinical decision. Due to device distrust and the lack of adequate device analysis, the clinician self-assigns a perceived reliability value to each device and alarm to aid in clinical decision-making. We identified multiple sources of cognitive bias in the perceived reliability self-assignment process, all of which can culminate in medical error.
We suggest the following interventions for overcoming perceived reliability bias:
Improving Device Reliability
As a front-line intervention, hospital quality leaders should provide end users with existing device output reliability data. This should be rolled out along with a sweeping program for increasing awareness of the current limitations in device output reliability data.
Manufacturers need to improve individual device alarm reliability statistics and quality of device data processing and output by finding the optimal balance between sensitivity and specificity based on well-structured and verified algorithms.
Regulatory and legislative bodies should expand medical device certification requirements to include device output reliability and validity.
The medical device industry should be incentivized by regulatory bodies to develop and expand the range of devices that combine and process multiple inputs to provide fewer false alarms and more reliable device output data [27–29]. Such systems already exist but, unfortunately, have not been incorporated into most ICUs.
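The rationale behind such multi-input devices can be sketched with elementary probability. Assuming, for illustration only, that two alarm channels err independently and that an alarm fires only when both channels agree, the combined false-alarm rate falls sharply at a modest cost in sensitivity:

```python
# Back-of-envelope sketch (channel independence is an assumption):
# combined sensitivity/specificity when an alarm fires only if BOTH
# of two independent channels fire.

def combined_and(sens1: float, spec1: float,
                 sens2: float, spec2: float) -> tuple:
    """Operating point of a two-channel AND-combined alarm."""
    sens = sens1 * sens2                 # true event must trigger both channels
    fpr = (1 - spec1) * (1 - spec2)      # false alarm requires both to err
    return sens, 1 - fpr

# Hypothetical channels: 99%/85% and 97%/80% (sensitivity/specificity):
sens, spec = combined_and(0.99, 0.85, 0.97, 0.80)
print(f"combined sensitivity = {sens:.3f}, specificity = {spec:.3f}")
# sensitivity ~0.960, specificity ~0.970
```

Real physiological channels are of course correlated, so actual gains are smaller; the sketch only illustrates why fusing inputs, rather than maximizing single-channel sensitivity, is the direction the cited work [27–29] pursues.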
Strategies for Overcoming Perceived Reliability
Reducing the sense of need to utilize perceived reliability. This can be accomplished through the process of improving device output reliability coupled with increasing awareness of both validated device reliability and perceived reliability bias.
Establishing and validating strategies of triangulation through using multiple sources of data to achieve similar, or better, clinical decisions without having to rely on device output data.
Developing external interventions for increasing situational awareness, such as a mandated cognitive pause or de-freezing questionnaire [2]. This requires a form of protocolization, akin to the ‘time out’ in operating theatres, that is to be invoked when a set of clinical/situational criteria are met.
Future Research
We are introducing the concept of perceived reliability, and much about this phenomenon remains to be explored. Which conditions and variables affect device end users' sense of need to self-assign reliability values? Which conditions contribute to the clinician's accuracy or error in arriving at this self-assigned value? Which other fields that study uncertainty can provide conceptual and mathematical frameworks for analysing perceived reliability? Overall, answering these questions will demand intensive analysis of both devices and their human users.
References
Author notes
Equal contributors.