Update now or later? Effects of experience, cost, and risk preference on update decisions

Prashanth Rajivan, Efrat Aharonov-Majar, Cleotilde Gonzalez
Journal of Cybersecurity, Volume 6, Issue 1, 2020, tyaa002, https://doi.org/10.1093/cybsec/tyaa002
Abstract
Installing software updates is one of the most important security actions that people can take to protect their computer systems. However, people often delay installing updates. Why would people delay installation of security updates, knowing that these updates may reduce the risk of information loss from attacks? In a laboratory experiment, we studied how people learn to make update decisions from past experience. In a simulated "work" environment, participants could defend against low-probability, high-impact losses by installing a security update. The cost of updates was variable; participants could update immediately for a high cost or wait to update for free, risking increased exposure to attacks and losses. Thus, the optimal decision was to update immediately when the update was made available. The results from our experiment indicate that people learn from experience to delay security updates. The cost of the update and individual risk preference both significantly predicted the tendency to delay the update; people with a higher willingness to take risks may be more likely to neglect to update, keeping the status quo even when it is sub-optimal. We discuss the implications of these findings for the design of interventions to reduce delays in update installation.
Introduction
Systems running vulnerable versions of software, particularly from providers with widespread usage (e.g. Microsoft, Adobe, and OpenSSL), are commonly targeted during cyberattacks [1]. Updates or patches are periodically released to fix such security vulnerabilities. Timely installation of security updates is an important security measure individuals can take to reduce their exposure to attacks [2–4]. While many updates can be applied automatically (e.g. updates to software applications), critical updates, such as updates to the operating system, continue to require human consent and intervention [5–7]. However, people often delay or fail to take this important protective action [8–11]. The delay in installing a manual update can be a few days, a few weeks, or even months [8, 10–12]. For example, in the case of the WannaCry ransomware attack, an update addressing the vulnerability exploited by the malware was made available weeks before the actual attack; failure to update caused millions of devices worldwide to be infected [13].
Why would people delay installation of critical security updates that could reduce the risk of information loss from a cyberattack? Understanding what drives people to delay this important protective action is a first step toward solving this problem. Here, we build on behavioral economics and decision-making research to investigate the influence of three possible factors that may lead end-users to delay taking protective decisions: experience, opportunity cost, and individual risk preference.
First, we investigate the effect of individual experience. Every day, people make many recurring decisions with possible adverse outcomes, such as crossing a street, wearing a seat belt, choosing to vaccinate, backing up a hard drive, using a strong password, and installing security updates. For such everyday decisions, people typically do not have access to an explicit description of the risk; instead, they rely primarily on their own past experience. For example, one may decide to reuse a simple, familiar password for an online account because a similar decision in the past (as retrieved from memory) did not cause any harm, whereas the decision to use a long password had problematic consequences because the long password was forgotten. Studies of risky choices made from experience have repeatedly found that people tend to underweight the probability of rare events [14, 15]. In the case of security updates, from an end-user's perspective, an attack or breach resulting from a delayed update can be considered a rare event [16]. The more probable outcome of delaying an update is that nothing happens and the end-user is repeatedly prompted with annoying update reminders. We expect that previous experience influences people's beliefs regarding the probability of attack, leading them to believe that an attack is less common than it really is, and thus influencing their decision to delay an update.
Second, we investigate the effect of opportunity cost. The primary goal of end-users is to use a particular application or system to accomplish their tasks; security is usually a secondary goal, valuable only to the extent that it allows them to achieve their primary goal. Therefore, the cost of choosing a protective action (installing an update) can involve disruption to primary work, requiring time, effort, and/or money [16]. For example, a security update may require a system reboot that takes time, or the new update may not function according to the individual's expectations [6, 7, 17, 18]. Here, we investigate whether this opportunity cost of installing an update plays a role in end-users' choice to delay this action.
Third, we investigate the impact of individual risk preference. Consider two individuals with identical financial conditions, Person A and Person B, who are presented with a risky but high-payoff investment. Suppose Person A chooses to make the risky investment whereas Person B rejects the opportunity. The difference in their observed behavior could be explained by their individual willingness to take financial risk [19]. Similarly, different individuals choose to update at different speeds: some update early whereas others typically delay installing an update [20]. Differences in update speed between individuals could likewise be explained by an underlying risk-taking "trait" or characteristic. We expect that people with a low risk-taking preference will choose to update early [21]. Thus, in a laboratory experiment, we analyze people's update decisions (speed of update) and risk-taking behavior with repeated experiences in an environment with variable cost conditions: a costly immediate update versus a free but delayed update.
In what follows, we present a review of relevant literature on security updates and behavioral research. Then, we present a novel experimental paradigm, "Repeated Protective Decisions (RPD)," that emulates end-users' two main types of decisions: a primary decision task (representing users' ongoing work) and a secondary task of security update decisions (representing users' protective decisions), over a course of "days" (trials) and periods (e.g. "weeks"). We present the results from this experiment and use a regression model to explain the factors that predict update decisions. Results from this work can be helpful in the design of policies and solutions to mitigate delays in update decisions and can inform the development of security risk models that account for the gaps in human protective actions [22–24].
Related work
With the growing awareness that security is not only a technical problem but also a social and economic one, recent work on security updates spans disciplines such as economics, game theory, and Human-Computer Interaction. From an economics perspective, the current literature has addressed the incentives that drive human decisions to update [25]. Numerous models have been developed to understand the optimal timing of security patch releases in relation to vulnerability disclosure [26–29]. However, the majority of these models make idealistic assumptions about the human decision-making process, including rationality, homogeneity in people's risk and cost preferences, and full-information conditions, suggesting that humans can interpret situations and make optimal choices based on complete knowledge of the incentive structure (e.g. probability and severity of hazards) [23]. From a human factors perspective, several factors have been proposed to explain delays in installing updates, such as disruption to primary work, difficulty in understanding update messages, and individual risk-taking tendency.
Disruption to primary work
Disruption to primary work, complementary to the argument about cost and incentives put forth in the economics literature, is a reason commonly self-reported by users for delaying update installation [7, 11, 18, 20, 30–33]. One theory, based on "present bias" [34], suggests that people may delay installing updates because they discount the future benefits of updating (avoiding the loss from an exploit) in favor of immediate gratification from their primary task [35, 36]. Many others have suggested a role of past experience in security update decisions [6, 7, 18, 31]. For example, Vaniea et al. found that users learn to avoid or delay installing updates for applications that made surprising changes to their user interfaces, impacting productivity and user interaction [31]. Forced reboots at inopportune times as part of automatic updates are another annoyance commonly self-reported as a reason for perceiving updates as a disruption to work [6, 18, 20]. In this research, we further study the role of past experience and opportunity cost—the disruption (time, work, interaction) incurred for the benefit of protection from threats—on repeated update decisions in a laboratory experiment.
Risk communication
End-users' difficulty in understanding update messages, and why updates are important, is another reason widely reported as a cause of delays in installing updates [30, 33, 37–41]. People report being frustrated by the lack of information on the urgency of an update [31], emphasizing the need for better risk communication [33, 41–44]. At the heart of this approach stands the assumption that users are simply not aware of the risks, and that better communication of these risks, or framing them differently, would increase their willingness to take the appropriate protective actions [33, 41, 45, 46]. While risk communication is certainly relevant to our research question, other, less-explored possibilities are also relevant. As mentioned before, individuals are often not explicitly aware of the probability of a cyberattack and rely heavily on their experience, which has been shown to lead to systematic behavioral patterns such as the underweighting of rare events, in this case underweighting the possibility of an attack learned from experience [43].
Individual differences
Individual characteristics (e.g. gender, attitude, perception, and risk-taking) are another dominant factor found to influence a variety of security behaviors, including security updates [21, 47–51]. One study compared expert and non-expert users' attitudes toward security updates [52]: 35% of experts considered installing updates one of the top three things they do to stay safe, while just 2% of end-users made the same recommendation [52]. Others have explored individual characteristics such as cost perception and risk-taking [21, 27, 53, 54]. Farhang and colleagues found that people who updated frequently perceived the cost of updating to be lower than did individuals who seldom updated; however, it was not clear whether this perception was influenced by the real cost or by differences in individual preferences [53].
Egelman and Peer found that individuals who were more willing to take financial risks (e.g. invest in the stock market) were also less likely to demonstrate proactive security behaviors such as the timely installation of security updates [21]. Follow-up research has also found evidence for this relationship [54]. In contrast, a survey of Android users conducted by Mathur and Chetty found that users with a lower willingness to take financial risk (the risk averse) were also more likely to avoid or disable auto updates, refusing automatic protection [6]. They argue that this contradiction with earlier work may be due to the different underlying risk associated with auto updates: undesirable consequences for user interaction rather than risk from attacks [6]. In this research, we test whether there is a relationship between financial risk-taking and an individual's likelihood of delaying a "manual" update.
Intention-behavior gap
Research using controlled laboratory experiments to study factors impacting the optimality of security update decisions is limited. The majority of user-centered research on security updates has been conducted via surveys and interviews or through one-shot decisions [35]. An important limitation of survey-based research on update decisions is the intention-behavior gap, where expressed security preferences and actual behavior do not align [7]. Specifically, studies report a discrepancy between the reported intention and the actual speed of updates [18, 20]. For example, Redmiles et al. compared the actual speed of updates to the speed self-reported by survey respondents and found systematic over-reporting of update speed [20]. When respondents were asked to recommend an update speed for a friend, they recommended updating "immediately," whereas when asked about their own behavior, they said they would update within 1 week [20]. However, actual measurements revealed that most users, on average, delayed their updates by several weeks [20]. Similarly, Wash et al. found that people who reported that they "update when convenient" were in fact most often forced to update by the operating system [18]. A possible explanation for this disconnect is that respondents' descriptions of their experience with security updates may be based on their memory of a single, salient update instance that is not consistent with their regular interactions or recent update decisions. Hence, it is necessary to test update decisions in interactive experimental settings, where people's actions can be tracked and cognitive processes can be analyzed.
A decision from experience perspective
The study of protective decisions has long been a topic of interest in the decision sciences, from insurance decisions to health decisions (e.g. sunscreen use, vaccinations). Indeed, some approaches to studying human behavior in cybersecurity were inspired by existing models of protective decisions in healthcare and transportation research [44]. However, research on the economics and psychology of decision making in cybersecurity has been limited to studies on threat response and adversarial behaviors [55–59]. Previous research has shown that experience-based and description-based (one-shot, full-information) decisions lead to different choices. When people are asked to make decisions based on a description of the problem, they tend to overweight rare events [60–62]; however, when they make repeated decisions from experience, they appear to underweight rare events [14, 15]. This, in turn, affects their tendency to take safety measures. For example, Yechiam and colleagues showed that drivers who purchased safety devices (e.g. a radio with a detachable panel) "learned" to stop using them within a year [63]: because radio theft is not an everyday experience, people came to believe the safety measure was unnecessary.
Taken together, the current literature suggests that when making decisions from experience, humans tend to behave as if low-probability events happen even less often than they do in reality. Cyberattacks do not occur very frequently (from an individual end-user's perspective) and likely have not occurred very recently either; the memory of attacks therefore decays, leading to underweighting of the probability of an attack. Combined with the cost of security updates, we predict that with feedback and experience, participants will learn to avoid and delay security updates, in agreement with past research [63]. We also expect the tendency to delay updates to be more pronounced for individuals who are willing to take more risks. We contribute to the existing literature by introducing the naturalistic assumption that the cost of taking protective actions may vary over time, and that people adapt their update decisions according to their experience. We focus on security updates because of their periodic nature: an update provides protection only for a limited time, forcing users to repeat their decisions periodically. In this context, feedback from previous decisions is especially important for future decisions.
Experiment
The repeated protective decision paradigm
The Repeated Protective Decision (RPD) paradigm is a task emulating the primary task of end-users and their security actions. It is a novel paradigm that builds on paradigms used in the field of decisions from experience, with features specific to security update decision making. Building on the literature, three assumptions guided the design of the task: (i) updating is a secondary task, (ii) update decisions are repeated, and (iii) updating is usually a costly action that takes time away from a primary (income-making) task. Figure 1 presents the task interface. The initial instructions explain that the task includes 20 periods, and each period includes 10 days (see Figure 2). On each day, participants are asked to make a binary choice (the primary task, an "investment choice") and, if a security update is available, to choose whether they want to install it. In the binary choice task, option "A" yielded an outcome of +2 points with certainty, and option "B" yielded either +4 or 0 points with P = 0.5, so that the expected value (EV) of the two options is equivalent (EV = 2). Thus, the choices in the primary task should not affect participants' overall payoffs. The simplicity and routine nature of the primary task were intended to ensure that it did not require much attention.

Figure 1: The interface of the task. This screen shows the primary task (i.e. choose between two investment options, A and B) and the secondary task, indicating that an update is available and that its cost is 10 points on Day 1 of Period 4.

Figure 2: The instructions participants received at the start of the experiment.
In addition, participants are asked to decide whether they want to "update their system" to protect against a possible loss of 100 points resulting from a security failure. Updating the system may be costly (i.e. 10 points), but an update reduced the probability of losses by more than half until the end of the period. Notice that this design makes updating a secondary task, to simulate real-life security scenarios. The decision to update could be made on any day during a period, from the time the update became available at the start of the period until the end of the period. We assume the cost of an update is dynamic: on certain days the cost of updating is high, while on other days it may be zero (e.g. automatic updates). The player is informed about the cost of the update on each day, even on days following an update decision in the period. Choosing to update implies an immediate cost for the user, but this cost is significantly lower than the possible loss from an attack. Finally, based on previous research [64], we assume the information that participants receive is asymmetrical: participants receive feedback on the cost of updates on any day (regardless of their decision) but do not receive information on prevented attacks (participants are generally aware of the lowered risk but not of whether they would have been attacked).
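To make the task mechanics concrete, below is a minimal simulation sketch of a single period under these rules. It is our own illustration, not the authors' software: the function name, the fixed-day update strategy, and crediting the primary task at its expected value are simplifying assumptions.

```python
import random

HIGH_COST, FREE = 10, 0
ATTACK_LOSS = 100
P_DISCOUNT = 0.15        # chance the update cost drops to zero (Days 2-10)
P_ATTACK = 0.03          # daily attack probability while unprotected (Days 2-10)
P_ATTACK_UPDATED = 0.01  # daily attack probability after updating

def run_period(update_day=None, days=10):
    """Simulate one 10-day period; update_day=None means never update."""
    points, updated, update_available = 0, False, True
    for day in range(1, days + 1):
        points += 2  # primary task: options A and B both have EV = 2
        # The update cost is always high on Day 1; later days may be free.
        cost = HIGH_COST if day == 1 or random.random() >= P_DISCOUNT else FREE
        if update_available and not updated and update_day and day >= update_day:
            points -= cost
            updated = True
        if day > 1:  # no attacks are possible on Day 1
            p = P_ATTACK_UPDATED if updated else P_ATTACK
            if random.random() < p:
                points -= ATTACK_LOSS
                update_available = False  # no updates after the first attack
    return points

print(run_period(update_day=1))     # immediate (optimal) update
print(run_period(update_day=None))  # never update
```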
Participants
We collected data from 97 participants (39% female, median age = 32 years) on Amazon MTurk [65]. Participants were paid $1.50 as a base payment for their time. In addition, participants could earn up to $1.50 as a bonus corresponding to the number of points they accumulated in the experiment. The bonus was used to incentivize participants to pay attention and perform the task to the best of their ability. To avoid actual losses, participants whose accumulated reward at the end of the experiment was negative received $0 as a bonus. All experiment protocols were approved by the university institutional review board (IRB).
Experimental design
Table 1 summarizes the incentive structure used in the experiment. In a within-subjects design, we primarily manipulated the cost of updates. In this experiment, the cost of an update represents the end-user's opportunity cost, which can vary. Taking time away from the primary task to update (e.g. rebooting the system) is costly on most days, whereas on some days it is less costly to do so. For example, on a weekday, while at work, rebooting a personal laptop to update is likely to be seen as costly, as opposed to on a weekend. We capture this cost only at the day level, not at a time-of-day level. It is not intended to represent update-specific costs, such as the cost of installing a large, slow update, which may not vary between days [20]. For simplicity, we included only two levels of cost: high cost (10 points) and low cost (0 points). The low cost (a free update) represents minimal to no effort/time to update, whereas the high cost represents significant effort (e.g. rebooting the machine while performing a critical work-related task).
Table 1: Experiment variables

| Initial endowment | Attack loss | High-cost update | Low-cost update |
| --- | --- | --- | --- |
| +100 | −100 | 10 | 0 |

Probabilities and EVs

| P (discount) | P (attack) | P (attack after update) | EV for immediate update | EV for delayed update |
| --- | --- | --- | --- | --- |
| 0.15 | 0.03 | 0.01 | −19 | −19.25 |
The 20 periods in the paradigm were divided into two phases. The first phase, consisting of three periods, introduced the primary task and its periodic nature. The option to update was not available during these first three periods (the training phase). Furthermore, to give participants an experience of loss, every participant experienced exactly one certain loss (100 points) from an attack on a randomly chosen day in Period 2. To compensate for this sure loss, participants received 100 points as an initial endowment (see Initial Endowment and Attack Loss in Table 1). They were not at risk of experiencing any further attacks in Periods 1–3.
In the subsequent periods (Periods 4–20), a security update was made available from Day 1 of each period. By design, the cost was always high (10 points) on the first day of every period; thus, the cost of an immediate update was always high. From Day 2, there was a 15% chance [P (discount)] that the cost would be reduced to zero points. Participants were at risk of losing points from "security failures or attacks" if the system was left unprotected (if they decided not to update). The probability of an attack was 0.0 on Day 1 and 3% on Days 2–10. Choosing to update reduced the risk of such attacks for the rest of the period by more than half, from 3% to 1% [see P (attack after update) in Table 1]. For example, if a participant updated immediately on Day 1, their risk was reduced for all nine remaining days in the period; if a participant updated on Day 5, the risk was reduced only for the five remaining days; and if a participant chose not to update, the risk remained high throughout the period. Each instance of an attack led to a loss of 100 points, which was deducted from the cumulative earnings from the primary choice task. Participants were never attacked on Day 1, to give them an opportunity to make an update decision before experiencing an attack in a period. Also, updates were no longer available on days after the first attack in a period.
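As a sanity check on the incentive structure, the expected value of the optimal strategy can be computed directly from these parameters; a sketch of our arithmetic follows, matching the EV of −19 for an immediate update reported in Table 1. The EV of −19.25 for a delayed update depends on the specific waiting strategy and is taken from Table 1 rather than derived here.

```python
# Immediate update: pay the Day 1 cost, then face the reduced 1% risk
# on each of the nine remaining days (no attack is possible on Day 1).
cost, loss = 10, 100
ev_immediate = -cost - 9 * 0.01 * loss
print(ev_immediate)  # -19.0, matching Table 1

# Never updating leaves Days 2-10 at the full 3% risk:
ev_never = -9 * 0.03 * loss
print(ev_never)      # -27.0, worse than updating immediately
```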
Procedure
After providing informed consent, participants entered basic demographic information (age and gender); they then read a hypothetical scenario that provided context for the experimental task (see Figure 1). As part of the context, participants were told they were making routine investment decisions and that a security failure might randomly arise that could be avoided by choosing to update. Participants were not made aware of the probabilities of the two choices in the primary task; they were simply informed that "A" was a safe choice with a low payoff and "B" was a risky choice with a high payoff (see Figure 2). However, participants received full information about all game variables pertinent to the security update decision, such as the attack probability, the loss from attacks, and the variable cost of the update (see Figure 2). Participants were also made aware of the conversion rate from experiment points to the bonus in actual dollars (see the section on payment in Figure 2). Each participant completed 200 trials (framed as "days") divided into 20 periods. As part of the training phase, in Periods 1–3, participants made repeated decisions only on the primary task and experienced one certain loss in Period 2. At the end of the training phase (Period 3) and before Period 4, participants were informed that security updates were now available to them; they were also reminded about the variability in the cost of updates and the effect of updating on reducing the probability of attacks. In the subsequent periods, 4–20, they made decisions on both the primary investment task and the secondary update task. At the end of each period, participants received feedback about the points they had accumulated so far and any attacks suffered during the 10 days of that period. We collected all decisions, including those on the primary task, and analyzed decisions from only the main phase (Periods 4–20) of the experiment.
Results
We analyzed participants' update decisions made between Period 4 (the period in which updates first became available) and Period 20. As mentioned earlier (see Table 1), the optimal decision was to update immediately on Day 1 of every period ("immediate update"). The more participants delayed their update during a period, the greater the risk of experiencing an attack (more days during which the system was unprotected). Updating later than Day 1 ("delayed update") or not updating at all was sub-optimal.
Proportion of immediate updates, delayed updates, and no updates
On average, across all 17 periods, participants updated 53.7% of the time (SD = 0.07); they updated immediately in 34.1% (SD = 0.47) of the periods. Figure 3 breaks down the proportion of immediate updates, delayed updates, and no updates over the course of the 17 periods. As seen in the figure, there is a sharp decrease in the rate of Day 1 updates across periods, from 71.1% of participants updating on Day 1 in Period 4 to 43.3% in Period 5 and 31.2% in Period 6, followed by a relatively stable rate of immediate updates in the remaining periods. Correspondingly, the figure also shows an increase in the rate of delayed updates and, especially, an increase in no updates across periods. For example, we observe a sharp increase in the rate of no updates from 21.6% in Period 4 to 46.3% in Period 5. These observations suggest that participants made increasingly more sub-optimal decisions with additional experience.

Figure 3: Proportion of immediate updates (in red), delayed updates (in green), and no updates (in blue) by period, Periods 4–20.
The increase in the proportion of delayed updates over the periods provides some support for the hypothesis that participants delayed the decision with the intention of updating at a lower cost. To test this, we compared the rates of two kinds of delayed updates: costly versus free. The proportion of delayed updates made when the cost was zero was 74.3%, higher than the 25.6% of delayed updates made when the cost was 10. A Wilcoxon signed-rank test indicated that this difference is statistically significant (Z = 13488, P < 0.001). It should be noted that this does not imply that participants updated every time a zero-cost update was available: many participants neglected to update in a period even though a free update was available in that period (e.g. the no-update rate in Figure 3). Hence, cost is not the only significant predictor of an update decision in a period.
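The comparison above is a paired test across participants. As a minimal sketch of how such a test could be run with SciPy, consider the snippet below; the variable names and simulated values are our own placeholders, not the study data (the paper reports 74.3% vs. 25.6% overall).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 97

# Placeholder data: each participant's share of delayed updates made at
# zero cost vs. at high cost.
free_share = rng.beta(7, 3, size=n_participants)
costly_share = 1 - free_share

# Paired, non-parametric comparison of the two proportions per participant.
statistic, p_value = stats.wilcoxon(free_share, costly_share)
print(statistic, p_value)
```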
Model of update decisions in a period
An update decision (y) in each period j can be represented using an ordinal variable with three levels, indicating the three possible types of update decisions made in each period: immediate update, delayed update, and no update. For each participant i and period j: yij = 2 if participant i updated immediately on Day 1 of period j; yij = 1 if participant i delayed the update in period j; and yij = 0 if participant i did not update on any of the days in period j.
Using this representation, we fit a mixed-effects, ordered logistic regression model to determine the effect of recent events on update decisions in a period. We examine the effect of events in one period on the update decision in the following period. For example, the update decision of a participant in Period 5 is predicted using events that occurred in the previous period (Period 4). Specifically, the model estimated the effect of the points earned in the previous period ("Production"), the number of days with zero-cost updates experienced in the previous period ("Zero-cost days"), whether an update was made on Day 1 of the previous period (IUPP), and whether an attack occurred in the previous period (APP). The results of this model are summarized in Table 2.
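As a concrete illustration, the sketch below codes the ordinal outcome and builds the lagged predictors on a toy data frame. The column names and sample values are our own illustrative choices (the authors' analysis code is not published); the final fit would be a mixed-effects ordered logit with a random intercept per participant, e.g. via R's ordinal::clmm, while statsmodels' OrderedModel covers a fixed-effects-only variant.

```python
import pandas as pd

# Toy data: one row per participant-period; update_day is None if no update.
df = pd.DataFrame({
    "participant":    [1, 1, 1, 2, 2, 2],
    "period":         [4, 5, 6, 4, 5, 6],
    "update_day":     [1, 4, None, None, 1, 1],
    "points":         [18, -84, 20, 14, 20, 8],
    "zero_cost_days": [2, 0, 1, 3, 1, 0],
    "attacked":       [0, 1, 0, 0, 0, 1],
})

# Ordinal outcome: 2 = immediate update, 1 = delayed update, 0 = no update.
df["y"] = df["update_day"].map(lambda d: 0 if pd.isna(d) else (2 if d == 1 else 1))

# Predictors are events from the previous period, lagged within participant.
lagged = df.groupby("participant")[["points", "zero_cost_days", "attacked", "y"]].shift(1)
df["production_prev"] = lagged["points"]          # "Production" in Table 2
df["zero_cost_prev"] = lagged["zero_cost_days"]   # "Zero-cost days" in Table 2
df["IUPP"] = (lagged["y"] == 2).astype(int)       # immediate update in prev. period
df["APP"] = lagged["attacked"]                    # attack in previous period

print(df.dropna())  # rows with a defined previous period, ready for model fitting
```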
Table 2: Mixed effects ordinal logistic model predicting update decisions in a period

| Predictor | Beta estimate | Std. error | z-value | P-value |
| --- | --- | --- | --- | --- |
| Period | 0.094 | 0.060 | 1.572 | 0.116 |
| Production | 0.043 | 0.106 | 0.408 | 0.683 |
| Zero-cost days | −0.075 | 0.062 | −1.219 | 0.223 |
| Immediate update in previous period (IUPP) | 1.238 | 0.196 | 6.302 | 0.001*** |
| Attack in previous period (APP) | 0.651 | 0.189 | 3.444 | 0.001*** |
| APP × IUPP | −0.982 | 0.414 | −2.374 | 0.018* |

*P < 0.05; **P < 0.005; ***P < 0.001.
Estimates from the model reveal that participants who updated immediately in one period were also more likely to update immediately in the following period (βIUPP = 1.238, z = 6.302, P < 0.001; see Table 2). Across participants, comparing updates between every pair of consecutive periods, we found that in 71% of instances a Day 1 update was preceded by a Day 1 update in the previous period. Likewise, failure or delay to update in one period made failure or delay more likely in the following period. These results explain the eventual stability in the different types of update decisions across periods after the initial drop in immediate updates and rise in the proportion of no updates (Figure 3). We did not find a significant effect of period on the type of update decision (βPeriod = 0.094, z = 1.572, P = 0.116). The model results also show no effect of the points accumulated in the previous period ("Production") or of the number of zero-cost days experienced in the previous period.
Effect of experiencing an attack
Delaying updates exposed participants to the risk of attacks. We found that 96% of participants experienced at least one attack in the entire experiment. Among the 93 participants who faced at least one attack, only 47 participants updated in the period following the attack, and not necessarily on the first day.
In what follows, we examine how the recent experience of an attack affected participants' decision to update in the following period. Estimates from the model (Table 2) show that having experienced an attack in the previous period significantly influenced the update decision in the following period (βAPP = 0.651, z = 3.44, P < 0.001). We also found a significant negative interaction between having experienced an attack in the previous period and having updated on Day 1 of that same period (βAPP×IUPP = −0.982, z = −2.37, P = 0.018). Since these are discrete variables, the effects are interpreted as follows: if a participant was attacked in the previous period after making an immediate update in that period, they were less likely to update in the following period; if a participant did not update on Day 1 in the previous period and was attacked on one of the following days of that period, they were more likely to update in the following period (see Table 3).
Table 3: Interpretation of interaction effect between attack and immediate update in previous period

| Immediate update in previous period? | Attacked in previous period? | Immediate update in following period? |
| --- | --- | --- |
| Yes | Yes | Less likely |
| No | Yes | More likely |
Individual differences in update patterns
To quantify individual update patterns, we computed for each participant a weighted sum of their update decisions across the periods, with immediate updates weighted more heavily than delayed ones. This measure ranges from 0 (the participant did not update in any of the periods) to 17 (the participant updated on Day 1 in all 17 periods). Figure 4 shows the distribution of this measure across all participants. The figure reveals substantial individual differences in the way participants updated across the periods: a few participants never updated, a few always updated immediately, while the majority occasionally updated either immediately or later during a period.

Figure 4: Distribution of the weighted sum of update decisions, demonstrating individual differences in update patterns.
To further understand the effect of individual differences on update behavior, we test the effect of two factors—individual risk-taking tendency and individual preference for lower update cost—on the average update delay. Next, we describe how these individual-level variables are measured and present the statistical model describing their relationship.
Risk-taking
Decisions made as part of the primary task in the experiment (the choice between a safe and a risky option) were used to measure each participant's risk-taking tendency. We observed a large variance in participants' preference for risk in the primary task. The total possible number of risky choices was 170 (17 periods × 10 days per period). We counted the total number of risky choices made by each participant and used it as a measure of risk-taking, Ri; the higher the Ri, the stronger the participant's tendency toward risk-taking.
We found that the average Ri was 76.42, indicating that participants on average made about 76 risky choices. The minimum number of risky choices made was 0, indicating very high risk aversion (these participants "never" chose the risky option), and the maximum was 170, indicating very high risk-taking (these participants "always" chose the risky option).
Preference for zero-cost updates
Equation (3) represents the preference for zero-cost updates for participant i, computed from ZCUi, the number of zero-cost updates made by participant i; ZCAi, the measure of zero-cost availability for participant i; and Ni, the total number of decisions made by participant i. ZCAi is the number of periods in which the participant experienced at least 1 day with a zero-cost update. This measure of zero-cost availability (ZCAi) ranged from 9 to 17 periods; participants on average experienced 13 periods with at least one zero-cost update day. Only one participant experienced at least one zero-cost update day in all 17 periods.
Higher values of this measure indicate a stronger preference for zero-cost updates. We found that many participants demonstrated no preference for zero cost, either because they consistently updated at the higher cost on Day 1 of each period or because they consistently neglected to update (see Figure 4). Values ranged from 0 to 0.85, with a mean preference of 0.15.
Predictors of delay in update decisions
In Figure 3, we observed that participants learned to delay their updates or to not update at all. We calculated the average delay for each individual across the 17 periods. In each period, the last available day to update was Day 10 (a sub-optimal decision); a no-update decision in a period was coded as Day 11, indicating the maximum possible delay. Figure 5 shows the distribution of the average delay across all 97 participants. We observe a right skew in the distribution, indicating that a large proportion of participants, on average, chose to delay their updates. The correlation between individuals' average delay and the overall payoff accrued at the end of the experiment shows a marginally significant negative relationship (r = −0.19, P = 0.059), suggesting lower payoffs with longer delays. The correlation is only marginal because the final payoff is also strongly determined by the number of attacks experienced (r = −0.94, P < 0.001).
Figure 5: Distribution of the average delay in updating across all 97 participants.
We fit a linear regression model to predict the average delay in updating, using individuals' level of risk-taking and preference for zero-cost updates as predictors. As shown in Table 4, both risk-taking and preference for zero-cost updates had a significant effect on individuals' tendency to delay updates. Individuals with high risk-taking and a preference for zero-cost updates were more likely to delay updates. It should be noted that there was no correlation between individuals' risk-taking and their preference for zero-cost updates (r = 0.07, P = 0.49).
Table 4: Linear regression model predicting average delay in update using risk-taking and preference for zero-cost updates as predictors

| Predictor | Average delay (dependent variable) |
| --- | --- |
| Preference for zero cost | 0.202** (0.098) |
| Risk-taking | 0.233** (0.100) |
| Constant | −0.012 (0.098) |
| Observations | 96 |
| Residual std. error | 0.958 (df = 93) |
| F-statistic | 4.506** (df = 2; 93) |

*P < 0.1; **P < 0.05; ***P < 0.01.
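A sketch of how a regression of this form could be fit with statsmodels is shown below. The data are randomly generated placeholders and the variable names are our own; with standardized predictors and outcome (which the small, comparable coefficients in Table 4 suggest), the fitted betas are directly comparable across predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 96  # number of observations in Table 4

# Placeholder individual-level measures (ranges follow the text above).
d = pd.DataFrame({
    "risk_taking": rng.integers(0, 171, n),     # count of risky choices, 0-170
    "pref_zero_cost": rng.uniform(0, 0.85, n),  # preference for zero-cost updates
    "avg_delay": rng.uniform(1, 11, n),         # mean update day; 11 = no update
})

z = (d - d.mean()) / d.std()  # z-score all variables for comparable betas
X = sm.add_constant(z[["pref_zero_cost", "risk_taking"]])
model = sm.OLS(z["avg_delay"], X).fit()
print(model.summary())
```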
Discussion
In a behavioral economics experiment, we examined the role of experience with rare probability events (i.e. attacks) on update decisions. We find that people are likely to learn from experience to underweight the risk of attacks and make sub-optimal security decisions. Although the optimal (utility-maximizing) decision in this experiment was to update immediately (on Day 1) in each period, the majority of participants quickly learned to delay the update or not update at all. The negative relationship between average delay and final payoff demonstrates the negative impact of sub-optimal decisions on payoff outcomes in the experiment. This finding is in line with related work on two-factor authentication (2FA), which also found that people in low-risk conditions were more likely to make sub-optimal decisions, in that case failing to enable 2FA [66]. That work suggested insufficient risk information as the primary explanation [66], but it did not analyze repeated decisions and therefore did not test the effect of experience. While risk communication is certainly relevant, our results point to end-users' past experience as a factor influencing sub-optimal security behaviors. Our results are supported by previous findings in the decisions-from-experience literature demonstrating underweighting of rare events [67], and stand in contrast to models based on theories of decisions from description, such as Prospect Theory, which predict a high proportion of immediate updates [23]. Thus, our results demonstrate the description-experience gap [15] and stress the need to examine the effect of experience and feedback on security decisions. Users rarely have the convenience of making updates given explicit descriptions of attack probabilities and need to rely on their own experience to make decisions.
In most periods of the experiment, the rate of immediate updates was not higher than chance (50%). The overall speed of updates observed in this experiment is in line with previous observations and measurements. For example, Möller et al. [9] analyzed updates to mobile applications over 102 days and found that, on average, only a minority of users (17%) installed an update on the day of its release; the update rate dropped significantly over the 102 days, and only 53.2% of users, on average, ended up updating within a week. Other works have measured the actual speed of update to be within several weeks of an update becoming available [20, 68].
Not all participants in the experiment made sub-optimal decisions. We found large differences in participants' individual update speeds (see Figures 4 and 5). Some participants consistently updated on Day 1 in the majority of periods, others consistently delayed but eventually updated, and a large proportion of participants consistently neglected to update at all. This general tendency of participants to repeatedly update in a certain way is consistent with results from Redmiles et al. [20], who found that people tended to anchor to their typical security behaviors; they observed this effect in both survey reports and actual measurement data. This result further reinforces the importance of efforts to augment user habits and experience to improve proactive security behaviors [20, 35, 52].
So, why did some individuals delay or neglect to take protective decisions while others did not? Overall, the majority of participants who updated late during a period did so when the update was offered for free, indicating that their preference to delay was deliberate, motivated by the expectation of a lower cost, and not merely the result of forgetfulness or irrationality [16]. At the extreme of delaying updates, we observed 15 choices (by 15 different participants) to update for free on the last day of the period. This action is of course useless, given that no protection is gained from updating at the end of the period. This extreme deviation from maximization further demonstrates the important role of cost in update decisions.
To further understand the productivity-security trade-off in security updates, we measured the impact of two individual-level variables on the average delay to update: the number of risky choices made and the individual preference for zero-cost updates. A higher proportion of risky choices indicated that the participant was willing to take greater financial risk in the expectation of accumulating a higher payoff. A higher preference for zero-cost updates indicated that the participant was more willing to take protective actions when the cost was zero. Our analysis revealed a significant positive effect of both risk-taking and preference for zero cost on the average delay. Participants with a high preference for zero-cost updates were more likely to delay making updates, and individuals who made more risky choices in the primary task were more likely to neglect updating. This finding is consistent with prior work suggesting a strong effect of individual risk-taking on update decisions [21, 54]. A strong correlation between the number of risky choices in the primary task and the number of no-update decisions indicates that people with a higher affinity toward production were more likely to avoid or neglect updating; there was, however, no correlation between preference for zero cost and the number of no-update decisions. These results suggest a significant role for the productivity-security trade-off in individuals' decisions to update early, update late, or never update at all. People with a strong preference for zero cost are willing to accept a distraction from the primary task when updating is not costly (e.g. on weekends or at night). On the other hand, people with a higher risk preference may be indifferent toward security and therefore willing to avoid updating even when it is offered for free. It is possible that people with a higher risk preference also intended to wait for a free update but forgot to update when it became available, keeping the status quo even when it was sub-optimal (increased exposure to attacks); they may update only when eventually forced to by the provider [18].
Furthermore, we found evidence for recency effects [69], where a negative outcome (i.e. the experience of an attack) results in a momentary over-weighting of the rare event, making participants more likely to update early in the following period. Our results showed that participants who did not update early and subsequently experienced an attack were more likely to update in the next period, demonstrating positive recency (more cautious behavior after an aversive rare event) [69]. Interestingly, we observe this effect only when an attack followed a failure to update early. It seems that participants' immediate regret at not protecting themselves may have motivated them to update early in the following period. However, due to the sporadic nature of attacks, further research is needed to determine the extent of the effect of attacks on updates and whether it decays with time. Similar patterns are often observed with backup behaviors, where people are more likely to back up their machine immediately after a computer failure but fail to continue doing so.
Another type of regret was observed when an attack was experienced after making a decision to protect. Participants who were attacked after choosing to update in a period were less likely to update in the following period. It should be noted that participants were informed there was a 1% chance of experiencing an attack even after updating. The rare event in this case was therefore the lack of protection offered by the update, and the recency effect led to a momentary over-weighting of this rare event, lowering the likelihood of an update decision in the following period. Follow-up research is necessary to understand the extent of this kind of regret.
Our results show that the positive effect of communicating the probability of threats to users is short-lived, and that experience could eventually lead them to ignore this information. This is supported by a previous study by Erev and colleagues, who demonstrated that when participants are presented with two gambles with known probabilities and outcomes, they act as if they overweight small probabilities; however, once they start receiving feedback on their choices, this phenomenon is reversed [70]. The same was observed when the dynamics of choice and the experience of losses were analyzed over time: the experience of a rare loss appears to result in a momentary over-weighting of that rare outcome due to recency, but this effect may be stronger or weaker depending on the interplay of the frequency of the rare event, its recency, and the magnitude of the outcome [69]. Suggestions from past research that call for improving information disclosure about the importance of a security update for the protection of the system, as a way to improve security update decision making, may therefore not be sufficient [7, 18, 31].
Reducing cost variability through automatic updates could be seen as a direct implication of these results. However, past research has raised concerns with making automatic updates the default, including diminishing users' security mental models by keeping humans out of the security decision loop [18, 31, 71]. When humans are constantly kept out of the decision loop, it becomes harder for them to learn from experience and adapt to instances when automation fails [18]. Alternatively, people could be offered the option of scheduling an update at a more convenient time, reducing the disruption to present work; but such an option is likely to encourage users to procrastinate further [36]. This research proposes creating solutions that allow people to learn, from experience, to update quickly through feedback and reinforcement, particularly in adversarial environments where explicit descriptions of the attacker's actions may not be available, as is the case for security update decisions. In addition to making security updating easier, it is essential to focus efforts on modifying people's security habits and behaviors to produce meaningful, system-level improvements. Modifying people's behavior is a challenge but could be achieved through security training, feedback, rewards, and stringent policies. For example, training people to commit to a specific time for updating their machines and rewarding them for following through on that commitment could develop good security update habits. Subsidies for good behavior (e.g. early updates) and penalties for bad behavior (e.g. delayed or neglected updates) are another approach explored in the past to improve protection decisions [24]. This is also in line with previous research on gentle and frequent reinforcement through reminders or small rewards, which has led to improvements in workers' safe behavior [72, 73]. Feedback in the form of potential attacks evaded and losses prevented by the updates made could also help develop good security habits.
In a recent publication, we used this experiment design to explore the impact of cost variability on update decisions [74]. In that work, participants performed a similar task under two different cost conditions with similar payoff distributions: fixed cost and variable cost [74]. As predicted, participants in the fixed cost condition, where the cost of the update was constantly high in all trials, learned to update immediately, whereas participants in the variable cost condition, where the cost of the update was low on some days, showed no such learning effect; they consistently delayed or neglected to update. Thus, when the cost is consistently high, people behave more optimally (learn to update) than when the cost varies. The results from that experiment suggest regret theory as a possible explanation: cost variability could cause people to regret updating immediately at a high cost once they learn that they could have updated on another day for a lower cost. This would encourage further procrastination, keeping the status quo of "not updated" [74].
Limitations and future work
The Repeated Protective Decision (RPD) paradigm introduced in this paper emulates various features relevant to security update decision making, such as the probability of attack, the magnitude of loss, the protection guarantee after an update, and the cost of updating. The experiment presented here focused on the effect of three features hypothesized to influence security update decisions: experience, cost variability, and individual risk preference. In subsequent work, we specifically investigated the impact of cost variability [74]. However, this design can be extended to conduct follow-up experiments on other relevant aspects of update decisions. For example, future experiments could study the productivity-security trade-offs involved in update decisions by manipulating the cost of update and the magnitude of loss from attacks.
Participants for this experiment were recruited from Amazon MTurk, which may not be a representative sample. Compared to US demographics, MTurk participants tend to be younger, more educated, and more technologically savvy. This may have biased our results. Future work could validate the results from this experiment with a wider demographic, especially including older populations.
Results in this paper are from a simulated laboratory experiment. Therefore, the behavior observed in the experiment may not fully represent people's behavior in the real world. The parameters used to represent risk and cost are artificial and may not accurately represent reality. In reality, the probability of attack is likely to be even lower, and the effect of experience could therefore be stronger in the real world. Accurately quantifying the opportunity cost and the loss from attacks is often challenging. Therefore, for the purposes of the experiment, values for these parameters were carefully chosen in relation to the EV of the optimal decision, participant payoffs, and other experiment variables. The difference in EV between the optimal and sub-optimal choices in this experiment was small because previous research on decisions from experience has shown high sensitivity to small differences in EV. This could be a limitation, and future work could experiment with different EVs by manipulating the probability of attack and the magnitude of loss.
The abstract nature of the task is another limitation of the study. Unlike related work [20], we did not employ actual update messages, due to the concern that specific message features would confound our results. In this experiment, we simply notified participants that an update was available to protect their assets.
Results indicate that in several instances participants refrained from updating even when it was offered at zero cost. One explanation for this behavior is lack of attention: participants may have failed to notice that the update was available for zero cost on a specific day. We did not include explicit attention checks, which is a limitation. However, the results also indicate that participants with a strong preference for zero cost (as some in our sample had) tended to wait in each period for a zero-cost day. Further, this was an incentivized experiment, so we expect that lack of attention was not a dominant factor. Future work should identify ways to check participant attention and filter out low-quality responses.
Similarly, one could argue that the reluctance to update may be the result of the perceived effectiveness of the update. Previous studies on the certainty effect in vaccination decisions [75] have shown that a vaccination is more attractive if it eliminates the risk completely than if it merely reduces it, even if the ultimate reduction in risk is equivalent [60]. Future research could isolate this certainty effect from the effect of update cost by testing whether participants would be more inclined to install updates that eliminate the risk completely, even when the overall reduction in risk is equivalent.
Finally, the paradigm introduced in this paper could be extended to study how users choose between competing updates to a system where the expected loss is a function of multiple vulnerabilities. In the current design, participants did not receive feedback on attacks prevented by timely update decisions. Future experiments could test the effect of such feedback, particularly whether it would encourage people to continue updating even after instances in which they were breached despite choosing to update (Table 3). This design could also be extended to group experiments studying the influence of social learning and feedback on individual update decisions, for example, to identify what kind of social feedback is most influential in encouraging people to update consistently and in a timely manner.
Conflict of interest statement. The authors declare no conflict of interest.
Acknowledgements
This research was funded by the Army Research Laboratory under Cooperative Agreement W911NF‐13‐2‐0045 (ARL Cyber Security CRA) to Cleotilde Gonzalez. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The authors thank research assistants in the Dynamic Decision Making Laboratory, Carnegie Mellon University, for their help with data collection.
References
1. US-CERT. Top 30 targeted high risk vulnerabilities. US-CERT Alert (TA15-119A).
2. CCIRC. Top 4 strategies to mitigate targeted cyber intrusions. CCIRC.