Melanie Clegg, Reto Hofstetter, Emanuel de Bellis, Bernd H Schmitt, Unveiling the Mind of the Machine, Journal of Consumer Research, Volume 51, Issue 2, August 2024, Pages 342–361, https://doi.org/10.1093/jcr/ucad075
Abstract
Previous research has shown that consumers respond differently to decisions made by humans versus algorithms. Many tasks, however, are no longer performed by humans but entirely by algorithms. In fact, consumers increasingly encounter algorithm-controlled products, such as robotic vacuum cleaners or smart refrigerators, which are steered by different types of algorithms. Building on insights from computer science and consumer research on algorithm perception, this research investigates how consumers respond to different types of algorithms within these products. This research compares high-adaptivity algorithms, which can learn and adapt, with low-adaptivity algorithms, which are entirely pre-programmed, and explores their impact on consumers' product preferences. Six empirical studies show that, in general, consumers prefer products with high-adaptivity algorithms. However, this preference depends on the desired level of product outcome range—the number of solutions a product is expected to provide within a task or across tasks. The findings also demonstrate that perceived algorithm creativity and predictability drive the observed effects. This research highlights the distinctive role of algorithm types in the perception of consumer goods and reveals the consequences of unveiling the mind of the machine to consumers.
Algorithms are the backbone of the digital society. Strings of software code are at the core of artificial intelligence (AI) applications that boost innovative new technologies ranging from sophisticated chatbot applications like ChatGPT, medical systems, and voice assistants, to products that are connected to the internet of things. Importantly, algorithms also extend the functionality of traditional products. For example, robotic vacuum cleaners chart the most efficient cleaning routes; smart refrigerators suggest recipes based on users’ eating preferences; and smart locks automatically lock and unlock. While many of these products are still operated by means of tangible physical and mechanical components, it is the algorithms that control these physical components. Algorithms have thus become the “mind” of the machine.
Consequently, firms are beginning to unveil the algorithmic mind to consumers. Samsung, for example, advertises its smart fridge as using an “algorithm that adapts.” Nikon claims that its new camera model contains “an advanced algorithm developed using deep-learning technology.” Indeed, over 50% of the Fortune Global 500 firms that sell technological consumer products refer to algorithms or AI in their online presence (web appendix A). Likewise, major media outlets such as the New York Times, The Guardian, and the Wall Street Journal report on the algorithmic underpinnings of algorithm-controlled products, for instance, by writing about algorithms causing self-driving car accidents or generative AI applications that enable human-like conversational abilities of virtual assistants (Doshi-Velez and Kortz 2018; Hern 2022; Packin 2019; Roose 2023). Consumers are also increasingly concerned about how algorithms function and control products. This is exemplified on social media where consumers discuss the algorithms that control Ecobee thermostats, Roomba vacuum cleaners, the Siri voice assistant, and the Tesla autopilot (for further examples, see web appendix B).
In this research, we demonstrate that consumers are interested in how algorithms operate inside products and that the type of implemented algorithm can drive product preferences. Building on insights from computer science on algorithm types and consumer research on algorithm perception, we draw a consumer-relevant distinction between high-adaptivity and low-adaptivity algorithms, where adaptivity refers to the extent to which an algorithm can change its operations independently of a programmer. We show that high-adaptivity algorithms are perceived as more creative, but less predictable, than low-adaptivity algorithms, which influences product preferences. We propose that high adaptivity generally increases product preferences because creativity is a desirable trait for many algorithm-controlled products. However, when predictability is desired over creativity, consumers are less likely to prefer high adaptivity. This is the case when a product is expected to provide only a narrow range of outcomes, such as simply locking or unlocking a door with a smart door lock. We refer to this moderating product characteristic as product outcome range (POR), defined as the number of solutions a product is expected to provide within a task or across a number of tasks.
As shown in table 1, products use high-adaptivity and low-adaptivity algorithms for both wide and narrow POR. For example, a smart lock has a narrow POR because it is expected to conduct a task (closing or opening a door) with one specific outcome (lock or unlock); modern smart locks, however, include high-adaptivity algorithms that rely on machine learning by analyzing video and sensor data to infer whether to lock or unlock. By contrast, the apps that can be added to Alexa to fulfill multiple tasks and produce multiple solutions (e.g., play music, recommend a restaurant) can run on low-adaptivity “if-this-then-that” algorithms. Even though from a technical perspective high-adaptivity algorithms can control narrow POR products and low-adaptivity algorithms wide POR products, we find in six empirical studies that consumers generally favor high-adaptivity algorithms except for narrow POR products, where low-adaptivity algorithms are preferred. We thus find that algorithms are perceived not only as different from humans but also as different from each other.
|  | Narrow POR products and brands | Wide POR products and brands |
|---|---|---|
| High-adaptivity algorithm |  |  |
| Low-adaptivity algorithm |  |  |
Note.— See web appendix C for sources and further details of product classification procedure; IFTTT = if-this-then-that; POR = product outcome range.
The current work makes two key contributions to consumer research. First, we demonstrate that consumers are generally aware of algorithms in their products and that the type of implemented algorithm can drive their product preferences. Specifically, consumers prefer high-adaptivity algorithms in many products, despite the negative news on high-adaptivity algorithms—such as the publicity surrounding autonomous car crashes (Dickson 2018), and despite prior literature implying that consumers normally value algorithms for their high reliability and mechanical (i.e., machine-like) working (Castelo, Bos, and Lehmann 2019; Logg, Minson, and Moore 2019). Second, we reveal situations in which low-adaptivity algorithms are preferred, namely, in contexts with narrow POR. Overall, these insights extend prior research on algorithm perception and technology adoption (de Bellis, Johar, and Poletti 2023; Hoffman and Novak 2018; Puntoni et al. 2021), adding perception of algorithm type as an important concept to be studied in consumer research. For managers of technology firms, these findings highlight the important and distinctive role of algorithms and algorithm types in consumer goods.
CONCEPTUAL BACKGROUND
Low-adaptivity versus High-adaptivity Algorithms
Computer science defines an algorithm as a set of rules that transform an input to an output (Cormen et al. 2009). A key characteristic that distinguishes different types of algorithms is their degree of adaptivity, that is, the extent to which algorithms can change their operations (i.e., processing rules, codes, and parameters) independently of a programmer (Ghahramani 2015). Importantly, adaptivity is distinct from a system’s openness, where code is open source and updateable by anyone (typically human programmers), regardless of whether it is code for a pre-programmed or an adaptive algorithm.
Low-adaptivity algorithms are referred to as pre-programmed, hardwired, or hardcoded (Goodfellow, Bengio, and Courville 2016; Rangaswamy et al. 1989; Schmidhuber 2005, 2010). They include, for example, sequential algorithms following pre-programmed if–then decisions, such as a smart thermostat that follows the rule “If the temperature falls below 70 degrees, turn on the heater.” Such an algorithm can produce many different outcomes. In fact, with extensive if–then branching, a low-adaptivity algorithm can achieve a large variety of possible outcomes (Jordan and Mitchell 2015). For example, in the above case of the thermostat, there may be additional if–then rules, such as “If the temperature falls below 70 degrees and if it is a weekday between 8 AM and 4 PM, do not turn on the heater” or “If it is nighttime, do not turn on the heater.” Such rules can result in highly nuanced product outcomes, even though the algorithm has low adaptivity because it sticks to rules that it cannot change itself.
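The pre-programmed branching described above can be sketched in a few lines of code. The following is a toy illustration of our own (the function name, thresholds, and time windows are assumptions for exposition, not the control software of any actual thermostat); the key point is that every rule is fixed by the programmer and the algorithm cannot change them itself.

```python
from datetime import datetime

def thermostat_decision(temp_f: float, now: datetime) -> str:
    """Low-adaptivity (pre-programmed) control: fixed if-then rules
    that the algorithm cannot modify on its own."""
    weekday = now.weekday() < 5            # Monday through Friday
    work_hours = 8 <= now.hour < 16        # 8 AM to 4 PM
    nighttime = now.hour >= 22 or now.hour < 6

    if temp_f < 70:
        if weekday and work_hours:         # exception: weekday work hours
            return "heater off"
        if nighttime:                      # exception: nighttime
            return "heater off"
        return "heater on"                 # base rule: heat below 70 degrees
    return "heater off"
```

Even this short rule set yields nuanced behavior across times of day and days of the week, which illustrates how extensive if-then branching can produce a large variety of outcomes despite low adaptivity.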
High-adaptivity algorithms, on the other hand, learn on their own. Their rules are primarily determined by mathematical calculations carried out by the algorithm rather than predefined by a programmer (Ghahramani 2015; Syam and Sharma 2018). For example, artificial neural network algorithms process information through layers of connected artificial “neurons.” The connections between neurons are the parameters that determine data processing and are defined by mathematical approximations (LeCun, Bengio, and Hinton 2015). High-adaptivity algorithms can adapt vast numbers of parameters. Consider Parti, Google’s image generation model, which scales up to 20 billion parameters, or BERT, a language processing model with over 110 million trainable parameters (Devlin et al. 2018; Reviriego and Gómez-Merino 2022).
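The contrast with the pre-programmed case can be made concrete with a minimal toy example of our own (a single learned parameter fitted by gradient descent; the data and learning rate are illustrative assumptions). Here no programmer sets the decision rule; the parameter is determined by iterative mathematical updates on data.

```python
# High-adaptivity sketch: the parameter w is not hand-coded but fitted
# to data by gradient descent, i.e., by calculations the algorithm
# carries out itself.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0                        # arbitrary starting value; the algorithm adapts it
for _ in range(200):
    # gradient of the mean squared error 0.5 * (w*x - y)**2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= 0.1 * grad            # update rule applied by the algorithm itself

print(round(w, 3))             # w converges toward 2.0, learned from the data
```

Modern neural networks apply this same principle not to one parameter but to millions or billions of them at once.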
Despite the increasing use of high-adaptivity algorithms, consumers tend to be skeptical about them. Fatal accidents caused by self-driving cars or wrongful crime accusations based on facial recognition software frequently capture public attention (Dickson 2018; Harwell 2021), and generative AI applications are viewed with skepticism as sources of misinformation and deepfakes (Satariano and Mozur 2023). Moreover, high-adaptivity algorithms are often seen as unexplainable “black box” algorithms, because their rules are frequently neither understandable nor meaningful even to their programmers (Rai 2020). Indeed, lacking explainability is often discussed as a major adoption barrier for algorithms and AI-based systems and products (Davenport et al. 2020; Miller 2019). Other problems include algorithmic biases and discrimination against subgroups as well as privacy violations due to data-hungry machine learning models (Huang and Rust 2022; Lambrecht and Tucker 2019; Martin and Murphy 2017). Web appendix D provides an extensive discussion of high-adaptivity and low-adaptivity algorithms.
Marketing scholars have studied both algorithm types, but not in terms of how they are perceived by consumers. Low-adaptivity algorithms, especially pre-programmed expert systems, were discussed as tools to support managerial decision-making (Mitchell, Russo, and Wittink 1991; Rangaswamy et al. 1989). High-adaptivity algorithms were investigated as tools to personalize recommendations or website design, yet without consumers’ explicit knowledge about the system they are interacting with (Chung, Rust, and Wedel 2009; Chung, Wedel, and Rust 2016; Hauser et al. 2009). From a consumer perspective, algorithms have been studied as a unified concept, primarily by comparing them against humans.
Consumer Perceptions of Algorithms
A stream of research on algorithm perception has uncovered general preconceived beliefs consumers hold about algorithms, showing how these beliefs affect the acceptance of algorithms for decision-making (for a review, see Burton, Stein, and Jensen 2020). This research has examined algorithms as supporting tools in statistical and forecasting tasks and studied reactions to governmental, legal, health, or hiring decisions based on algorithms (Cadario, Longoni, and Morewedge 2021; Dietvorst, Simmons, and Massey 2015, 2016; Garvey, Kim, and Duhachek 2023; Logg et al. 2019; Longoni, Cian, and Kyung 2023; Yalcin et al. 2022a, 2022b). Other research has focused on non-tangible, algorithm-controlled recommendation services such as online dating, financial, and medical advice, as well as joke recommendation services (Castelo et al. 2019; Longoni, Bonezzi, and Morewedge 2019; Longoni and Cian 2022; Yeomans et al. 2019).
We derive a range of key insights from these studies. First, consumers believe that algorithms behave logically, analytically, objectively, and rationally, and are unlikely to deviate from existing work patterns (Haslam 2006). Second, because they perceive algorithms as acting logically and analytically, consumers appreciate algorithmic advice more than human advice and see algorithms as particularly skilled in mathematical, statistical forecasting, and prediction tasks (Logg et al. 2019). Third, people prefer algorithms over human decision-makers for the performance of objectively quantifiable (vs. subjective) tasks (Castelo et al. 2019), and doubt that algorithms can adequately account for human uniqueness and act creatively (Haslam 2006). Finally, individuals quickly lose trust in an algorithm when it provides unexpected or erroneous results, especially when they cannot modify the algorithm’s output (Dietvorst et al. 2015, 2016; Longoni et al. 2023). Therefore, algorithmic systems are often designed with the intention of permitting only a limited degree of adaptivity (Davenport et al. 2020; Huang and Rust 2018).
However, this prior research did not study consumer reactions to algorithms as parts of products, and it did not differentiate between low- and high-adaptivity algorithms. While this work yields valuable insights for several decision-making contexts, our research explores situations where consumers know the product is controlled by a high-adaptivity or low-adaptivity algorithm. We expect consumer perceptions of different algorithm types to be more nuanced than when algorithms are compared as a unified concept against humans, as we discuss next.
Consumer Perceptions of Adaptivity in Algorithm-controlled Products
Consumers tend to draw inferences about product characteristics and the “mind” of a machine based on the information they have. Since humans tend to attribute minds to other entities and anthropomorphize technologies and products (Aggarwal and McGill 2007; Epley 2018), algorithms may be viewed as the internal operators of products, that is, as steering minds whose skills, intentions, and capabilities shape how the product is perceived.
We argue that the level of adaptivity of an algorithm can be related to its perceived “creativity”—one of the core capabilities that people believe distinguishes humans from machines. As shown in the above review, consumers normally doubt the ability of algorithms to be innovative, to reveal novel solutions, or to act with flexibility, all of which are dimensions of creativity (Mehta and Dahl 2019).
However, we propose that the belief that algorithms are only capable of acting analytically and not creatively is challenged when consumers encounter high-adaptivity algorithms. In humans, adaptivity is enabled by flexibility and agency and is thus closely related to creative skills and personality (Lucas and Nordgren 2020; Runco 2004). While more rule-based and rational actions are already seen as signs of intelligence, high adaptivity is perceived as a distinct indicator of creativity (Cropley 2006; Hirschman 1980; Katz and Giacommelli 1982). Therefore, adaptivity can induce attributions of creative skills in multiple contexts, including school, education, and work (Jaussi and Dionne 2003; Proudfoot, Kay, and Koval 2015; Runco 2004). Following mind perception theory, it is plausible that an algorithm’s adaptivity will increase perceptions of creativity, just as creative skills are attributed to more adaptive humans. Accordingly, we expect that creative skills are attributed to high-adaptivity rather than low-adaptivity algorithms. Notably, this perception does not necessarily reflect the actual abilities of an algorithm, since low-adaptivity algorithms are generally capable of generating varying outcomes.
We argue further that perceived creativity is a highly desirable product characteristic, as consumers increasingly assign novel roles to smart, connected, and autonomous products (Novak and Hoffman 2023). Consumers expect voice assistants to provide creative answers to the same user requests, smart fridges to propose different dishes based on available ingredients, and recommendation agents to provide different suggestions for the same user (Hoffman and Novak 2018; Leung, Paolacci, and Puntoni 2018; Raff, Wentzel, and Obwegeser 2020). Accordingly, a high-adaptivity algorithm implemented in an algorithm-controlled product can counter one of the key weaknesses of algorithms perceived by consumers: lacking creativity. Our core assumption is, therefore, that high-adaptivity algorithms generally increase product preferences because they are perceived to be more creative. Formally:
H1a: Consumers’ preferences for algorithm-controlled products are affected by the type of implemented algorithm. Specifically, high-adaptivity (vs. low-adaptivity) algorithms increase product preferences for algorithm-controlled products.
H1b: The positive effect of high-adaptivity (vs. low-adaptivity) algorithms on product preferences is mediated by perceived algorithm creativity.
When Low Adaptivity Is Preferred: POR as Moderator
Although we expect that algorithm adaptivity is generally seen as beneficial, there are situations in which low adaptivity may be valued more. Consider a smart lock. People expect a smart lock to reliably lock or unlock—there are only two possible desired outcomes. Perceptions of algorithm creativity should reduce product preferences for such a product because a deviation from the requested solution would reduce the functionality of the product. This type of product is different from products for which consumers expect a wide range of different outcomes (like a voice assistant that is expected to provide varying answers to a standard request such as “tell me a joke”).
We refer to this distinguishing product feature as product outcome range (POR). POR is defined as the number of solutions a product is expected to provide within a task or across a number of tasks. By solution we mean the result a consumer gets in response to a task (something that the product is requested to do). Per this definition, POR can refer to both solutions provided within a single task (a voice assistant can crack multiple jokes when tasked to tell a joke) and solutions across multiple tasks (a voice assistant can provide solutions to requests regarding weather forecast, shopping recommendations, or telling jokes).
POR is different from constructs such as product variation (i.e., products sharing core characteristics, yet have slight modifications; Normann 1971), product variability (i.e., products from the same brand with differing quality levels; Gürhan-Canli 2003), or the distinction between search and experience goods (i.e., the type of information consumers rely on before purchase and consumption; Nelson 1970). POR is also distinct from algorithm adaptivity, because adaptivity refers to an algorithm’s ability to change its rules. POR, in contrast, refers to the range of outcomes consumers expect from a product.
We propose that POR moderates the effect of algorithm type on product preferences. Specifically, the creativity of high-adaptivity algorithms should be perceived positively when many different solutions are possible and appropriate (i.e., wide POR). At the same time, however, because high-adaptivity algorithms can change their rules without the involvement of a programmer, they should be seen as less predictable.
Predictability describes the capacity to foresee a technology’s results with a high degree of certainty (Hoff and Bashir 2015). Predictability is generally a positive signal because it makes people feel certain and in control of their environment (LeBoeuf and Norton 2012; Webster and Kruglanski 1994). Therefore, predictability has been seen as one of the most important drivers of trust in technology (Hancock et al. 2011; Lee and See 2004; Muir and Moray 1996). By contrast, lacking predictability is a major obstacle to the adoption of technologies, for instance, when it evokes perceived loss of control or disempowerment in autonomous products (de Bellis and Johar 2020; Rijsdijk and Hultink 2003; Schweitzer and Van den Hende 2016). Accordingly, the lacking perceived predictability of high-adaptivity algorithms may reduce consumer preferences for algorithm-controlled products.
We propose that predictability may take precedence over creativity for products with narrow POR. Predictability is perceived as particularly beneficial when an activity is highly routinized or when consumers desire a very specific outcome, which is the case in narrow POR situations (Stevenson and Moldoveanu 1995; Surprenant and Solomon 1987). Thus, in our framework (figure 1), POR is the most critical product characteristic that determines whether the mechanism via creativity or predictability is more dominant. Note that a premise of this framework is the reasonable assumption that consumers expect a basic level of functionality from products (i.e., consumers assume that products will not function randomly or erroneously). We grant that the positive effect of high-adaptivity algorithms for wide POR products may be weakened if consumers fear a product lacks these basic functionalities. This should be particularly problematic for very innovative or risky products, like autonomous cars, which are frequently discussed in the media in terms of fatal accidents (Dickson 2018). Increasing predictability, for instance, by making the actions of a product more explainable to consumers, may thus be important for wide POR products as well, especially if they are seen as very risky (Shariff, Bonnefon, and Rahwan 2017). Formally, we propose the following hypotheses regarding the moderating role of POR:
H2a: POR moderates the effect of algorithm type on product preferences such that high-adaptivity algorithms decrease preferences for products with narrow (vs. wide) POR.
H2b: The negative effect of high-adaptivity (vs. low-adaptivity) algorithms on product preferences for narrow POR is mediated by perceived algorithm predictability.

OVERVIEW OF STUDIES
Our empirical investigation consists of six studies presented in two parts. The first part focuses on the positive effect of high-adaptivity algorithms (hypotheses 1A and 1B). Using in-depth qualitative interviews, study 1 shows that consumers are well aware of different algorithm types in products and that they value high-adaptivity algorithms for many products. Study 2 experimentally tests the positive main effect of high- versus low-adaptivity algorithms in an incentive-compatible consumption context, revealing that consumers are more likely to use a cooking recipe generation app if the app uses a high-adaptivity algorithm. Study 3, conducted in a controlled lab setting, shows for a voice assistant that this effect is mediated by perceived creativity. Note that study 2 uses a product with a wide POR within a task (generating recipes), whereas study 3 uses a product with wide POR across tasks (a voice assistant).
In the second empirical part, we turn to the negative effect of high-adaptivity algorithms (hypotheses 2A and 2B). Study 4 provides correlational evidence, across a host of different algorithm-controlled products, that preferences for algorithm types vary with POR. Study 5 provides experimental evidence of this moderating role of POR in the context of a cooking app. We varied POR within the tasks of the same product by asking for one specific recipe versus multiple variations of recipes for a dish. Finally, study 6 shows that explainability can compensate for the low perceived predictability of high-adaptivity algorithms in autonomous cars.
In our experimental manipulations, we operationalize algorithm types as the extremes of high-adaptivity and low-adaptivity algorithms (referring to both “adaptive/high-adaptivity algorithms” vs. “pre-programmed/low-adaptivity algorithms”; web appendix E). Results are robust across manipulations, study designs (qualitative, experimental, correlational), product types (e.g., recipe applications, voice assistants, autonomous driving systems), and preference measures (consequential or hypothetical). Table 2 provides an overview of studies and main results; web appendices E to K contain additional study details and analyses. All study materials are accessible via the Open Science Framework (https://osf.io/v82f3/?view_only=0d3334a4b0bc4c8386d677596e0be8a4).
| Study | Experimental group | N | Product preference | Perceived algorithm creativity | Perceived algorithm predictability | Main finding |
|---|---|---|---|---|---|---|
| Part I: Studies testing the positive effect of high-adaptivity algorithms (H1a, H1b) | | | | | | |
| Study 1 | Qualitative interviews | 15 | | | | Consumers are aware of algorithm types and value high adaptivity |
| Study 2 | High adaptivity | 98 | 66.33** | | | A high-adaptivity algorithm increases usage of a recipe generator (% of participants; H1a) |
| | Low adaptivity | 108 | 42.59 | | | |
| Study 3 | High adaptivity | 104 | 4.70* | 3.99*** | 4.84 | Perceived algorithm creativity fully mediates the effect of algorithm type on intention to use a voice assistant (H1a, H1b) |
| | Low adaptivity | 103 | 4.05 | 2.70 | 5.83*** | |
| Part II: Studies testing the negative effect of high-adaptivity algorithms for narrow POR (H2a, H2b) | | | | | | |
| Study 4 | Survey data (correlational) | 200 | | | | Preference for high-adaptivity algorithms increases with wider POR and decreases with narrower POR (r = 0.77***) |
| Study 5 | High adaptivity, wide POR | 159 | 5.14*** | 5.15*** | 4.36 | POR moderates the effect of algorithm type on intention to use a cooking app (H1a, H1b, H2a, H2b) |
| | Low adaptivity, wide POR | 150 | 4.24 | 3.43 | 5.26*** | |
| | High adaptivity, narrow POR | 141 | 4.35 | 4.66*** | 4.74 | |
| | Low adaptivity, narrow POR | 155 | 5.14*** | 3.31 | 5.66*** | |
| Study 6 | High adaptivity, high explainability | 111 | 5.57* | 4.71** | 4.95 | Explainability moderates the effect of algorithm type on predictability and intention to purchase a driving assistance system (H2b) |
| | Low adaptivity, high explainability | 118 | 4.42 | 4.00 | 5.25 | |
| | High adaptivity, low explainability | 121 | 3.96 | 4.40*** | 4.18 | |
| | Low adaptivity, low explainability | 127 | 4.54 | 3.30 | 5.20*** | |
| Total | | 1,710 | | | | |
Note.— * p < .05, ** p < .01, *** p < .001; POR = product outcome range; statistical tests compare high-adaptivity (1) versus low-adaptivity (0) algorithms; for group comparisons in study 2, we used a chi-square test; in all other studies, we used ANOVAs.
PART I—STUDY 1: CONSUMER RELEVANCE OF ALGORITHM TYPES AND POSITIVE EFFECT OF HIGH ADAPTIVITY
The first study consists of qualitative consumer interviews, designed to understand consumer awareness of algorithms in products and their knowledge of algorithm types.
Method
We invited 15 US consumers from Prolific to take part in interviews about technology products. Since the interviews were designed to get an in-depth perspective on consumer perceptions of this topic, each interview lasted between 24 and 57 minutes (Mduration = 35:45). The interviews were conducted and recorded online via Zoom, and all interviewees reported being in private places.
All interviews followed a semi-structured interview guide (Arsel 2017). First, participants were asked how different technology products (thermostat, fridge, automated solar shades, electric toothbrush, food processor, parking assistant system, and participants’ own product examples) have developed and will further develop. In this first part, the interviewing person did not refer to algorithms or AI to capture consumers’ spontaneous awareness of these concepts. In the second part, participants were asked about their observations of how algorithms have changed in the past and how they will change in the future. The interviews were transcribed using a professional transcript service (GoTranscript). Web appendix F provides details on the interview guide, participants’ demographic data, and quotes for the discussed main topics.
Results
Participants had no difficulty elaborating on past and future developments of technology products. All of them mentioned technological developments of products they had heard of or experienced in recent years and could speculate about potential developments in the coming years. Overall, we found consumers (1) to be highly aware of the importance of algorithms and software in technology products and (2) to perceive adaptivity as a key distinguishing factor both in algorithms and in products.
Consumer Awareness of Algorithms in Products
Consumers perceive algorithms as critical to the functioning and development of technology products. Although the interviewers were strictly advised not to mention algorithms or related concepts, 13 of the 15 participants referred to algorithms or related terms (e.g., “software,” “programming,” “computer,” or “artificial intelligence”) of their own volition. For instance, when asked about electric toothbrushes, a 34-year-old psychology grad student answered promptly: “Oh coding […], I imagine they have […] more electrical software in it, to be more precise in how it functions […].” A 26-year-old lab worker referred to modern fridges that “use a lot of programming or maybe algorithms,” and a 27-year-old teacher’s assistant said that a solar shade now “probably has built-in software.”
Moreover, without specifically being asked, respondents referred to different programming approaches. For instance, a 48-year-old sales manager mentioned that for food processors, “the pre-programming that’s already built into the machines is the big change.” Similarly, a 49-year-old constructor described the manual coding of toothbrushes as follows: “I think you’d program in all the different dental conditions that are out there and then an individual would have his or her own profile that would be matched daily as you use the brush.” Other participants referred directly to AI or learning software, like a 44-year-old artist elaborating on smart homes that “would know what all your preferences are” and that “as artificial intelligence grows, it’ll be able to pick up on nuances of people.”
Consumer Perceptions of Adaptivity in Products and Algorithms
Another core topic discussed by all 15 participants was the increasing capability of products to adapt to their users and to surrounding conditions. Related concepts mentioned include learning, customization, sensory awareness, and the anticipation/prediction of user needs. For example, a 31-year-old minister and stay-at-home mom described a thermostat that “pays attention to what you’re doing, and […] learns your preferences,” and a 58-year-old consultant imagined a food processor that “would adapt to a particular food, and […] just sense what it is.” Similarly, a 45-year-old health aid reported her impression that “the technology is learning about us, learning our behaviors […]” by “analyzing our patterns.”
Perceptions of Differences in Algorithms
The second interview part revealed that learning abilities are also key distinguishing factors that consumers see in algorithms. In fact, consumers described both algorithm types. The previously mentioned lab worker, for instance, juxtaposed an algorithm that depended on self-learning with an externally programmed algorithm: “I think, the way it [a product] does that is normally asked to be programmed to do so. In other cases, it has to have some way to learn about the user itself […].” Similarly, the 48-year-old sales manager who used to work in the automotive sector first described low-adaptivity algorithms this way: “The stuff that I’ve dealt with, it is pre-programmed by the engineers […]. Everything is pre-set, tested, and validated to ensure that it works appropriately.” When asked how this had changed, she referred to high-adaptivity algorithms, describing that an algorithm “would be able to take in the information and adapt based on a feedback loop, so it can learn and make adjustments from the information that it’s getting.” Overall, participants described algorithms both in terms of a more traditional understanding, using phrases like “hard-coded,” “pre-programmed,” and “manually programmed,” but also enhanced learning capacities of algorithms, describing them as “flexible,” “dynamic,” “complex,” or “learning,” “adaptable,” and “agile.”
Discussion
Consumers have concrete beliefs about the basic differences between the algorithm types operating in products. These beliefs encompass perceptions of algorithms both as intelligent, self-learning entities and as mechanical executors of human-coded instructions. Importantly, consumers see both algorithm types as capable of steering products. Nevertheless, adaptivity is seen as a largely positive core component of technological development in both algorithms and products. Evidently, consumers do not perceive algorithms merely as rational decision-making and calculation tools; they also see algorithms in products as steering entities with the potential to learn and adapt their behaviors. This insight is an important complement to prior research focusing on algorithms in intangible applications and comparing them to humans.
PART I—STUDY 2: HIGH-ADAPTIVITY ALGORITHM INCREASES USAGE OF COOKING RECIPE GENERATOR
Study 2 experimentally tests the main effect of high- versus low-adaptivity algorithms on product preferences. Specifically, we test whether algorithm type influences actual usage of a cooking recipe generator, even when it comes with financial consequences for the consumer.
Method
We recruited 206 participants (Mage = 28.07, SD = 7.70; 47.09% female) via Prolific and assigned them to one of two conditions: low-adaptivity versus high-adaptivity algorithm. We pre-screened for participants from German-speaking Europe, as the recipe generator is currently available only in German. Participants took part in a recipe writing contest in which they could win a bonus payment in addition to their participation payment. They could use a recipe generator app as support; however, doing so reduced their bonus payment by 50%. This recipe generator is able to generate recipes from scratch based on typed-in user ingredients and thus resembles current generative AI applications.
First, participants received information about the task, which involved generating a creative cooking recipe with two prescribed ingredients: banana and cream cheese. In a pretest (N = 52), 92.31% of participants considered this a task with “many different possible solutions” versus “only a few possible solutions” (web appendix G). They learned that the creators of the five best-rated recipes (rated by an independent jury) would win a bonus payment worth USD 5. They were also told that they could use a recipe generator for help, but if they did so and their recipe won, their bonus payment would be reduced to USD 2.50. Before making this decision, they received a description of the recipe generator with the experimental manipulation of algorithm type. The low-adaptivity algorithm was described as an algorithm that uses rules that “do not adapt during usage to new input and data” but follows “rules precoded by its programmer.” In the high-adaptivity condition, the algorithm was described as using a set of rules that “continuously adapt during usage to new input and data” and that “learns the rules independently of its programmer” (figure 2). In this and all following studies, we did not specify which data the high-adaptivity algorithm used. Participants then described the algorithm to make sure they read and understood the information.

LOW-ADAPTIVITY AND HIGH-ADAPTIVITY ALGORITHM DESCRIPTION USED IN STUDY 2
Note.— Original stimuli were shown in German.
Next, participants chose whether they wanted to use the recipe generator in the contest or work on their own. Participants who chose to use the recipe generator were forwarded to its website (available via a public URL in their browser). They could interact with the recipe generator for as long as they wanted and then return to the questionnaire, to the page where they submitted their recipes. Participants who decided to work alone were sent directly to this page.
Next, participants completed a post-questionnaire. Participants who used the recipe generator were asked to indicate their satisfaction with the generator’s recipes and their overall impression of the generator. All participants indicated their personal identification with cooking, their preferences for creative recipes, and the importance of attributing the recipe task to themselves. Finally, participants answered additional check questions, including an attention check (“How was the algorithm described in this survey?”; “As an adaptive algorithm,” “As a pre-programmed algorithm,” “I don’t know”), and provided their demographic data (all measures are reported in web appendix G). Throughout the studies, we report intention-to-treat results of all participants who completed the studies, irrespective of whether they failed the attention check (results are comparable when excluding those who failed the checks; all attention checks are reported in the respective web appendices of the studies).
Results and Discussion
A chi-square test revealed a positive main effect of the high-adaptivity (vs. low-adaptivity) algorithm on the choice to use the recipe generator: Significantly more participants chose to use the recipe generator in the high-adaptivity condition (66.33%) than in the low-adaptivity condition (42.59%; χ2(1, N = 206) = 11.65, p = .001, Cramer’s V = 0.24). As expected, the high-adaptivity algorithm increased product usage, even though using the generator would have halved participants’ bonus payment had they won the contest. These results are robust when controlling for personal identification with cooking, individual preferences for creative recipes, and the importance of attributing the cooking task to oneself (b = 1.03, p = .002 in a logistic regression; web appendix G). These findings show how much consumers value adaptivity in generative algorithmic applications, a finding with important implications for emerging generative AI applications.
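The reported test statistic can be reconstructed from the percentages alone: 66.33% of 98 and 42.59% of 108 participants correspond to 65 and 46 generator users, respectively. The following sketch (not the authors' code) recomputes the uncorrected Pearson chi-square and Cramer's V by hand:

```python
import math

# Observed 2x2 table: rows = condition, columns = (used generator, did not)
obs = [[65, 33],   # high adaptivity: 65/98  = 66.33%
       [46, 62]]   # low adaptivity:  46/108 = 42.59%

n = sum(sum(row) for row in obs)  # 206
row_tot = [sum(row) for row in obs]
col_tot = [obs[0][c] + obs[1][c] for c in range(2)]

# Pearson chi-square without continuity correction:
# sum over cells of (observed - expected)^2 / expected
chi2 = sum((obs[r][c] - row_tot[r] * col_tot[c] / n) ** 2
           / (row_tot[r] * col_tot[c] / n)
           for r in range(2) for c in range(2))

# For a 2x2 table, Cramer's V reduces to sqrt(chi2 / N)
cramers_v = math.sqrt(chi2 / n)
```

Both values match the reported χ2(1, N = 206) = 11.65 and V = 0.24 up to rounding, confirming that no Yates continuity correction was applied.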
PART I—STUDY 3: PERCEIVED CREATIVITY MEDIATES THE POSITIVE EFFECT OF HIGH-ADAPTIVITY ALGORITHMS
Study 3 tests whether the positive effect of high-adaptivity algorithms on product preferences can be explained by perceived creativity. The focal product is a voice assistant, a product with a wide POR across different tasks.
Method
We invited 207 students from Columbia University (Mage = 24.56, SD = 7.73; 56.52% female) to the behavioral lab and assigned them to one of two conditions in a between-subjects experiment: low versus high adaptivity. First, participants received the descriptions of algorithms. The low-adaptivity algorithm was described as the extreme of a pre-programmed algorithm that was “manually programmed with instructions,” and able to adapt to external requirements only via “orders pre-coded by a programmer.” The high-adaptivity algorithm was described as an adaptive algorithm being “able to learn on its own” and that “adapts itself.” As part of the manipulation, we also provided information and graphical illustrations of a prototypical example of each algorithm type (i.e., a conditional if–then flowchart algorithm for the low-adaptivity, and an artificial neural network for the high-adaptivity algorithm). Similar descriptions were used in studies 5 and 6 (for details, see web appendix E).
Next, participants answered a series of questions about how they perceived the algorithm in randomized order, including questions regarding perceived algorithm creativity (four items adapted from Rosengren, Dahlén, and Modig 2013: “I feel this algorithm is creative,” “The creativity level of this algorithm is high,” “This algorithm can be creative,” “I see this algorithm as creative”; α = 0.91), and perceived algorithm predictability (one item: “How predictable is the outcome this algorithm will produce?”; 1 = not at all true/not predictable at all, 7 = very true/very predictable). For exploratory purposes and robustness analyses, we included additional measures for algorithm perceptions in the survey.
Then, participants were introduced to the product, a fictional voice assistant called “SmartVoice,” described as a device that “answers questions, plays music, makes recommendations, and performs actions by delegating requests to a set of internet services.” The product description further included information about the algorithm of the SmartVoice (depending on the condition of either low-adaptivity or high-adaptivity) as well as visual illustrations of the SmartVoice product and the respective algorithm type. We captured usage intention for the voice assistant on a seven-point scale (two items adapted from Venkatesh and Davis 2000: “Assuming I have access to the SmartVoice, I intend to use it,” “Given that I have access to the SmartVoice, I predict that I would use it”; 1 = not at all true, 7 = very true; α = 0.94). After this main task, participants completed a short follow-up questionnaire, an attention check, and provided their demographic data (see web appendix H for all items and study details).
Results
One-way ANOVAs revealed a positive effect of the high-adaptivity versus low-adaptivity algorithm on usage intention (Mhigh = 4.70, SD = 1.86, Mlow = 4.05, SD = 1.80; F(1, 205) = 6.50, p = .012, ηp2 = 0.03) and perceived algorithm creativity (Mhigh = 3.99, SD = 1.53, Mlow = 2.70, SD = 1.21; F(1, 205) = 45.52, p < .001, ηp2 = 0.18), and a negative main effect on perceived algorithm predictability (Mhigh = 4.84, SD = 1.51, Mlow = 5.83, SD = 1.55; F(1, 205) = 21.65, p < .001, ηp2 = 0.10).
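As a consistency check, the reported partial eta-squared values follow directly from each F statistic and its degrees of freedom via ηp² = F·df1 / (F·df1 + df2); a minimal sketch:

```python
def partial_eta_squared(F, df1, df2):
    """Recover partial eta-squared from an F statistic and its dfs."""
    return F * df1 / (F * df1 + df2)

# Reported F(1, 205) values from study 3
for label, F in [("usage intention", 6.50),
                 ("creativity", 45.52),
                 ("predictability", 21.65)]:
    print(f"{label}: eta_p^2 = {partial_eta_squared(F, 1, 205):.2f}")
```

The three results round to 0.03, 0.18, and 0.10, matching the values reported in the text.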
To test the underlying role of perceived predictability and perceived creativity of the algorithm, we ran a parallel mediation analysis. We introduced algorithm type (1 = high-adaptivity, 0 = low-adaptivity) as the independent variable, perceived creativity and predictability as mediators, and usage intention as the dependent variable. For this and all further mediation analyses, we used the PROCESS macro (Hayes 2013) with 10,000 resamples for bootstrapped estimates of the indirect effect. The analysis (model 4; Hayes 2013) showed that perceived creativity significantly mediated the effect of algorithm type on usage intention (indirect effect: b = 0.61, SE = 0.15, CI95% = [0.35; 0.92]), but predictability did not (indirect effect: b = −0.02, SE = 0.08, CI95% = [−0.19; 0.14]). Including the mediators into the model produced a non-significant direct effect (c′ = 0.06, SE = 0.27, p = .828) as opposed to a significant total effect (c = 0.65, SE = 0.25, p = .012), indicating full mediation.
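The PROCESS model 4 analysis amounts to two a-path regressions, one b-path regression, and bootstrapped products of coefficients. The numpy-only sketch below illustrates this logic on simulated data (not the study data; means and effect sizes are chosen merely to echo the reported cells):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 207

# Simulated data shaped like study 3
X = np.repeat([1.0, 0.0], [104, 103])                     # 1 = high adaptivity
M1 = 2.70 + 1.29 * X + rng.normal(0, 1.4, n)              # perceived creativity
M2 = 5.83 - 0.99 * X + rng.normal(0, 1.5, n)              # perceived predictability
Y = 2.0 + 0.47 * M1 + 0.02 * M2 + rng.normal(0, 1.5, n)   # usage intention

def ols(y, *xs):
    """OLS coefficients, intercept first."""
    Xm = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(Xm, y, rcond=None)[0]

def indirect(idx):
    a1 = ols(M1[idx], X[idx])[1]               # a-path: X -> creativity
    a2 = ols(M2[idx], X[idx])[1]               # a-path: X -> predictability
    b = ols(Y[idx], X[idx], M1[idx], M2[idx])  # b-paths plus direct effect c'
    return a1 * b[2], a2 * b[3]                # a1*b1, a2*b2

# Percentile bootstrap of the two indirect effects (5,000 resamples here)
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
ci_creativity = np.percentile(boot[:, 0], [2.5, 97.5])
ci_predictability = np.percentile(boot[:, 1], [2.5, 97.5])
```

An indirect effect is deemed significant when its bootstrapped confidence interval excludes zero; with the simulated effects above, the creativity interval excludes zero while the predictability interval spans it, mirroring the pattern of results.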
In line with our hypotheses 1a and 1b, we find that the high-adaptivity algorithm was perceived as more creative, driving consumers’ intention to use the voice assistant. By contrast, the mediation via perceived predictability was not significant, in line with our theorizing that voice assistants are products with a wide POR. Indeed, only perceived creativity, but not predictability, correlated positively with usage intention (coefficients bcreativity = 0.47, p < .001; bpredictability = 0.02, p = .789), suggesting that creativity is particularly important for the adoption of such algorithm-controlled products. Moreover, predictability does not seem to be a necessary precondition for tolerating an algorithm’s creativity, as a serial mediation via predictability and creativity was not significant (indirect effect: b = 0.05, SE = 0.04, CI95% = [−0.02; 0.14]). This is consistent with our basic assumption that consumers expect a product to perform its fundamental functions.
A series of robustness checks underscores the importance of the creativity construct as implied by our conceptualization. Parallel mediation models including alternative mediators such as trust or expected performance revealed no significant indirect effects other than via creativity; across all alternative models, perceived algorithm creativity consistently emerged as the most important mediator driving the effect. Even though concepts related to creativity (originality, flexibility, and innovativeness) mediate the effect in single-mediator models (as they are highly related to creativity), the established creativity scale we used accounted for the variance in parallel mediation analyses. These robustness checks are reported in web appendix H.
Discussion
This study implies that the positive effect of high-adaptivity algorithms is mediated by perceived creativity. Together, the initial studies provide robust evidence that algorithm types drive product preferences and that adaptivity is a desirable characteristic for algorithm-controlled products. In the next set of studies (part II of the empirical section), we turn to the potential harmful effects of algorithm adaptivity.
PART II—STUDY 4: PREFERENCE FOR ALGORITHM TYPES VARIES DEPENDING ON POR
Study 4 uses a correlational design to gain a better understanding of consumer preferences for algorithm-controlled products along the full scale of POR.
Method
We compiled a list of 55 algorithm-controlled products and asked two independent samples of respondents to rate them. Participants from the first sample (survey 1; N = 100 participants from Amazon Mechanical Turk; Mage = 40.98, SD = 12.94; 40% female) were informed about both high-adaptivity and low-adaptivity algorithm types in randomized order (descriptions in web appendix E). Then, they indicated their preference for a high-adaptivity versus low-adaptivity algorithm for each product on the list by choosing one algorithm (“Please decide for each product which algorithm type is more appropriate for the particular product—an algorithm with a high or with a low adaptivity”). A second sample (survey 2; N = 100 participants from Amazon Mechanical Turk; Mage = 37.52, SD = 9.62; 39% female) rated the POR of each of the 55 products on a seven-point scale (“Please indicate for each product how you would rate it in terms of its outcome range”; 1 = product with narrow outcome range, 7 = product with wide outcome range). Participants were told that products can vary in the number of possible outcomes (i.e., solutions) they can and should provide when fulfilling their tasks (i.e., either many different solutions/wide POR, or only a limited number of different solutions/narrow POR). For statistical analyses, we merged algorithm preferences from survey 1 (i.e., the proportion of participants choosing the high-adaptivity algorithm over the low-adaptivity algorithm) and survey 2 (i.e., average POR ratings) for each product. Web appendix I provides additional survey details.
Results and Discussion
POR ratings correlated positively with preferences for algorithm type. Specifically, more people wanted high-adaptivity algorithms in their products when these products were of wide POR (r = 0.77, p < .001). For example, fewer participants preferred a high-adaptivity versus low-adaptivity algorithm for bicycle locks, smoke detectors, or coffee machines, and vice versa for surgical robots, voice assistants, or recommendation agents (figure 3 and web appendix I for product list and product-specific results).
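The product-level analysis merges the per-product preference share from survey 1 with the mean POR rating from survey 2 and correlates the two. A toy sketch with illustrative numbers (not the actual survey ratings; product names follow the examples in the text):

```python
import numpy as np

# Illustrative per-product values, one entry per product
products = ["bicycle lock", "smoke detector", "coffee machine",
            "surgical robot", "voice assistant", "recommendation agent"]
pref_high = np.array([0.15, 0.20, 0.30, 0.70, 0.85, 0.90])  # survey 1: share choosing high adaptivity
mean_por = np.array([1.5, 1.8, 2.6, 5.2, 6.1, 6.4])          # survey 2: mean POR rating (1-7)

# Pearson correlation between POR and preference for high adaptivity
r = np.corrcoef(mean_por, pref_high)[0, 1]
```

With the paper's 55 products, the same computation yields the reported r = 0.77; the toy values above simply illustrate the positive association.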

CORRELATION OF PRODUCT OUTCOME RANGE AND PROPORTION OF PARTICIPANTS FAVORING A HIGH-ADAPTIVITY ALGORITHM
Note.— The grey area represents the standard error; r = 0.77, p < .001.
PART II—STUDY 5: POR MODERATES THE EFFECT OF ALGORITHM TYPES
Study 5 tests the moderating role of POR in the effect of algorithm type on usage intention for a cooking app. Importantly, we manipulated POR by holding the product constant and varying only the outcome range of the product’s task.
Method
We recruited 605 participants from Prolific (Mage = 37.71 years, SD = 13.4; 49.8% female). Participants were assigned to one of four conditions in a 2 (algorithm type: high-adaptivity vs. low-adaptivity) × 2 (POR: wide vs. narrow) between-subjects experiment.
Participants were asked to imagine they wanted to bake a cake for friends, guided by a cooking app. The description of task requirements was used to manipulate POR: In the wide POR condition, participants were told to imagine their task would be to bake a “variation of cheesecake,” as the friends loved “many different cheesecakes,” and that accordingly, “there is a large number of possible solutions (i.e., variations of cheesecakes) for this task.” In the narrow POR condition, their task was to bake “the regular cheesecake,” as the friends loved “only this specific cheesecake,” and that accordingly, “there is only one possible solution (i.e., the regular cheesecake) for this task.” Next, participants were told what kind of algorithm the cooking app entailed (similar to study 3 but replacing “adaptive” with “high-adaptivity algorithm” and “pre-programmed” with “low-adaptivity algorithm”; web appendix E). After having received the algorithm and task description, participants were asked to describe the task and cooking app in their own words.
Next, participants responded to the perceived algorithm creativity and predictability scale items (same measures as in study 3; αcreativity = 0.97). Then, they indicated their intention to use the cooking app for the specified baking task on a seven-point scale (“How likely would you be to use the cooking app, with an algorithm with high adaptivity (low adaptivity) in this specific situation, i.e., if you want to bake the regular cheesecake (a variation of cheesecake) for your friends?”; 1 = not likely at all, 7 = very likely). Finally, participants completed a short post-questionnaire including an open-ended answer about why they would use the app, a manipulation check (“The task described in this survey had many different possible solutions”; 1 = totally disagree, 7 = totally agree), attention checks, and provided their demographic data. According to the manipulation check, the POR manipulation worked (Mwide POR = 5.09, SD = 1.52, Mnarrow POR = 2.99, SD = 1.82; F(1, 603) = 237.36, p < .001, ηp2 = 0.28; web appendix J for further details).
Results
A two-way ANOVA revealed a significant interaction between algorithm type and POR (F(1, 601) = 31.67, p < .001, ηp2 = 0.05). The high-adaptivity algorithm increased intention to use the cooking app for the task with wide POR (Mhigh = 5.14, SD = 1.77, Mlow= 4.24, SD = 1.79; F(1, 307) = 19.93, p < .001, ηp2 = 0.06), and reduced usage intention for the task with narrow POR (Mhigh = 4.35, SD = 1.99, Mlow = 5.14, SD = 1.84; F(1, 294) = 12.49, p < .001, ηp2 = 0.04; calculated with one-way ANOVAs; figure 4). The main effects of algorithm type and POR on usage intention were not significant (algorithm type: F(1, 601) = 0.15, p = .697; POR: F(1, 601) = 0.14, p = .71). Further, one-way ANOVAs revealed a positive main effect of the high-adaptivity algorithm on perceived algorithm creativity (Mhigh = 4.92, SD = 1.57, Mlow = 3.37, SD = 1.74; F(1, 603) = 132.23, p < .001, ηp2 = 0.18) and a negative main effect on perceived algorithm predictability (Mhigh = 4.54, SD = 1.36, Mlow = 5.46, SD = 1.42; F(1, 603) = 66.76, p < .001, ηp2 = 0.10).
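Because the interaction has a single degree of freedom, its F value can be approximately reconstructed from the reported cell means, SDs, and ns alone (difference of simple effects tested against the pooled within-cell variance). This sketch is an illustrative check, not the authors' analysis, and small deviations arise from the rounding of the reported summaries:

```python
# (mean, sd, n) per cell of the 2 (algorithm) x 2 (POR) design, as reported
cells = {
    ("high", "wide"):   (5.14, 1.77, 159),
    ("low",  "wide"):   (4.24, 1.79, 150),
    ("high", "narrow"): (4.35, 1.99, 141),
    ("low",  "narrow"): (5.14, 1.84, 155),
}

# Pooled error variance (MSE) across the four cells
ss_within = sum((n - 1) * sd ** 2 for _, sd, n in cells.values())
df_error = sum(n for _, _, n in cells.values()) - len(cells)
mse = ss_within / df_error

# Interaction contrast: difference between the two simple effects of algorithm type
contrast = ((cells[("high", "wide")][0] - cells[("low", "wide")][0])
            - (cells[("high", "narrow")][0] - cells[("low", "narrow")][0]))
var_contrast = mse * sum(1 / n for _, _, n in cells.values())
F_interaction = contrast ** 2 / var_contrast
```

The reconstruction yields F(1, 601) ≈ 31.6, matching the reported 31.67 up to rounding of the cell summaries.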

POSITIVE AND NEGATIVE INFLUENCE OF HIGH-ADAPTIVITY ALGORITHMS ON USAGE INTENTION
Note.— Error bars represent standard errors.
To test the mediations via perceived creativity and predictability moderated by POR, we ran a parallel moderated mediation analysis with algorithm type (1 = high, 0 = low) as the independent variable, POR (1 = wide, 0 = narrow) as the moderator of the b-paths, perceived creativity and predictability as the mediators, and usage intention as the dependent variable. The analysis (model 14; Hayes 2013) showed that perceived creativity significantly mediated the effect of algorithm type on usage intention for narrow POR (indirect effect: bnarrow POR = 0.51, SE = 0.10, CI95% = [0.32; 0.71]) and wide POR (indirect effect: bwide POR = 0.96, SE = 0.12, CI95% = [0.73; 1.21]). Yet the indirect effect via perceived creativity was stronger for wide POR, as indicated by a significant moderated mediation (index = 0.45, SE = 0.13, CI95% = [0.21; 0.72]). Perceived predictability did not mediate the effect of algorithm type on usage intention for wide POR (indirect effect: bwide POR = −0.09, SE = 0.06, CI95% = [−0.20; 0.03]), yet significantly mediated the effect for narrow POR (indirect effect: bnarrow POR = −0.40, SE = 0.09, CI95% = [−0.60; −0.23]). This moderation of the mediation was significant (index of moderated mediation: b = 0.31, SE = 0.11, CI95% = [0.11; 0.54]; figure 5 and web appendix J for regression coefficients and additional analyses).

PRODUCT OUTCOME RANGE MODERATES MEDIATIONS VIA PERCEIVED ALGORITHM CREATIVITY AND PREDICTABILITY
Note.— *p < .05, **p < .01, ***p < .001; +significant at 5% level based on 95% bootstrapped confidence intervals with 10,000 repetitions; direct effect c′ in parentheses.
Discussion
These results corroborate study 3’s finding that a high-adaptivity algorithm is perceived as more creative yet less predictable, influencing product preference depending on POR: Perceived creativity increases usage intention for wide POR more than for narrow POR. In contrast, perceived predictability increases usage intention for narrow POR and decreases it for wide POR. This result shows that adaptivity is not always preferred across algorithm-controlled products, as high-adaptivity algorithms are perceived as less predictable.
This raises the question of how predictability could be increased for high-adaptivity algorithms. The issue is not only theoretically important but also managerially relevant, because many products today use high-adaptivity algorithms. We propose that consumers will view a high-adaptivity algorithm as more predictable when the actual workings of the algorithm are made explainable (Miller 2019). We therefore vary, and thus test, the explainability of algorithms in study 6.
PART II—STUDY 6: EXPLAINABILITY MODERATES THE EFFECT OF ALGORITHM TYPES
The objectives of study 6 are threefold: First, this study tests explainability as a theoretically and managerially relevant means to increase predictability perceptions for high-adaptivity algorithms (i.e., moderating the mediating effect of predictability). Second, it provides additional evidence for the importance of the predictability mechanism. Third, it reveals a boundary condition of the positive effect of high-adaptivity algorithms for wide POR products (as discussed in the theoretical development section) for the timely product class of autonomous driving systems. Autonomous vehicles are increasingly navigating our streets, yet their adoption is a heated topic of discussion (Awad et al. 2018). Because autonomous vehicles are seen as highly risky and consumers may doubt that they function properly, a lack of predictability is particularly detrimental for these products, even though they have a wide POR (Shariff et al. 2017).
Explainability can be a practically implementable means to compensate for the lower predictability of high-adaptivity algorithms. Indeed, providing an explanation of why an algorithm reaches a decision should increase predictability, because it reduces users’ uncertainty about why a system acts the way it does (Miller 2019; Wang et al. 2016). Thus, whereas the lower predictability of high-adaptivity algorithms may dampen consumer preferences for autonomous driving systems, increasing the algorithm’s predictability should enhance the adoption of these systems. However, following our predictability hypothesis, increasing predictability should help only for high-adaptivity algorithms, because low-adaptivity algorithms are already seen as highly predictable.
Method
Four hundred seventy-seven participants from the UK recruited via Prolific (prescreened for car ownership; Mage = 44.88 years, SD = 14.04; 50.5% female) successfully completed the survey. They were assigned to one of four conditions in a 2 (low-adaptivity vs. high-adaptivity algorithm) × 2 (high vs. low explainability) between-subjects experiment.
First, participants were introduced to the topic and product of the study: a driving assistance system (DAS) that used a LIDAR-based algorithmic system (LIDAR is a laser light-based detection technology that works similarly to radar). They received information about the algorithm type of the DAS (similar descriptions as in studies 3 and 5) and were asked to describe the algorithm briefly. On the next page, all participants saw a picture in which the DAS made a driving decision and were told that the algorithm of the DAS could predict the best direction to take with 95% accuracy (fixing performance across conditions). In the low explainability condition, participants were informed that the depicted prediction of the algorithm could not be explained. In the high explainability condition, participants received additional explanations about how the algorithm arrived at this prediction (figure 6). Such a causal explanation is known to increase the explainability of algorithms (Liao, Gruen, and Miller 2020).

Participants were then reminded of the algorithm they had read about and asked to describe the information they had received regarding the algorithm and the DAS. On the next page, participants answered the four items of the perceived algorithm creativity scale (α = 0.98), one item on perceived algorithm predictability (measures similar to those in previous studies), and one item capturing their purchase intention (“To what extent would you be interested in buying a DAS with a [algorithm name] for a car for USD 5,000 if you had the budget?”; 1 = 0%, 11 = 100%). On the last pages, participants completed manipulation and attention checks and provided their demographic data.
Results
A two-way ANOVA revealed a significant interaction between algorithm type and explainability (F(1, 473) = 8.08, p = .005, ηp2 = 0.02). The high-adaptivity algorithm increased purchase intention when the algorithm was highly explainable (Mhigh adaptivity = 5.57, SD = 3.57, Mlow adaptivity = 4.42, SD = 3.29; F(1, 227) = 6.47, p = .012, ηp2 = 0.03). Yet, there was no significant difference in purchase intention between the low-adaptivity and high-adaptivity algorithms when explainability was low (Mhigh adaptivity = 3.96, SD = 3.02, Mlow adaptivity = 4.54, SD = 3.38; F(1, 246) = 2.00, p = .159; calculated with one-way ANOVAs). The two-way ANOVA also revealed a significant main effect of explainability, but not of algorithm type, on purchase intention (algorithm type: F(1, 473) = 0.90, p = .344; explainability: F(1, 473) = 5.99, p = .015, ηp2 = 0.01; figure 7).

FIGURE 7
EXPLAINABILITY INCREASES PURCHASE INTENTION FOR THE DAS WITH THE HIGH-ADAPTIVITY ALGORITHM
Note.— Error bars represent standard errors.
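The interaction pattern above can be reproduced arithmetically from the four reported cell means. The following minimal Python sketch (variable and function names are our own, for illustration only) computes the simple effect of algorithm type at each explainability level and their difference, which is the 2 × 2 interaction contrast:

```python
# Interaction contrast from the four reported cell means (purchase intention).
# The 2x2 interaction is the difference between the simple effects of
# algorithm type across the two explainability conditions.

cell_means = {
    ("high_explainability", "high_adaptivity"): 5.57,
    ("high_explainability", "low_adaptivity"): 4.42,
    ("low_explainability", "high_adaptivity"): 3.96,
    ("low_explainability", "low_adaptivity"): 4.54,
}

def simple_effect(explainability: str) -> float:
    """Effect of algorithm type (high minus low adaptivity) at one level."""
    return (cell_means[(explainability, "high_adaptivity")]
            - cell_means[(explainability, "low_adaptivity")])

effect_high = simple_effect("high_explainability")  # positive: adaptivity helps
effect_low = simple_effect("low_explainability")    # negative: adaptivity hurts
interaction = effect_high - effect_low              # the 2x2 interaction contrast
```

The sign reversal of the simple effect across explainability conditions is what produces the significant crossover interaction reported above.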
Two-way ANOVAs (algorithm type × explainability) revealed significant main effects of algorithm type on both perceived algorithm creativity and predictability (creativity: Mhigh adaptivity = 4.55, SD = 1.73, Mlow adaptivity = 3.64, SD = 1.92; F(1, 473) = 29.78, p < .001, ηp2 = 0.06; predictability: Mhigh adaptivity = 4.55, SD = 1.41, Mlow adaptivity = 5.23, SD = 1.34; F(1, 473) = 28.90, p < .001, ηp2 = 0.06). As expected, an interaction of algorithm type and explainability was significant only for perceived predictability (F(1, 473) = 8.33, p = .004, ηp2 = 0.02), but not for perceived creativity (F(1, 473) = 1.36, p = .244).
To test the underlying role of perceived predictability, we ran a moderated mediation analysis with algorithm type (1 = high adaptivity, 0 = low adaptivity) as the independent variable, explainability (1 = high explainability, 0 = low explainability) as the moderator of the a-path, perceived predictability as the mediator, and purchase intention as the dependent variable. The analysis (model 7; Hayes 2013) showed that perceived predictability mediated the effect of algorithm type on purchase intention when explainability was low (indirect effect: blow = −0.45, SE = 0.15, CI95% = [−0.77; −0.19]), but not when explainability was high (indirect effect: bhigh = −0.14, SE = 0.08, CI95% = [−0.31; 0.005]). The index of moderated mediation showed a significant difference between conditional indirect effects (index: b = 0.32, SE = 0.15; CI95% = [0.08; 0.65]).
An additional analysis with creativity as a parallel mediator revealed that only the indirect effect via perceived predictability was moderated by explainability (index of moderated mediation: b = 0.35; SE = 0.16; CI95% = [0.09; 0.70]), yet the indirect effect via creativity was not (index of moderated mediation: b = −0.27; SE = 0.23; CI95% = [−0.71; 0.19]; see web appendix K for regression coefficients).
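The logic of such a first-stage moderated mediation (PROCESS model 7) can be sketched as follows. In this illustrative Python snippet the coefficients are hypothetical placeholders, not the fitted values from our data; it shows only why the indirect effect becomes conditional on the moderator and why the index of moderated mediation equals the product of the interaction coefficient and the b-path:

```python
# First-stage moderated mediation (PROCESS model 7), sketched with
# hypothetical coefficients for illustration only.
# Mediator model:  M = a0 + a1*X + a2*W + a3*(X*W)
# Outcome model:   Y = c0 + c_prime*X + b*M
# X = algorithm type, W = explainability, M = perceived predictability.

a1 = -0.45  # hypothetical: effect of X on M when W = 0 (low explainability)
a3 = 0.32   # hypothetical: change in that effect when W = 1 (high explainability)
b = 1.00    # hypothetical: effect of M on purchase intention

def conditional_indirect_effect(w: int) -> float:
    """Indirect effect of X on Y through M at moderator level w."""
    return (a1 + a3 * w) * b

# The index of moderated mediation is the difference between conditional
# indirect effects, which reduces algebraically to a3 * b.
index_of_moderated_mediation = (conditional_indirect_effect(1)
                                - conditional_indirect_effect(0))
```

Because the moderator enters only the a-path, a significant index (a3 × b reliably different from zero) is the formal test that the mediation differs across explainability conditions.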
Discussion
These results show that explainability moderates the effect of predictability: it increases product preferences for high-adaptivity algorithms but makes no difference for low-adaptivity algorithms, which are already seen as predictable. Importantly, we tested this effect in a conservative context by fixing the performance level of the DAS in all conditions at a high level of 95% accuracy (referring to the quality of driving decisions). Accuracy (in the sense of being correct) differs from predictability: accuracy is a performance indicator of correct versus wrong decisions (e.g., making a driving decision that does not cause an accident or lead to the wrong road), whereas predictability refers to whether one of the possible outcomes (e.g., turning sharply left, normally left, or right) could be foreseen. Thus, even when performance is high, explainability still makes a difference for DAS adoption.
This study has several implications. First, it shows that for some wide POR products, both predictability and creativity play an important role. The positive effect of high-adaptivity algorithms therefore holds only if the product is sufficiently predictable and consumers trust that it will fulfill its basic functionalities. For products that need to exhibit both creativity and predictability at the same time, such as autonomous vehicles, explainability may be particularly relevant because it compensates for the lack of predictability without restricting perceptions of creativity. Second, it shows that better explaining how high-adaptivity algorithms function increases predictability and helps consumers understand how the algorithms will generate the desired outcome.
GENERAL DISCUSSION
Our research extends the literature on algorithm perception by comparing different algorithm types with each other rather than contrasting algorithms with humans (Garvey et al. 2023; Longoni and Cian 2022). Six empirical studies show that product preferences for algorithm-controlled products are influenced by the degree of algorithm adaptivity. We find that consumers generally prefer high-adaptivity algorithms because such algorithms are seen as more creative. However, high adaptivity can backfire when the lack of predictability is perceived as negative, as is the case when a product is expected to provide only a narrow range of solutions for a task.
Theoretical and Practical Implications
This research is the first to investigate product perceptions and preferences based on the algorithm operating within the product. It demonstrates that algorithms are an important attribute of contemporary product design, adding algorithm-type perception as a key concept to distinguish among algorithm-controlled products. This research extends prior algorithm perception literature, which has focused on algorithms mainly as decision-makers, statistical prediction tools, or recommendation tools (Castelo et al. 2019; Longoni and Cian 2022). Disclosing different algorithm characteristics to consumers allows us to infer capabilities that consumers value in different algorithm types and algorithm-controlled products: predictability and creativity. The consistently positive effect of an adaptive algorithm’s perceived creativity on product preference suggests that creativity is valued not only as a human trait but can also be a distinguishing feature of algorithm-controlled products (Haslam 2006; Mehta and Dahl 2019).
Thus, our findings contribute to a more complete theoretical understanding of how consumers perceive algorithm-controlled products and the inner workings of technology. This research adds to the literature on mind perception, which is closely related to anthropomorphism, by exploring how consumers perceive and interact with technology based on its inner workings. Previous research on anthropomorphism has demonstrated that individuals tend to attribute human-like mental states and abilities to machines, which can lead to the perception of similarities between technology and humans (Epley 2018). However, whereas this earlier research often examined anthropomorphism based on the outer appearance or behavior of products (Aggarwal and McGill 2007; Kim and McGill 2011; Mende et al. 2019), we show that knowledge about a product’s internal processes (i.e., the technical workings of algorithms) can also shape consumer perceptions of different algorithms and associated products.
In addition, our findings provide theoretical insights into consumer responses to the novel category of algorithm-controlled products. As such, we extend the literature on product perception that has focused on consumer reactions to product design (Aggarwal and McGill 2007), digital or physical status (Atasoy and Morewedge 2018), or product functionalities (Rijsdijk and Hultink 2003). In contrast to traditional and static products, algorithm-controlled products have the potential to offer a variety of outcomes due to the internal processes that guide their functioning. As a result, POR is a significant differentiating factor between algorithm-controlled products, describing the range of desired outcomes a product can provide from a consumer perspective.
As smart algorithms and AI increasingly control consumer products (Raff et al. 2020; Shanks and Hintermann 2019), our research has important practical implications. One core insight is that consumers are aware of the importance of algorithms in products and how they differ from each other. Our research suggests that firms should carefully consider how to inform consumers about the algorithms in their products. We also offer guiding principles for how firms should communicate about products with consumers. First, describing algorithms as having high versus low adaptivity is a useful and meaningful differentiation of algorithms for consumers in the marketplace. Second, adaptivity is valued in many contemporary technology products, and highlighting adaptive algorithm components is likely to drive product preferences in many product contexts. Third, despite these positive effects, consumers associate high adaptivity with a lack of predictability. Therefore, when considering which type of algorithm to implement, firms should determine whether their customers will be more concerned about algorithm creativity or algorithm predictability, given the POR of the advertised product. Finally, our findings underscore the importance of explaining how algorithms work in fostering trust for algorithm-controlled products—particularly those using high-adaptivity algorithms.
Limitations and Future Research
This research has concentrated on one key distinguishing factor of algorithms: their adaptivity. However, algorithms may also vary in terms of the data that they are fed with, their performance quality, or whether users are able to monitor the algorithm’s learning and improvement (Chung et al. 2016; Kim and Duhachek 2020; Reich, Kaju, and Maglio 2023). Future research should further explore these factors and the moderating role of product characteristics, such as the distinction between search and experience goods. For example, consumers may more readily adopt algorithms for experience goods if they are informed that the implemented algorithm was trained on other users’ preferences and experiences.
Moreover, our empirical results and our theorizing show that there may be boundaries to the positive effect of high adaptivity for wide POR. Specifically, study 6 reveals that a lack of explainability dampens the positive adaptivity effect. Thus, although consumers may assume a basic level of product functionality for most market-released products, this assumption may be weakened for products perceived as highly risky or very innovative. While we have used autonomous driving systems as an example, other types of technology such as surgical robots or health recommender systems could also be considered. If trust in basic functionalities is not given, increasing predictability may be a core marketing challenge for products using high-adaptivity algorithms. Study 6 suggests explainability as a remedy for a lack of predictability, yet future research should explore for which products predictability is highly relevant, and which further measures can increase predictability perceptions.
This research focused on creativity and predictability to examine the effect of algorithm adaptivity on product preferences. Yet, algorithm adaptivity may also lead to other effects that can influence product preferences. For example, privacy concerns are particularly problematic for high-adaptivity algorithms that require large amounts of training data (Jordan and Mitchell 2015). Another direction would be to examine whether consumers show different emotional or affective reactions to differing algorithm types, in addition to the creativity and predictability perceptions in our research. This case might become particularly relevant if algorithms are used in product design, as prior research has already shown differences in reactions toward hand-made versus robot-made products (Granulo, Fuchs, and Puntoni 2021). This interaction between information regarding algorithm type and product origin could reveal interesting insights for consumers’ adoption of future products.
While we compared low-adaptivity and high-adaptivity algorithms, many algorithm-controlled products combine characteristics of both algorithm types. For instance, Amazon’s Alexa uses machine learning for natural language processing of voice input, but many of Alexa’s added skills are fully pre-programmed with “if-this-then-that” commands (Jeffress 2018). Similarly, iRobot’s newest robotic vacuum cleaner, the Roomba s9+, combines adaptive and real-time optical navigation systems to calculate the vacuum cleaner’s location in the room with rule-based commands to avoid obstacles while cleaning (Bennett 2021). Future research could study how hybrid algorithmic systems with pre-programmed and adaptive components affect consumer reactions. Such research could test whether there is an optimal level (e.g., mid-level) of adaptivity, similar to the uncanny valley theory, which predicts an optimal point for the degree of human-like behavior and appearance of robots (Kim, Schmitt, and Thalmann 2019).
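To make the notion of such hybrid systems concrete, the following Python sketch combines a fixed if-this-then-that rule with a trivially adaptive parameter. This is a hypothetical illustration; the class, rules, and update scheme are our own and do not describe any vendor’s actual implementation:

```python
# Hypothetical hybrid controller: a pre-programmed rule handles a
# safety-critical event, while a simple adaptive component updates a
# learned parameter from feedback (a stand-in for machine-learned behavior).

class HybridVacuumController:
    def __init__(self) -> None:
        self.preferred_speed = 1.0  # adaptive parameter, updated from feedback

    def decide(self, obstacle_detected: bool) -> str:
        # Rule-based component: a fixed if-this-then-that command.
        if obstacle_detected:
            return "turn_away"
        # Adaptive component: behavior depends on the learned parameter.
        return "clean_fast" if self.preferred_speed > 1.0 else "clean_slow"

    def learn(self, user_feedback: float) -> None:
        # Trivial online update: nudge the parameter toward the feedback signal.
        self.preferred_speed += 0.1 * (user_feedback - self.preferred_speed)
```

In this sketch, the obstacle rule always fires deterministically (high predictability), whereas the cleaning behavior drifts with experience (adaptivity), mirroring the mixed perceptions such hybrid products may evoke.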
Another direction would be to investigate how the two algorithm types fare against consumer perceptions of humans. In an exploratory study, we have gathered initial insights in this direction. We measured consumer reactions to a shopping app with a human assistant relative to an artificial assistant with a high-adaptivity versus low-adaptivity algorithm (three-cell between subjects; N = 302 Prolific participants). The high-adaptivity algorithm tended to improve intent to purchase the app relative to the human assistant, with marginal differences in post hoc tests (Mhigh = 4.01, SD = 1.80, Mlow = 2.84, SD = 1.70, Mhuman = 3.41, SD = 1.87; phuman versus low = .089; phuman versus high = .053; phigh versus low < .001). Furthermore, consumers perceived the assistant with the high-adaptivity algorithm as equally competent as the human assistant. Both were rated more competent than the assistant with a low-adaptivity algorithm (Mhigh = 4.85, SD = 1.41, Mlow = 3.53, SD = 1.56, Mhuman = 4.72, SD = 1.48; phuman versus low < .001; phuman versus high = .828; phigh versus low < .001). Overall, human assistance and high-adaptivity algorithms were perceived as equally valuable in this study, and both were seen more favorably than the low-adaptivity algorithm (web appendix L). Adaptive capabilities may thus help blur the boundaries between algorithms and humans in consumers’ minds, a research direction that deserves further investigation.
Conclusion
Algorithms are a key part of products in the digital age. By unveiling the mind of the machine, this research shows that consumers have specific preferences for different types of algorithms depending on the nature of the algorithm-controlled product. We show how merging insights from computer science with consumer research can help to refine our thinking about algorithms and thus contribute to a better understanding of consumer responses to new technologies. In addition to discerning what separates machines from humans, scholars and practitioners should consider what distinguishes machines from each other, and how these differences might matter to consumers and society at large.
DATA COLLECTION STATEMENT
The first author supervised data collection for studies 1 (in 2022), 2 (in 2023), 4 (in 2022), 5 (in 2023), and the general discussion study (in 2022) using the online panels of Prolific and Amazon MTurk as described in the Method sections of the article. The first author also supervised the data collection for the Global Fortune 500 analysis conducted by research assistants at the University of Lucerne from August to December 2020. The third author supervised data collection for study 3 conducted by research assistants at the behavioral lab of Columbia University in 2019. The second author supervised data collection for study 6 via Prolific in 2023. The first author analyzed the data for all studies, partly in exchange and discussion with the other authors. The data are stored in a secure folder on the Open Science Framework under the management of the first author and accessible to all authors. All study materials are disclosed in the main article and web appendix, and the original survey material is accessible via the Open Science Framework (https://osf.io/v82f3/?view_only=0d3334a4b0bc4c8386d677596e0be8a4).
Author notes
Melanie Clegg ([email protected]) is an assistant professor at the Institute for Digital Marketing & Behavioral Insights, Vienna University of Economics and Business, 1020 Vienna, Austria.
Reto Hofstetter ([email protected]) is full professor of digital marketing at the Institute of Marketing and Analytics, University of Lucerne, 6002 Lucerne, Switzerland.
Emanuel de Bellis ([email protected]) is associate professor of empirical research methods at the Institute of Behavioral Science and Technology, University of St. Gallen, 9000 St. Gallen, Switzerland.
Bernd H. Schmitt ([email protected]) is Robert D. Calkins professor of international business, Columbia Business School, New York City, NY 10027, USA.
This article is based on the first author’s dissertation and was supported by a scholarship from the Swiss National Science Foundation (project P1LUP1_191405). The authors thank the editor Richard Lutz, the associate editor Roland Rust, and the anonymous reviewers for their comments and guidance during the review process. The authors also thank Marc Bravin and Marc Pouly for technical support regarding the app employed in study 2. Supplementary materials are included in the web appendix accompanying the online version of this article.