Abstract

Chatbots have several features that may stimulate self-disclosure, such as accessibility, anonymity, convenience and their perceived non-judgmental nature. The aim of this study is to investigate if people disclose (more) intimate information to a chatbot, compared to a human, and to what extent this enhances their emotional well-being through feelings of relief. An experiment with a 2 (human vs. chatbot) by 2 (low empathetic vs. high empathetic) design was conducted (N = 286). Results showed that there was no difference in the self-reported intimacy of self-disclosure between the human and chatbot conditions. Furthermore, people perceived less fear of judgment in the chatbot condition, but more trust in the human interactant compared to the chatbot interactant. Perceived anonymity was the only variable to directly impact self-disclosure intimacy. The finding that humans disclose equally intimate information to chatbots and humans is in line with the CASA paradigm, which states that people can react in a social manner to both computers and humans.

Research highlights:

• There is no difference in intimate self-disclosure between a human and a chatbot interaction partner

• People experience less fear of judgment when talking to a chatbot

• People have more trust in a human interaction partner

• When people feel more anonymous, they self-disclose more intimately

The use of chatbots—conversational programs designed to show humanlike behavior by mimicking text- or voice-based conversations (e.g. Abdul-Kader and Woods, 2015)—in different domains has increased exponentially over the past years. A recent development is the rise of social chatbots used for therapeutic purposes, also called mental health chatbots. Examples of these are Woebot, Tess, Wysa and Replika. The primary goal of such mental health chatbots is to be a virtual companion to their users and to monitor the users' mood by guiding them in disclosing their emotions and needs (D'alfonso et al., 2017). Woebot, for example, was developed at Stanford University to help people suffering from depression or anxiety by monitoring the user's mood and making use of cognitive behavioral therapy. The number of chatbots created to improve people's emotional well-being is growing, which illustrates a societal need for such chatbots. Therefore, it is important to better understand the social and emotional processes at play while interacting with these social chatbots.

One of the crucial factors in improving one's well-being is people's willingness to disclose personal information (e.g., Pennebaker, 1995; Sloan, 2010), so-called self-disclosure (Joinson, 2001). By disclosing personal information, people are able to receive adequate help from family members, friends or professionals (e.g., Colognori et al., 2012). However, disclosing personal information can be perceived as risky and stigmatizing, especially when it concerns intimate or very personal information, which can hinder individuals from stepping forward to seek help from professionals, family or friends and from disclosing their inner feelings (e.g., Vogel and Wester, 2003; Eisenberg et al., 2009). Chatbots have several features that may stimulate self-disclosure and help-seeking by people in need, such as 24/7 accessibility, anonymity, convenience and their perceived non-judgmental nature (Skjuve and Brandtzæg, 2018). One study shows that self-disclosure to a chatbot can be equally beneficial as self-disclosing to a human (Ho et al., 2018). Another study shows that humans reciprocate the self-disclosure of a dialog system (Ravichander and Black, 2018).

Self-disclosure can also benefit individuals by decreasing their stress symptoms and increasing positive affect (e.g., Kahn et al., 2001). However, in order to further improve well-being, it is important for the interaction partner to react in an empathetic manner to the person's disclosure of information (Shenk and Fruzzetti, 2011; Reis et al., 2017). Disclosers need to believe that their conversation partner understands them before the positive impact of feeling understood, and hence the relief, can take place (Reis et al., 2017). Research consistently shows that interpersonal processes such as empathy and warmth are essential factors in improving well-being (Lambert and Barley, 2001). However, a chatbot is a computer program that cannot demonstrate true empathy as it does not have the capacity to understand human emotions and inner feelings (Bickmore and Picard, 2005). Therefore, the chatbot's responses can be perceived as inauthentic and hence not truly empathetic. At the same time, research shows that as long as a virtual agent appears to be empathetic and is accurate in the feedback it gives, it can achieve effects similar to those of a human who displays true empathy (Klein et al., 2002).

In sum, research shows that there is potential in the use of social chatbots to improve the user's well-being. Moreover, studies highlight the importance of self-disclosure in improving well-being. Although there are studies on self-disclosure in human-chatbot communication and its beneficial effects (e.g. Ho et al., 2018), what is currently lacking is a comparison of the (beneficial) effects of self-disclosure when interacting with a human versus a chatbot, and of the underlying processes that may enhance relief (and whether these differ between human–human and human–chatbot communication). Therefore, the aim of this study is to investigate whether people disclose (more) intimate information to a chatbot (compared to a human), to what extent this enhances their emotional well-being by means of relief, and which underlying processes explain this effect.

1. TOWARD A SOCIAL CHATBOT TO IMPROVE WELL-BEING

Social chatbots have become popular in the last few years. The primary goal of such chatbots is to be a virtual companion to their users and to monitor the user's mood, by guiding them in disclosing their emotions and needs (D'alfonso et al., 2017). Woebot, for example, aims to help people who are suffering from depression or anxiety as it helps to monitor the user's mood (https://woebothealth.com). Tess is another example of a popular social chatbot. According to its developers, Tess coaches its users through difficult times with the aim of building resilience via social chats similar to interacting with a friend or a coach (https://x2ai.com). Another popular chatbot is Wysa, which anonymously helps users with their anxiety and feelings of isolation. The chatbot Wysa is free, but if users also want to talk to a real counselor, they have to pay a monthly fee. According to the website, Wysa has helped over 2.5 million people, and in 2020 the bot won the Orcha Best App in Health & Care award (https://wysa.com). User reviews provide anecdotal evidence of the positive impact these apps have on users' well-being.

Based on the popularity of these social chatbots and on the anecdotal evidence from their users, it seems that many users find these chatbots helpful. Chatbots can provide very efficient, familiar and easy one-on-one interactions with their users (Vaidyam et al., 2021). Furthermore, these interactions are low-threshold and are often perceived as enjoyable (Følstad and Brandtzaeg, 2020). Interacting with social chatbots is becoming more and more common and can provide a solution for issues such as understaffing and waiting lists, as chatbots are cost-efficient, adaptable and scalable (i.e. they are able to provide personalized advice to many people at the same time). These chatbots also have attractive features, such as continuous accessibility, convenience and their perceived non-judgmental nature.

However, the functionality of the social chatbots that are currently available is generally limited. Chatbots such as Wysa are more focused on tasking people with assignments than on engaging in conversation (Fitzpatrick et al., 2017). Through conversation, it is possible to build a trusting bond (Lambert and Barley, 2001), which in turn could enhance willingness to disclose (Corritore et al., 2003), and to build an emotional connection between a user and a chatbot (Savin-Baden et al., 2013). Furthermore, scientific evidence on the use and impact of those chatbots is scarce. Previous studies, for instance, had small sample sizes and used the Wizard of Oz method (i.e. a method in which participants are told that they will interact with a chatbot, when in actuality they interact with a human interlocutor) to investigate differences in the perception of humans vs. chatbots (e.g. Bell et al., 2019; Ho et al., 2018). While this method is effective, it is not suitable for gauging the current conversational capabilities of chatbots and contrasting them with humans. Therefore, research on the possibilities and impossibilities of using chatbots to improve people's well-being is needed.

2. CAN CHATBOTS STIMULATE INTIMATE SELF-DISCLOSURE?

To study the potential impact of social chatbots on emotional well-being, it is first necessary to determine if humans are willing to disclose their inner feelings to a chatbot. Self-disclosure is a necessary step in improving well-being. Specifically, disclosing one’s inner (hidden) feelings, secrets, memories and immediate experiences can enhance relief and, in turn, improve one’s mood (Farber, 2006). The willingness to disclose depends on several factors such as the anticipated utility (i.e. the perceived value of the outcome to the individual for disclosing), but also the anticipated risks (i.e. the perceived risks of self-disclosing; Vogel and Wester, 2003). The person disclosing the information might be fearful that information is shared with others or they might feel ashamed and be afraid that the recipient is being judgmental or critical (Farber, 2006).

According to Derlega and Grzelak's (1979) functional theory of self-disclosure, self-disclosure is a strategic behavior that individuals use to achieve their personal goals. The authors identified five goals that people may pursue: self-expression (venting negative emotions), self-clarification (clarifying one's own identity and opinions), social validation (gaining social support and acceptance), relationship development (development and/or maintenance of personal relationships) and social control (using information to gain control). Following this functional theory, Omarzu (2000) designed a disclosure decision model to explain which factors affect disclosure decision-making (see Fig. 1). This model proposes that people pursue strategic goals when self-disclosing and disclose different types of information depending on various media functions and situational cues. For example, a relational development goal is more accessible in a romantic setting (situational cue) than in an office setting (Bazarova and Choi, 2014). Furthermore, the disclosure decision model posits that subjective risk influences self-disclosure intimacy in particular. Subjective risk refers to the potential risks anticipated by the discloser, such as social rejection (Omarzu, 2000). According to this model, as subjective risk increases, self-disclosure intimacy decreases.

FIGURE 1. The disclosure decision model.

Although, according to the functional theory of self-disclosure, situational cues are believed to activate individual disclosure goals, the disclosure decision model does not specify the mechanisms that underlie this activation process (Omarzu, 2000). Based on earlier research on self-disclosure in (online) interpersonal communication (e.g. Antheunis et al., 2012; Joinson, 2001) and human–chatbot interactions (e.g. Ho et al., 2018; Croes and Antheunis, 2021), we have identified three possible underlying mechanisms that may play a role in the activation of self-disclosure to a chatbot, namely perceived anonymity, fear of judgment of the interaction partner and trust in the interaction partner.

3. PERCEIVED ANONYMITY

A first underlying mechanism in the elicitation of self-disclosure is anonymity. Feelings of anonymity stimulate self-disclosure (see the meta-analysis of Clark-Gordon et al., 2019). As people feel more anonymous, their public self-awareness decreases, which reduces identifiability and accountability concerns (Scott, 1998); this in turn produces feelings of disinhibition and can result in more intimate self-disclosure (e.g. Antheunis et al., 2007; Clark-Gordon et al., 2019; Joinson, 2001). This process is often associated with the 'stranger on the train' phenomenon, in which people disclose their inner feelings to unknown travel companions on a train (Antheunis et al., 2007).

Due to the potential risks, such as stigmatization, associated with disclosing very personal or intimate information to others (Link et al., 1991), people can be hesitant to disclose personal information (e.g. Lucas et al., 2017). The fear of being stigmatized can act as a barrier to disclosing one's inner feelings, thoughts and symptoms (Lucas et al., 2017). Feeling more anonymous can reduce that barrier.

It is likely that individuals feel more anonymous when interacting with a chatbot compared to a human. Hence, they might feel more disinhibited and dare to disclose more intimate information than they would to a human. For example, research on reporting sensitive information shows that assessment by virtual agents, as they afford anonymity, increases the level of (honest) reporting, for instance on suicide (Greist et al., 1973) and posttraumatic stress disorder (Lucas et al., 2017). Furthermore, since a chatbot is an artificial being, people view it as good at keeping secrets, as it cannot share the information with others (Skjuve and Brandtzæg, 2018). Thus, because of a chatbot's artificial nature and lack of feelings, people are likely to feel more anonymous and hence more likely to open up to a chatbot than to another human. Therefore, we pose the following hypothesis:

H1: (i) Individuals feel more anonymous when interacting with a chatbot, compared to a human interlocutor, which in turn leads to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor.

4. FEAR OF JUDGMENT

Another underlying mechanism in the elicitation of self-disclosure is a lack of fear of judgment, which is also closely related to perceived anonymity. Humans sometimes avoid disclosing intimate information to other humans out of a fear of negative evaluation (e.g. disapproval, social rejection, stigma, embarrassment), even more so when the information might reflect poorly on the self (e.g. Afifi and Guerrero, 2000; Lane and Wegner, 1995). Hence, the fear of negative evaluations hinders humans from disclosing intimate information to other humans.

Chatbots might be perceived as non-judgmental as they do not think or form judgments on their own (Lucas et al., 2014). Therefore, individuals might feel more at ease disclosing personal information to a chatbot than to another human, without being judged or embarrassing their interaction partner (Skjuve and Brandtzæg, 2018). This can be beneficial when disclosing potentially stigmatizing or very intimate information. There is some empirical evidence pointing in that direction. Weisband and Kiesler (1996) found in a meta-analysis that computer-administered assessment methods result in more personal self-disclosure than non-computerized methods (i.e. with a human). More recently, using virtual human interviewers, Lucas et al. (2014) showed that virtual humans (avatars) increase the willingness to disclose in situations in which the fear of a negative evaluation is more prominent. Comparable results were found by Kang and Gratch (2010), but only for socially anxious people. When disclosing intimate information to another person, individuals can be afraid of the other person's moral judgments. This can lead them to abstain from self-disclosing information that violates certain morals (Mou and Xu, 2017).

This does not appear to be the case when talking to a chatbot, where fear of judgment may decrease or disappear altogether due to a chatbot’s inability to think or form opinions. For this reason, we expect the following:

H2: (i) Individuals experience less fear of judgment when interacting with a chatbot, compared to a human interlocutor, which in turn leads to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor.

5. TRUST IN THE INTERACTION PARTNER

The third underlying mechanism in the effect of conversation partner on self-disclosure is trust in the interaction partner (the object of trust), which is formed through elements such as honesty and psychological safety (Tillmann-Healy, 2003). When someone trusts the interaction partner, they will feel more at ease to self-disclose (Burgoon and Hale, 1984; Lee and Choi, 2017). There is currently no consensus in the literature on whether humans trust a human interaction partner more than a chatbot interaction partner. On the one hand, human–chatbot communication is likely to foster a sense of trust because of the artificial nature of the object of trust (i.e. the chatbot) and hence the confidential nature of the interaction. As mentioned above, because of their artificial nature, chatbots are believed to be good at keeping secrets. This suggests that artificial interaction partners, such as chatbots, can be trusted more than a human interaction partner (Skjuve and Brandtzæg, 2018). More specifically, when disclosing intimate information to a chatbot, people can trust that this information will not be passed on to others. Thus, the characteristics of the chatbot—confidentiality and artificiality—signal trustworthiness, which fosters trust in the chatbot as an interaction partner.

On the other hand, there are several reasons to believe that humans will trust a human interaction partner more than a chatbot. A first issue at stake is that of moral agency. Corritore et al. (2003) discuss in their work how to define the relationship between the trustor (e.g. the human user) and technologies as an object of trust (e.g. the chatbot as the interaction partner). According to philosophers, technologies cannot be seen as moral agents, which can be defined as entities that have intentions and free will (Solomon and Flores, 2001). Technologies have neither intentionality nor free will, and hence cannot be trustworthy from this perspective. Contrary to this view, technologies can be seen as social actors. Corritore et al. (2003) stated that, in order to be trusted, technologies do not have to be moral agents; it is enough for them to be social actors (see the work of Reeves and Nass (1996) and Nass et al. (1995, 1996)).

A second issue is that humans have concerns related to the privacy and security of their personal data in a chatbot interaction. In interactions with a chatbot, these data are frequently stored automatically and used to improve the chatbot's communication, which can hinder feelings of trust (Følstad et al., 2018). Users can perceive a risk that the information will not be stored securely and can hence be accessed by ill-intentioned people. This risk perception might be even stronger in a personal interaction with the chatbot than in a more functional interaction (e.g. customer service). Since the arguments regarding trust in a chatbot as an interaction partner (vs. a human) conflict, we cannot formulate a hypothesis. Instead, we pose a research question:

RQ1: (i) Do individuals trust a chatbot more than a human interlocutor, and does this in turn lead to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor?

In this study, we first want to investigate if humans are willing to disclose their inner feelings to a chatbot. Based on the underlying processes (i.e. perceived anonymity, fear of judgment, trust in the interaction partner) that potentially take place when humans interact with a chatbot versus a human, we might expect that humans are willing to self-disclose to a chatbot interaction partner; however, it is not clear whether they will disclose more intimate information to a chatbot compared to a human interaction partner. Therefore, we formulate a research question:

RQ2: (i) Are people willing to disclose intimate personal information to a chatbot and (ii) do they disclose more intimate personal information to a chatbot or to a human?

6. SELF-DISCLOSURE AND EMOTIONAL WELL-BEING

A second step is to investigate whether disclosing intimate information to a chatbot also enhances the discloser's emotional well-being, by means of relief. Self-disclosure can benefit individuals by decreasing their stress symptoms and increasing positive affect (e.g. Kahn et al., 2001). This can be explained by the cognitive processing involved in writing about personal matters (Pennebaker, 1993, 1995), also referred to as therapeutic writing or the expressive writing paradigm. When people disclose emotional experiences (e.g. loss, a shameful secret), negative affect is reduced: by writing the experience down, they turn negative emotions and feelings (the affect) into something cognitive and reevaluate the event and/or the emotion. The transition from affect to cognition can reduce the intensity of the emotion (Lieberman et al., 2007) and hence provide some relief.

In line with the positive intrapersonal effect of self-disclosure explained by the expressive writing paradigm, a catharsis effect has been described as a positive interpersonal effect of self-disclosure. Relief might be experienced after disclosing intimate information, especially when the information elicits strong emotions, such as shame, fear or worry. Freud (1935) referred to this as the catharsis effect of self-disclosure: 'Disclosure of distress directly reduces such negative affect through a catharsis effect' (Derlega and Berg, 1987, p. 233). Because the discloser openly expresses negative emotions, these emotions are depleted more quickly instead of being left to intensify. Ample research has found that self-disclosure can improve a person's emotional state by diminishing negative affect and stress and by increasing feelings of relief (e.g., Omarzu, 2000; Farber et al., 2004; Pennebaker and Chung, 2007; Ho et al., 2018). Therefore, we expect that:

H3: Self-disclosure intimacy will enhance perceived relief.

This positive effect of self-disclosure on relief can be strengthened if the interaction partner responds in an empathetic manner (Shenk and Fruzzetti, 2011; Reis et al., 2017). If the interaction partner shows that they understand the discloser, a sense of belonging and acceptance is created and areas of the brain associated with connectivity and reward are activated (Reis et al., 2017). Reis and colleagues (Reis and Shaver, 1988; Reis et al., 2017) clearly state that feeling understood exceeds merely having the disclosed information recognized. Feeling understood is established when disclosers really get the impression that the interaction partner understands them. A chatbot, however, is a computer program that cannot demonstrate true empathy as it does not have the capacity to understand human emotions and inner feelings (Bickmore and Picard, 2005). The chatbot's responses can therefore be perceived as inauthentic and hence not truly empathetic. At the same time, research shows that as long as a virtual agent appears to be empathetic and is accurate in the feedback it gives, it can achieve effects similar to those of a human who displays true empathy (Klein et al., 2002; Ho et al., 2018). Although one study shows that the greatest reduction in stress and worries occurred in the human condition, empathetic responses of both humans and chatbots contribute to a reduction in stress and worries (Meng and Dai, 2021). Thus, we expect that the effect of self-disclosure intimacy on relief is moderated by whether the interaction partner responds in an empathetic manner. Our final hypothesis reads:

H4: The effect of self-disclosure intimacy on perceived relief is contingent upon the perceived empathy of the interaction partner.

7. CONTROL VARIABLES

Our proposed hypotheses may also be impacted by age, gender and/or alcohol use. Although we do not formulate hypotheses about the potential effects of these variables, they may influence the dependent variables in our study. Specifically, research shows that younger people (18–25 years old) may feel less inhibited about self-disclosing due to lower levels of privacy concerns and higher levels of trust compared to older people, which may be because of their comfort with technology (Lappeman et al., 2023). Additionally, gender has been found to impact self-disclosure, with women often disclosing more (intimately) than men (e.g. Dindia and Allen, 1992). Finally, we included alcohol use as a control variable because alcohol consumption can make people more disinhibited and thereby impact self-disclosure (e.g. Caudill et al., 1987; Lyvers et al., 2020).

8. METHOD

8.1. Sample and design

A total of 286 (60% female, 40% male) visitors of a large three-day music festival, between 16 and 61 years of age (M = 26.23; SD = 7.20), participated in our experiment. The sample is rather skewed in terms of level of education, as the majority was highly educated. Participants were asked for their highest level of education (current or completed): 47.9% were (former) university students, 31.8% (former) applied university students, 10.1% (former) high school students and 8% (former) intermediate vocational education students.

For this study, we adopted a 2 (human versus chatbot) by 2 (non-empathetic versus empathetic) between-subjects experimental design. In our analyses, we recoded the four conditions so that we only directly compared the human vs. chatbot conditions; we did not directly compare the empathetic vs. non-empathetic conditions, which we included in the design to ensure more variation in terms of empathy. Instead, we included the self-report measurement of empathy as a moderator (see H4), as this gives a clearer picture of how empathetic participants felt their interaction partner actually came across. The participants were randomly assigned to one of the four conditions. In all conditions, they had a one-on-one interaction and were asked to confess something to either a human confederate or a chatbot. We used the chat function of the Discord platform in all conditions (see Fig. 2 for details). As the task of the human confederate was intensive, we trained six confederates, who were allocated to 3-hour time slots during the 3 days of data collection (see Appendix A for more detailed confederate instructions and the questions that were asked in all conditions). For the chatbot condition, a modular, open-source chatbot was developed (for details, see AUTHORS and url [ANONYMIZED]).

FIGURE 2. Screenshot of a (simulated) conversation between chatbot (PRIESTESS) and user (Biechthok 1) on the Discord platform that was used for this study.

In both the chatbot and the human condition, the same procedure was followed, using a script with predefined questions and answers (see Appendix A for the questions). For the human condition, an extra interface was created to help with the conversation flow and answer content (see Fig. 3). At the start of the conversation, the chatbot/confederate asks icebreaker questions (e.g., What do you think of [name of the festival] thus far? Which artists have you seen?). The chatbot interprets users' answers to these icebreaker questions using predefined lexicons and the Dutch sentiment analysis tool Pattern (De Smedt and Daelemans, 2012). This means that every message sent to the chatbot is scanned for words that may convey the direct answer to the question, or may convey a positive or negative sentiment. The chatbot then uses this information to pick the best answer from a list of preprogrammed answers. Furthermore, in the empathetic condition, the chatbot tries to respond to the users' self-disclosed personal information in an empathetic manner. This is done using LIWC (Pennebaker et al., 2015), a program that uncovers underlying topics in text. More specifically, LIWC scans answers for words that provide information about the topic the answer focuses on (e.g. family, work, food). This information is then used to pick the most appropriate answer from a list of preprogrammed answers.
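To make this pipeline concrete, the sketch below illustrates the general idea of lexicon- and sentiment-based response selection. It is a simplified illustration rather than the actual chatbot code: the topic lexicon, answer lists and sentiment threshold are hypothetical, and only the Pattern sentiment step corresponds to a tool named above (the LIWC-based topic detection is approximated here with a simple keyword lookup).

```python
# Minimal sketch of lexicon- and sentiment-based response selection (hypothetical lexicons/answers).
from pattern.nl import sentiment  # Dutch sentiment analysis; returns (polarity, subjectivity)

TOPIC_LEXICON = {                  # hypothetical stand-in for LIWC-style topic categories
    "family": {"moeder", "vader", "zus", "broer", "familie"},
    "work": {"werk", "baan", "collega", "studie"},
}

EMPATHETIC_ANSWERS = {             # hypothetical preprogrammed answers per detected topic
    "family": "That sounds like something that really matters to you and your family.",
    "work": "Work and study can weigh on you; thank you for sharing that.",
    "default": "Thank you for sharing that with me.",
}

def detect_topic(message: str) -> str:
    """Return the first topic whose lexicon words occur in the message, else 'default'."""
    words = set(message.lower().split())
    for topic, lexicon in TOPIC_LEXICON.items():
        if words & lexicon:
            return topic
    return "default"

def pick_response(message: str, empathetic: bool) -> str:
    """Pick a preprogrammed answer based on the topic and sentiment of the user message."""
    if not empathetic:
        return "Thank you for sharing your secret. Is there anything else you want to say?"
    polarity, _subjectivity = sentiment(message)   # polarity ranges from -1 (negative) to 1 (positive)
    response = EMPATHETIC_ANSWERS[detect_topic(message)]
    if polarity < -0.1:                            # hypothetical threshold for a clearly negative message
        response += " That sounds difficult."
    return response
```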

FIGURE 3. Screenshot of the chat interface that confederates could use as support for their interactions.

8.2. Procedure

This procedure was reviewed and approved by the university Research Ethics and Data Management Committee (REDC # 2019192). The experiment was conducted at a large, annual 3-day music and performing arts festival with over 50 000 visitors in August 2019, resulting in data from a naturalistic setting. The festival's program consists of a broad variety of music acts, ranging from dance (e.g. Paul Kalkbrenner) to hardcore punk (e.g. Turnstile) and from popular music for the young (e.g. Billie Eilish) to music for the elderly (e.g. Giorgio Moroder). Hence, the visitors of the festival are quite heterogeneous. While the festival's main focus is on live music, it also offers cinema, theatre, cabaret, literature and the possibility to take part in various scientific experiments at what is called 'Lowlands Science'. The teaser for our study was Digital Confessions: via posters, we asked people whether they wanted to confess a secret digitally. Visitors who were interested could participate voluntarily.

Festival visitors who wanted to participate in our study were thoroughly briefed, after which they gave consent. Participants were randomly assigned to one of the four conditions, and they were clearly told beforehand whether they were going to confess to a chatbot or a human, depending on the assigned condition. Next, they were led to a cubicle in which they were seated in front of a laptop (see Fig. 4 for the study setup). One of the researchers then typed 'start' in the chat window, which started the interaction. After this cue was entered, either the chatbot or the confederate in the human condition started the interaction by asking some introductory questions about the festival and the bands the participants had seen, to increase the depth of the interaction (Berger and Calabrese, 1975). This part of the conversation was scripted, and both the human and the chatbot followed the same script.

FIGURE 4. Setup of the confessional booths.

After these chitchat questions about the festival and bands, participants were asked to confess/tell their secret. The response of the chatbot or human confederate depended on the condition they were in. In the non-empathetic condition, they responded with 'Thank you for sharing your secret. Is there anything else you want to say?' after which the participants were thanked for their participation. In the empathetic condition, the conversation partner (human or chatbot) responded empathetically to the disclosed topic, either automatically (in the chatbot condition, using LIWC) or manually (in the human condition, choosing from several options in a script), and also asked, 'How do you feel after disclosing your secret?' After ending the chat, participants were sent the link to the questionnaire. When the participants finished the questionnaire, their alcohol level was tested with a breath analyzer device. After that, they were debriefed (i.e. they were told about the exact topic of the study) and thanked for their participation.

8.3. Self-report measurement

8.3.1. Fear of judgment

To measure fear of judgment, we used four items of the Fear of Negative Evaluation Scale (Leary, 1983) that were slightly adapted to the situation of this experiment. The items were introduced by ‘During the conversation….’ followed by ‘…I worried what kind of impression I made on her,’ ‘…I worried what she was thinking about me,’ ‘…I worried what she was thinking of me,’ and ‘…I was afraid she was judging me.’ The response categories for each of the items ranged from 1 (completely disagree) to 5 (completely agree). The four items formed a one-dimensional scale (explained variance 86%), with a Cronbach’s Alpha of .94 (M = 2.22, SD = 1.04).
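For readers who wish to verify such reliability estimates, Cronbach's alpha can be computed directly from the item-response matrix. The snippet below is a generic sketch with hypothetical responses, not the analysis code used in this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item responses."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)        # variance of the summed scale score
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Hypothetical responses of five participants to the four fear-of-judgment items (1-5 scale)
responses = np.array([
    [1, 2, 1, 1],
    [4, 4, 5, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
])
print(round(cronbach_alpha(responses), 2))  # highly consistent items yield a high alpha (~.96 here)
```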

8.3.2. Trust in interaction partner

To measure trust, we used four items from the Individualized Trust Scale (ITS) of Wheeless and Grotz (1977). The items were on a five-point semantic differential scale. The items were introduced by ‘My conversation partner was…’ followed by: Unreliable—Reliable, Untrustworthy—Trustworthy, Insincere—Sincere, Malevolent—Benevolent. These four items formed a one-dimensional scale (explained variance 58%) with a Cronbach’s Alpha of .75 (M = 3.51, SD = 0.80).

8.3.3. Perceived anonymity

To measure perceived anonymity a four-item scale was constructed based on Rains (2007), Qian and Scott (2007), and Hite et al. (2014). Participants were asked to indicate their feelings of anonymity during the conversation. The items were: ‘During the conversation I felt I was anonymous,’ ‘During the conversation I felt I was unrecognizable,’ ‘During the conversation I felt I could not be identified,’ and ‘During the conversation I felt I could share more about myself because she did not know me.’ The items loaded on a one-dimensional scale (explained variance 64%), with a Cronbach’s alpha of .81 (M = 3.31, SD = 0.95).

8.3.4. Perceived self-disclosure intimacy

The perceived level of intimacy of the participant’s self-disclosure was measured by four bipolar items based on the work of Rubin and Shenker (1978) and Lin and Utz (2017). Participants were asked to rate the disclosed secret on a five-point scale. The items were ‘Not at all intimate – Very intimate,’ ‘Very impersonal – Very personal,’ ‘Trivial – Important,’ and ‘Not confidential at all – Very confidential.’ The items formed a one-dimensional scale (explained variance 67%), with a Cronbach’s alpha of .84 (M = 3.27, SD = 1.00).

8.3.5. Perceived empathy

Perceived empathy was measured by four items based on Stiff et al. (1988). The items were 'The interaction partner said the right thing to make me feel better,' 'The interaction partner responded appropriately to my feelings and emotions,' 'The interaction partner came across as empathetic,' and 'The interaction partner said the right thing at the right time.' The response categories ranged from 1 (completely disagree) to 5 (completely agree). All items loaded on a one-dimensional scale (explained variance 68%), with a Cronbach's alpha of .84 (M = 2.87, SD = 0.84).

8.3.6. Relief

The measurement of relief was based on a measurement used by Ho et al. (2018), with the addition of one extra item. Thus, the final scale consisted of three items, which were 'I feel more optimistic now that I have confessed my secret,' 'I feel better now that I have confessed my secret,' and 'I feel relieved now that I have confessed my secret' (extra item). The response categories for each of the items ranged from 1 (completely disagree) to 5 (completely agree). The items formed a one-dimensional scale (explained variance 85%), with a Cronbach's alpha of .91 (M = 2.65, SD = 0.95).

8.3.7. Alcohol use

To measure if and how much alcohol participants had consumed, we administered an alcohol test after the experiment. Participants had to blow into a breathalyzer, which measured the alcohol in their breath. Out of 286 participants, 178 had not consumed any alcohol. For those who had, measured alcohol levels ranged from 0.05 to 2.07 per mille.

8.4. Content analysis: self-disclosure intimacy

The conversations in all four conditions were logged and saved, and the confessions were coded for intimacy of self-disclosure by two judges. The average length of the confessions was 32.89 words (SD = 41.65). Both judges received extensive training with a codebook, which was discussed among them and contained examples as illustrations. After receiving these instructions, both judges coded the same 64 confessions (20%). Once intercoder reliability was deemed sufficient, the remaining confessions were divided evenly between the two judges. For self-disclosure and intimacy of self-disclosure, Cohen's kappa was calculated as a measure of intercoder reliability, using the benchmark by Landis and Koch (1977) to determine the strength of agreement.

First, self-disclosure was coded by assigning each confession to either a self-disclosure (1) or no confession (i.e. other) (2). Self-disclosure was operationalized as a confession revealing personal information about the self, telling something about the person, describing the person in some way or referring to the person’s experiences, thoughts or feelings (Antheunis et al., 2012; Tidwell and Walther, 2002). An example of a self-disclosure in the current study is ‘I had a really good date last week’. Confessions that could not be coded as a self-disclosure were coded as ‘other’. These were so-called ‘empty confessions’, such as ‘I do not really have anything to confess’ or ‘I don’t know what to confess’. These ‘confessions’ were excluded from further analyses. Intercoder reliability was perfect for self-disclosure (κ = 1).

Next, the judges coded the degree of intimacy of each disclosure, also known as the depth (Tidwell and Walther, 2002). Altman and Taylor's (1973) classification scheme was used to rate each disclosure as low (1), medium (2) or high (3) in intimacy. This classification scheme consists of three layers. The first layer is the peripheral layer, which is concerned with biographical information such as age, gender, height and other basic information. An example is 'My girlfriend and I are living together'. The second layer is the intermediate layer, which is concerned with opinions, attitudes and values, e.g. 'I really dislike my roommate'. The final layer is the core layer, which consists of personal beliefs, fears, emotions and things people are ashamed of (Antheunis et al., 2012; Tidwell and Walther, 2002). An example is 'I am afraid that I am no longer in love with my boyfriend'. Intercoder reliability for intimacy of self-disclosure was perfect (κ = 1).
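As an aside, intercoder agreement of this kind can be checked with a few lines of code; the sketch below uses scikit-learn's Cohen's kappa implementation on hypothetical codes from two judges and is not the coding script used in this study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical intimacy codes (1 = peripheral, 2 = intermediate, 3 = core) assigned by two judges
judge_a = [1, 2, 3, 2, 1, 3, 2, 2]
judge_b = [1, 2, 3, 2, 1, 3, 2, 2]  # identical codes, so kappa = 1.0 (perfect agreement)

print(cohen_kappa_score(judge_a, judge_b))
```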

9. RESULTS

To test the first two hypotheses and RQ1–RQ2, a mediation analysis was performed using a PROCESS analysis (model 4). All analyses were conducted twice: with the self-report measure of self-disclosure intimacy and with the coded variable of self-disclosure intimacy. We used bootstrapping to test the mediated effects for significance, based on 10 000 bootstrap samples, accompanied by 95% bias-corrected and accelerated confidence intervals (BCa CIs). In the analyses, the categorical condition variable was recoded into a dummy variable (i.e. 0 = chatbot, and 1 = human).
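The analyses were run with the PROCESS macro; as a rough, non-authoritative sketch of what the bootstrapped indirect effect amounts to, the code below estimates the a-path and b-path with OLS and resamples their product. The variable names and data are hypothetical, and a simple percentile interval is used here, whereas PROCESS reports bias-corrected and accelerated intervals.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect_effect(condition, mediator, outcome, n_boot=10_000, seed=1):
    """Percentile bootstrap of the indirect effect (a*b) of condition on outcome via mediator."""
    rng = np.random.default_rng(seed)
    n = len(condition)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample participants with replacement
        a = sm.OLS(mediator[idx], sm.add_constant(condition[idx])).fit().params[1]
        exog = sm.add_constant(np.column_stack([mediator[idx], condition[idx]]))
        b = sm.OLS(outcome[idx], exog).fit().params[1]    # mediator coefficient, controlling for condition
        estimates[i] = a * b                              # indirect effect in this resample
    return np.percentile(estimates, [2.5, 97.5])          # 95% percentile confidence interval

# Hypothetical data: condition (0 = chatbot, 1 = human), perceived anonymity, disclosure intimacy
rng = np.random.default_rng(0)
condition = rng.integers(0, 2, 286).astype(float)
anonymity = 3.3 - 0.2 * condition + rng.normal(0, 0.9, 286)
intimacy = 2.3 + 0.3 * anonymity + rng.normal(0, 0.9, 286)
print(bootstrap_indirect_effect(condition, anonymity, intimacy))
```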

H1 proposed that (i) individuals feel more anonymous when interacting with a chatbot, compared to a human interlocutor, which in turn leads to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor. The results for the self-report data showed that the condition variable did not significantly impact perceived anonymity (b = −0.18, SE = 0.11, P = .114). Next, the analysis revealed a significant effect of perceived anonymity on self-reported self-disclosure intimacy, b = 0.28, SE = 0.06, P < .001. This showed that perceived anonymity enhanced perceived intimate self-disclosure. Furthermore, the analysis revealed that anonymity did not significantly mediate the effect of condition on self-reported self-disclosure intimacy, b = 0.05, SE = 0.03, 95% BCa CI [−.12, .01]. For the coded data, the findings showed that the condition did not significantly impact perceived anonymity (b = −0.18, SE = 0.11, P = .122) or coded self-disclosure intimacy (b = 0.06, SE = 0.06, P = .305). Moreover, anonymity was not a significant mediator either, b = −0.01, SE = 0.01, 95% BCa CI [−.05, .01]. Thus, for self-reported self-disclosure intimacy, H1(i) was rejected and H1(ii) was supported. For the coded data, the entire first hypothesis was rejected. The means are shown in Table 1.

TABLE 1

Means and standard deviations for all variables.

Dependent variable                        Chatbot        Human
Fear of judgment                          2.06 (0.99)    2.44 (1.07)
Anonymity                                 3.38 (0.90)    3.20 (1.00)
Trust                                     3.34 (0.82)    3.76 (0.69)
Self-disclosure intimacy (self-report)    3.24 (1.01)    3.31 (0.98)
Self-disclosure intimacy (coded)          2.04 (0.91)    2.34 (0.82)

Note. Standard deviations appear in parentheses.

The results of the mediation analysis are visualized in Figs 5 and 6.

FIGURE 5. Observed model (part 1; mediation) explaining the effects for self-reported self-disclosure intimacy.

FIGURE 6. Observed model (part 1; mediation) explaining the effects for coded self-disclosure intimacy.

H2 posed that (i) individuals experience less fear of judgment when interacting with a chatbot, compared to a human interlocutor, which in turn leads to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor. The analysis for the self-report data showed that condition significantly impacted fear of judgment, b = 0.38, SE = 0.12, P = .002. People experienced more fear of judgment with a human interlocutor, compared to a chatbot. Fear of judgment did not significantly impact self-disclosure intimacy, b = 0.07, SE = 0.06, P = .194. Moreover, fear of judgment did not significantly mediate the effect of condition on self-disclosure intimacy, b = 0.03, SE = 0.02, 95% BCa CI [−.02, .08]. For the coded data, the findings also showed that the condition significantly impacted fear of judgment, b = 0.38, SE = 0.12, P = .002. Furthermore, fear of judgment did not significantly impact self-disclosure intimacy, b = 0.00, SE = 0.05, P = .971 and was not a significant mediator either, b = 0.00, SE = 0.02, 95% BCa CI [−.04, .04]. Therefore, for both the self-reported and the coded data, hypothesis 2 was only partially supported.

RQ1 asked whether (i) individuals trust a chatbot more than a human interlocutor, and whether this leads to (ii) more intimate self-disclosure to a chatbot compared to a human interlocutor. The results showed that the condition significantly impacted perceived trust for the self-report data (b = 0.42, SE = 0.09, P < .001). Individuals trusted the human interaction partner more than the chatbot. Trust did not significantly impact self-reported self-disclosure intimacy (b = 0.13, SE = 0.08, P = .085) and was not a significant mediator either (b = 0.06, SE = 0.04, 95% BCa CI [−.01, .14]). Furthermore, for the coded data, the condition was also found to significantly impact trust, b = 0.43, SE = 0.09, P < .001. Trust did not significantly impact the coded self-disclosure variable, b = −0.07, SE = 0.05, P = .359 and was not a significant mediator for this variable either, b = −0.03, SE = 0.03, 95% BCa CI [−.10, .03].

To test H3 and H4, a moderation analysis was performed using PROCESS (model 1), in which self-disclosure intimacy was entered as a predictor of relief and perceived empathy was entered as the moderator. The analysis for the self-report data showed that self-disclosure intimacy did not significantly impact relief, b = 0.20, SE = 0.17, P = .233. The interaction effect between self-disclosure intimacy and empathy was not significant either, b = −0.00, SE = 0.06, P = .983. Regarding the coded data, the analysis showed that the coded self-disclosure intimacy variable did not significantly impact relief, b = −0.12, SE = 0.23, P = .587. The interaction effect between self-disclosure intimacy and empathy was not significant either, b = 0.06, SE = 0.08, P = .404. Thus, for both the self-reported perceived self-disclosure and the coded self-disclosure variable, H3 and H4 were not supported. The results are visualized in Figs 7 and 8.

FIGURE 7. Observed model (part 2; moderated mediation) explaining perceived empathy as a moderator in the self-disclosure–relief effect for self-reported self-disclosure intimacy.

FIGURE 8. Observed model (part 2; moderated mediation) explaining perceived empathy as a moderator in the self-disclosure–relief effect for coded self-disclosure intimacy.
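For completeness, a comparable sketch of the moderation test: relief is regressed on self-disclosure intimacy, perceived empathy and their product term, which is the interaction that PROCESS model 1 estimates. The data and variable names below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data on 1-5 scales: self-disclosure intimacy, perceived empathy and relief
rng = np.random.default_rng(0)
intimacy = rng.uniform(1, 5, 286)
empathy = rng.uniform(1, 5, 286)
relief = 2.5 + 0.1 * intimacy + rng.normal(0, 0.9, 286)

# Moderation model: relief ~ intimacy + empathy + intimacy:empathy
X = sm.add_constant(np.column_stack([intimacy, empathy, intimacy * empathy]))
fit = sm.OLS(relief, X).fit()
print(fit.params)    # the last coefficient is the interaction (moderation) effect
print(fit.pvalues)   # a non-significant interaction means no evidence of moderation
```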

Regarding RQ2, we analyzed the direct effect of condition on self-disclosure intimacy. The results showed that this effect was not significant for the self-report data, b = 0.03, SE = 0.12, P = .794. These findings suggest that people do disclose intimate information, but the disclosed information is equally intimate when disclosed to the chatbot, compared to the human interlocutor. However, for the coded data, we did find a significant effect, b = 0.35, SE = 0.11, P = .002. Specifically, the results showed that people disclosed more intimate information to a human interlocutor (M = 2.34; SD = 0.82) compared to a chatbot (M = 2.04; SD = 0.91).

9.1. Control variables

We also controlled our analyses for gender and the age of the participants, as well as alcohol use. Here, we only mention the significant effects. The analysis with the self-report data showed that age significantly impacted trust (RQ1), b = −0.01, SE = 0.01, P = .026. This shows that as age increases, trust in the interaction partner decreases. Alcohol use also significantly impacted trust, b = −0.37, SE = 0.14, P = .009; the higher the participants’ alcohol use, the less they trusted their interaction partner. Furthermore, with the addition of the variable alcohol use, there was a significant mediating effect of condition on intimate self-disclosure via trust (RQ1), b = 0.07, SE = 0.04, 95% BCa CI [.00, .15].

For the coded data, we found similar results. Specifically, we found that age significantly impacted trust (RQ1), b = −0.01, SE = 0.01, P = .028. This shows that as age increased, trust in the interaction partner decreased. Alcohol use also significantly impacted trust, b = −0.36, SE = 0.14, P = .009; the higher the participants' alcohol use, the less they trusted their interaction partner. Furthermore, for the self-report data, the results showed that age significantly impacted intimate self-disclosure (RQ2), b = 0.02, SE = 0.01, P = .027. Specifically, as age increased, people disclosed more intimate information.

10. DISCUSSION

In this study, we examined whether people are willing to disclose intimate information to a chatbot and whether they disclose more intimate information to a chatbot, compared to another human. In line with our first hypothesis, we found that perceived anonymity enhances perceived intimate self-disclosure (H1(ii)). We only found this effect for the self-report data and not for the coded data. Previous research also showed that perceived anonymity stimulates self-disclosure, as people feel more disinhibited (e.g. Antheunis et al., 2007; Joinson, 2001). However, our findings also show that people feel equally anonymous when communicating with a human via CMC as when communicating with a chatbot. This can be explained by the fact that the number of cues was exactly the same in both conditions. The only difference was that the participants knew they were talking to either a human or a bot; the interaction interface was exactly the same.

Furthermore, these findings add to Derlega and Grzelak's (1979) functional theory of self-disclosure and Omarzu's (2000) disclosure decision model, which propose that situational cues activate individual disclosure goals. Specifically, in this study, participants confessed a secret in a private, confessional setting, on a laptop in a text-based conversational interface. These situational cues, which were the same in both the human and chatbot conditions, may have activated specific individual goals (i.e. self-expression, relief of distress) and enhanced self-disclosure through perceived anonymity, irrespective of the conversation partner. Neither the functional theory of self-disclosure nor the disclosure decision model takes into account the underlying mechanisms that may explain how self-disclosure is activated, and our findings show that perceived anonymity may play an important role in this activation process. This may, however, depend on which self-disclosure goal is activated in a particular setting.

Second, in line with our expectations, we found that the participants in our study perceived the chatbot as less judgmental than the human interlocutor, which means they experienced less fear of negative evaluation when making their confession. Although our study confirms that people perceive a chatbot as non-judgmental, this did not enhance intimate self-disclosure. It may be that fear of judgment is only a determinant among people who are socially anxious and more inhibited to self-disclose. Specifically, Kang and Gratch (2010) found that socially anxious people, who experience more fear of judgment, disclose more intimate information about themselves when talking to a virtual human. Thus, it may be that for the sample in the present study, which was a general sample of people who voluntarily participated in the experiment and hence were already willing to tell a secret, fear of judgment was not a significant predictor of self-disclosure intimacy.

Regarding trust, previous research showed conflicting findings. Although there is evidence that trust enhances self-disclosure (e.g., Burgoon and Hale, 1984; Lee and Choi, 2017), there is no consensus in the literature on whether humans trust another human interaction partner more than a chatbot. Our findings showed that individuals trusted the human interaction partner more than the chatbot, for which there are several possible reasons. First, from a philosophical standpoint, technologies cannot be viewed as moral agents and hence as objects of trust, because they do not have free will or intentionality (Solomon and Flores, 2001), even when they act as social actors. Second, there may be privacy and security concerns when talking to a chatbot that inhibit trust (Følstad et al., 2018). When interacting with a chatbot, personal data, including the content of the interactions, are often stored and used to improve the chatbot, which can impede trust, especially with social chatbots, as conversations can be quite personal. Our results underscore the potential negative impact of privacy and security concerns in chatbot communication.

When controlling our analyses for alcohol use, we found that the more alcohol individuals had consumed, the less they trusted their interaction partner. Research confirms that consuming alcohol can make people more disinhibited, which can enhance self-disclosure (e.g., Caudill et al., 1987; Fillmore, 2007; Lyvers et al., 2020). However, since alcohol use was only included as a control variable in the present study, future research should examine further the impact this variable has on self-disclosure intimacy and other relevant variables. Furthermore, with the inclusion of this control variable, we found a positive mediation effect: when talking to another human, people felt more trust, which increased intimate self-disclosure. This can be explained by the level of suspicion; people might be more suspicious toward new technologies (e.g. chatbots) than toward humans.

Finally, based on previous research, we expected that self-disclosure intimacy would enhance positive affect and decrease feelings of stress (Kahn et al., 2001). Disclosing intimate, emotional experiences by writing (or typing) them down can reduce emotional intensity by allowing individuals to reevaluate the experience or emotion, which can provide relief (Lieberman et al., 2007). Specifically, when one openly expresses negative emotions, these emotions dwindle more quickly, which can enhance feelings of relief (e.g. Farber et al., 2004). Our findings do not corroborate previous research; in this study, self-disclosure intimacy did not enhance relief. Furthermore, the effect of self-disclosure intimacy on relief was not contingent upon the perceived empathy of the interaction partner, contrary to what we expected (H4). This may be explained by the fact that the confessions in the present study were overwhelmingly positive; 189 out of the 286 confessions were positive, 82 were negative and 14 were coded as neutral. Previous research shows that especially sharing disclosures that evoke negative emotion relieves stress (Bazarova and Choi, 2014). In contrast, positive disclosures are found to enhance a feeling of connection between two people (Utz, 2015). Since the majority of the disclosures in the present study were positive, this may explain why self-disclosure did not enhance relief.

10.1. Theoretical and practical implications

Our study has several implications for future theory and research. First, it has implications for research on humans' social behavior with chatbots, as we not only investigated the willingness to self-disclose toward a chatbot (a computer) versus a human, but also considered relevant underlying mechanisms in the process of self-disclosure (i.e. anonymity, trust in the interaction partner, fear of judgment). Humans disclose equally intimate information to chatbots and humans, at least according to their own perceptions. This is in line with the CASA paradigm (Nass and Moon, 2000), stating that people can react in a social manner to computers in the same way they do to humans. The underlying processes, however, are not straightforward, nor identical to the underlying mechanisms that play a role in self-disclosure to humans. We find that an important feature of human–chatbot communication is that humans experience less fear of judgment than in interactions with another human. However, humans trust a chatbot less than a human interaction partner. Future research should further investigate whether this is because of the lack of moral agency, because of privacy concerns, or for other reasons.

Second, this study extends Derlega and Grzelak's (1979) functional theory of self-disclosure and Omarzu's (2000) disclosure decision model, which propose that self-disclosure is a strategic behavior people use to achieve personal goals. Specifically, the theory posits that the default goal most people have for self-disclosure is social approval: people want to be liked by others. As a result, the content of people's disclosures is generally socially acceptable and approved by the recipient (Omarzu, 2000). The theory has been criticized for not accounting for the underlying mechanisms that may explain the activation of those personal goals. The present research not only tests this theory in a unique, confessional setting, where other goals besides social approval are likely salient (e.g. relief of distress), but also shows that perceived anonymity may play an important role in explaining why people self-disclose in this particular setting. Specifically, previous research shows that when people feel anonymous, this reduces identifiability or accountability concerns (Scott, 1998) and results in feelings of disinhibition (Clark-Gordon et al., 2019). In line with previous research and the findings of the current study, the functional theory of self-disclosure can be extended to include perceived anonymity as an underlying mechanism in the activation of (intimate) self-disclosure.

This study also has implications for practice, in particular regarding the effectiveness of social chatbots in improving well-being. It showed some first potential for using chatbot applications to improve mental well-being, which could support the mental healthcare sector, which currently struggles with understaffing, long waiting lists and increasing costs. This also partly explains the popularity of social chatbots like Woebot and Wysa, which can help people who are anxious and/or depressed. The results of our study shed some light on the potential of these chatbots as a partial solution to shortages in the mental healthcare sector: people are willing to disclose intimate information to a chatbot, which is a first requirement for successful therapy. Another important advantage is that people experience less fear of judgment with a chatbot, which matters when sharing intimate topics or topics people feel ashamed of. These aspects, combined with other advantages such as 24/7 availability and low costs, show some potential for implementing such interventions in healthcare. However, to implement a successful chatbot intervention, more requirements must be met, among which empathy is crucial: an empathetic response from the therapist can enhance the patient's well-being. Our findings showed no moderating effect of empathy on the relationship between self-disclosure and relief, but perceived empathy was still highest in the human condition, which is in line with Meng and Dai (2021). Future research should therefore develop and test chatbots that are able to respond in an empathetic and adequate manner.

10.2. Limitations and suggestions for future research

Although our study has taken some first steps in investigating people's willingness to disclose intimate information to a social chatbot, we recognize some limitations. First, the contrast between our conditions (talking to a chatbot vs. a human via text-based CMC) might not be large enough to reveal clear differences in the underlying mechanisms that elicit self-disclosure. For example, we did not find a difference in perceived anonymity between the conditions, even though communicating with another human via text-based CMC is known to enhance perceptions of anonymity compared to face-to-face communication (see Clark-Gordon et al., 2019). To establish whether the anonymity advantage of chatbot communication exists, future research should compare chatbot communication with face-to-face communication.

Second, we measured the impact of self-disclosure on emotional state via relief, by means of a confession task in the experiment. We reasoned that confessions are often secrets that weigh heavily on the discloser's shoulders, so that confessing them would enhance relief. However, many of the secrets shared were positive in nature, which usually does not evoke relief. Because of this focus on relief rather than also on other positive emotional effects, we cannot be conclusive about that part of our study. Future research should investigate this further in several regards: not only should a broader measure of emotional state be included, but research should also examine the capability of chatbots to respond in an appropriately empathetic manner.

Finally, it should be noted that the study was administered through a laptop. This contrasts with most common social chatbot applications (e.g. Woebot, Tess, Wysa and Replika), which are predominantly developed for and accessed through a smartphone. While the effect of the medium used to access a chatbot is currently understudied, some evidence suggests that the medium can have a considerable impact on constructs such as user experience and behavioral intention, in favor of smartphones over other devices (Persons et al., 2021). These results suggest that the levels of disclosure found in this study may be even higher when a smartphone is used.

Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author. Requests to access the datasets should be directed to Emmelyn Croes, [email protected].

References

Abdul-Kader, S. A. and Woods, J. (2015) Survey on chatbot design techniques in speech conversation systems. Int. J. Adv. Comput. Sci. Appl., 6, 72–80.

Afifi, W. A. and Guerrero, L. K. (2000) Motivations underlying topic avoidance in close relationships. In Petronio, S. (ed.), Balancing the Secrets of Private Disclosures, pp. 165–180. Erlbaum, Mahwah, NJ.

Altman, I. and Taylor, D. A. (1973) Social Penetration: The Development of Interpersonal Relationships. Holt, Rinehart and Winston, New York.

Antheunis, M. L., Valkenburg, P. M. and Peter, J. (2007) Computer-mediated communication and interpersonal attraction: an experimental test of two explanatory hypotheses. CyberPsychol. Behav., 10, 831–836. https://doi.org/10.1089/cpb.2007.9945.

Antheunis, M. L., Schouten, A. P., Valkenburg, P. M. and Peter, J. (2012) Interactive uncertainty reduction strategies and verbal affection in computer-mediated communication. Commun. Res., 39, 757–780. https://doi.org/10.1177/0093650211410420.

Bazarova, N. N. and Choi, Y. H. (2014) Self-disclosure in social media: extending the functional approach to disclosure motivations and characteristics on social network sites. J. Commun., 64, 635–657. https://doi.org/10.1111/jcom.12106.

Bell, S., Wood, C. and Sarkar, A. (2019) Perceptions of chatbots in therapy. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1–6. ACM, USA.

Berger, C. R. and Calabrese, R. J. (1975) Some explorations in initial interaction and beyond: toward a developmental theory of interpersonal communication. Hum. Commun. Res., 1, 99–112. https://doi.org/10.1111/j.1468-2958.1975.tb00258.x.

Bickmore, T. W. and Picard, R. W. (2005) Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput. Hum. Interact., 12, 293–327. https://doi.org/10.1145/1067860.1067867.

Burgoon, J. K. and Hale, J. L. (1984) The fundamental topoi of relational communication. Commun. Monogr., 51, 193–214. https://doi.org/10.1080/03637758409390195.

Caudill, B. D., Wilson, G. T. and Abrams, D. B. (1987) Alcohol and self-disclosure: analyses of interpersonal behavior in male and female social drinkers. J. Stud. Alcohol, 48, 401–409. https://doi.org/10.15288/jsa.1987.48.401.

Clark-Gordon, C. V., Bowman, N. D., Goodboy, A. K. and Wright, A. (2019) Anonymity and online self-disclosure: a meta-analysis. Commun. Rep., 32, 98–111. https://doi.org/10.1080/08934215.2019.1607516.

Colognori, D., Esseling, P., Stewart, C., Reiss, P., Lu, F., Case, B. and Warner, C. M. (2012) Self-disclosure and mental health service use in socially anxious adolescents. Sch. Ment. Heal., 4, 219–230. https://doi.org/10.1007/s12310-012-9082-0.

Corritore, C. L., Kracher, B. and Wiedenbeck, S. (2003) On-line trust: concepts, evolving themes, a model. Int. J. Hum. Comput. Stud., 58, 737–758. https://doi.org/10.1016/S1071-5819(03)00041-7.

Croes, E. A. and Antheunis, M. L. (2021) Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. J. Soc. Pers. Relat., 38, 279–300. https://doi.org/10.1177/0265407520959463.

D'alfonso, S., Santesteban-Echarri, O., Rice, S., Wadley, G. and Alvarez-Jimenez, M. (2017) Artificial intelligence-assisted online social therapy for youth mental health. Front. Psychol., 8, 796. https://doi.org/10.3389/fpsyg.2017.00796.

Derlega, V. J. and Berg, J. H. (eds) (1987) Self-Disclosure: Theory, Research, and Therapy. Springer US, Boston, MA.

Derlega, V. J. and Grzelak, J. (1979) Appropriateness of self-disclosure. In Chelune, G. J. (ed.), Self-Disclosure: Origins, Patterns, and Implications of Openness in Interpersonal Relationships, pp. 151–176. Jossey-Bass.

De Smedt, T. and Daelemans, W. (2012) Pattern for Python. J. Mach. Learn. Res., 13, 2063–2067.

Dindia, K. and Allen, M. (1992) Sex differences in self-disclosure: a meta-analysis. Psychol. Bull., 112, 106–124. https://doi.org/10.1037/0033-2909.112.1.106.

Eisenberg, D., Downs, M. F., Golberstein, E. and Zivin, K. (2009) Stigma and help seeking for mental health among college students. Med. Care Res. Rev., 66, 522–541. https://doi.org/10.1177/1077558709335173.

Farber, B. A. (2006) Self-Disclosure in Psychotherapy. The Guilford Press, New York.

Farber, B. A., Berano, K. C. and Capobianco, J. A. (2004) Clients' perceptions of the process and consequences of self-disclosure in psychotherapy. J. Couns. Psychol., 51, 340–346. https://doi.org/10.1037/0022-0167.51.3.340.

Fillmore, M. T. (2007) Acute alcohol-induced impairment of cognitive functions: past and present findings. Int. J. Disabil. Hum. Dev., 6, 115–126. https://doi.org/10.1515/IJDHD.2007.6.2.115.

Fitzpatrick, K. K., Darcy, A. and Vierhile, M. (2017) Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health, 4, e19. https://doi.org/10.2196/mental.7785.

Følstad, A. and Brandtzaeg, P. B. (2020) Users' experiences with chatbots: findings from a questionnaire study. Qual. User Exp., 5, 3. https://doi.org/10.1007/s41233-020-00033-2.

Følstad, A., Nordheim, C. B. and Bjørkli, C. A. (2018) What makes users trust a chatbot for customer service? An exploratory interview study. In Bodrunova, S. (ed.), Internet Science. INSCI 2018. Lecture Notes in Computer Science: Vol. 11193. Springer, Cham. https://doi.org/10.1007/978-3-030-01437-7_16.

Freud, S. (1935) A General Introduction to Psychoanalysis. Washington Square Press, New York.

Greist, J. H., Laughren, T. P., Gustafson, D. H., Stauss, F. F., Rowse, G. L. and Chiles, J. A. (1973) A computer interview for suicide-risk prediction. Am. J. Psychiatry, 130, 1327–1332. https://doi.org/10.1176/ajp.130.12.1327.

Hite, D. M., Voelker, T. and Robertson, A. (2014) Measuring perceived anonymity: the development of a context independent instrument. J. Methods Meas. Soc. Sci., 5, 22–39. https://doi.org/10.2458/jmm.v5i1.18305.

Ho, A., Hancock, J. and Miner, A. S. (2018) Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J. Commun., 68, 712–733. https://doi.org/10.1093/joc/jqy026.

Joinson, A. N. (2001) Self-disclosure in computer-mediated communication: the role of self-awareness and visual anonymity. Eur. J. Soc. Psychol., 31, 177–192. https://doi.org/10.1002/ejsp.36.

Kahn, J. H., Achter, J. A. and Shambaugh, E. J. (2001) Client distress disclosure, characteristics at intake, and outcome in brief counseling. J. Couns. Psychol., 48, 203–211. https://doi.org/10.1037/0022-0167.48.2.203.

Kang, S. H. and Gratch, J. (2010) Virtual humans elicit socially anxious interactants' verbal self-disclosure. Comput. Anim. Virt. Worlds, 21, 473–482. https://doi.org/10.1002/cav.345.

Klein, J., Moon, Y. and Picard, R. W. (2002) This computer responds to user frustration: theory, design, and results. Interact. Comput., 14, 119–140. https://doi.org/10.1016/S0953-5438(01)00053-4.

Lambert, M. J. and Barley, D. E. (2001) Research summary on the therapeutic relationship and psychotherapy outcome. Psychother. Theory Res. Pract. Train., 38, 357–361. https://doi.org/10.1037/0033-3204.38.4.357.

Landis, J. R. and Koch, G. G. (1977) An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33, 363–374. https://doi.org/10.2307/2529786.

Lane, J. D. and Wegner, D. M. (1995) The cognitive consequences of secrecy. J. Pers. Soc. Psychol., 69, 237–253. https://doi.org/10.1037/0022-3514.69.2.237.

Lappeman, J., Marlie, S., Johnson, T. and Poggenpoel, S. (2023) Trust and digital privacy: willingness to disclose personal information to banking chatbot services. J. Financ. Serv. Mark., 28, 337–357. https://doi.org/10.1057/s41264-022-00154-z.

Leary, M. R. (1983) A brief version of the fear of negative evaluation scale. Personal. Soc. Psychol. Bull., 9, 371–375. https://doi.org/10.1177/0146167283093007.

Lee, S. and Choi, J. (2017) Enhancing user experience with conversational agent for movie recommendation: effects of self-disclosure and reciprocity. Int. J. Hum. Comput. Stud., 103, 95–105. https://doi.org/10.1016/j.ijhcs.2017.02.005.

Lieberman, M. D., Eisenberger, N. I., Crockett, M. J., Tom, S. M., Pfeifer, J. H. and Way, B. M. (2007) Putting feelings into words: affect labeling disrupts amygdala activity in response to affective stimuli. Psychol. Sci., 18, 421–428. https://doi.org/10.1111/j.1467-9280.2007.01916.x.

Lin, R. and Utz, S. (2017) Self-disclosure on SNS: do disclosure intimacy and narrativity influence interpersonal closeness and social attraction? Comput. Hum. Behav., 70, 426–436. https://doi.org/10.1016/j.chb.2017.01.012.

Link, B. G., Mirotznik, J. and Cullen, F. T. (1991) The effectiveness of stigma coping orientations: can negative consequences of mental illness labeling be avoided? J. Health Soc. Behav., 32, 302–320. https://doi.org/10.2307/2136810.

Lucas, G. M., Gratch, J., King, A. and Morency, L. P. (2014) It's only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav., 37, 94–100. https://doi.org/10.1016/j.chb.2014.04.043.

Lucas, G. M., Rizzo, A., Gratch, J., Scherer, S. J. and Morency, L. P. (2017) Reporting mental health symptoms: breaking down barriers to care with virtual human interviewers. Front. Robot. AI, 4, 51. https://doi.org/10.3389/frobt.2017.00051.

Lyvers, M., Cutinho, D. and Thorberg, F. A. (2020) Alexithymia, impulsivity, disordered social media use, mood and alcohol use in relation to Facebook self-disclosure. Comput. Hum. Behav., 103, 174–180. https://doi.org/10.1016/j.chb.2019.09.004.

Meng, J. and Dai, Y. (2021) Emotional support from AI chatbots: should a supportive partner self-disclose or not? J. Comput. Mediat. Commun., 26, 207–222. https://doi.org/10.1093/jcmc/zmab005.

Mou, Y. and Xu, K. (2017) The media inequality: comparing the initial human-human and human-AI social interactions. Comput. Hum. Behav., 72, 432–440. https://doi.org/10.1016/j.chb.2017.02.067.

Nass, C. and Moon, Y. (2000) Machines and mindlessness: social responses to computers. J. Soc. Issues, 56, 81–103. https://doi.org/10.1111/0022-4537.00153.

Nass, C., Moon, Y., Fogg, B. J., Reeves, B. and Dryer, D. C. (1995) Can computer personalities be human personalities? Int. J. Hum. Comput. Stud., 43, 223–239. https://doi.org/10.1006/ijhc.1995.1042.

Nass, C., Fogg, B. J. and Moon, Y. (1996) Can computers be teammates? Int. J. Hum. Comput. Stud., 45, 669–678. https://doi.org/10.1006/ijhc.1996.0073.

Omarzu, J. (2000) A disclosure decision model: determining how and when individuals will self-disclose. Personal. Soc. Psychol. Rev., 4, 174–185. https://doi.org/10.1207/S15327957PSPR0402_05.

Pennebaker, J. W. (1993) Putting stress into words: health, linguistic, and therapeutic implications. Behav. Res. Ther., 31, 539–548. https://doi.org/10.1016/0005-7967(93)90105-4.

Pennebaker, J. W. (1995) Emotion, Disclosure, & Health. American Psychological Association, Washington, DC.

Pennebaker, J. W., Boyd, R. L., Jordan, K. and Blackburn, K. (2015) The Development and Psychometric Properties of LIWC2015. University of Texas at Austin, Austin, TX.

Pennebaker, J. W. and Chung, C. K. (2007) Expressive writing, emotional upheavals, and health. In Friedman, H. S. and Silver, R. C. (eds), Foundations of Health Psychology, pp. 263–284. Oxford University Press, New York, NY.

Persons, B., Jain, P., Chagnon, C. and Djamasbi, S. (2021) Designing the Empathetic Research IoT Network (ERIN) chatbot for mental health resources. In Nah, F. F.-H. and Siau, K. (eds), HCI in Business, Government and Organizations. Lecture Notes in Computer Science: Vol. 12783, pp. 619–629. Springer, Cham. https://doi.org/10.1007/978-3-030-77750-0_41.

Qian, H. and Scott, C. R. (2007) Anonymity and self-disclosure on weblogs. J. Comput.-Mediat. Commun., 12, 1428–1451. https://doi.org/10.1111/j.1083-6101.2007.00380.x.

Rains, S. A. (2007) The impact of anonymity on perceptions of source credibility and influence in computer-mediated group communication: a test of two competing hypotheses. Commun. Res., 34, 100–125. https://doi.org/10.1177/0093650206296084.

Ravichander, A. and Black, A. W. (2018) An empirical study of self-disclosure in spoken dialogue systems. In Komatani, K., Litman, D., Yu, K., Papangelis, A., Cavedon, L. and Nakano, M. (eds), Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pp. 253–263. Association for Computational Linguistics, Melbourne, Australia.

Reeves, B. and Nass, C. I. (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, New York, NY.

Reis, H. T. and Shaver, P. (1988) Intimacy as an interpersonal process. In Duck, S. W. (ed.), Handbook of Personal Relationships, pp. 367–389. John Wiley and Sons, Chichester, UK.

Reis, H. T., Lemay, E. P., Jr. and Finkenauer, C. (2017) Toward understanding understanding: the importance of feeling understood in relationships. Soc. Personal. Psychol. Compass, 11, e12308. https://doi.org/10.1111/spc3.12308.

Rubin, Z. and Shenker, S. (1978) Friendship, proximity, and self-disclosure. J. Pers., 46, 1–22. https://doi.org/10.1111/j.1467-6494.1978.tb00599.x.

Savin-Baden, M., Tombs, G., Burden, D. and Wood, C. (2013) 'It's almost like talking to a person': student disclosure to pedagogical agents in sensitive settings. Int. J. Mobile Blended Learn., 5, 78–93. https://doi.org/10.4018/jmbl.2013040105.

Scott, C. R. (1998) To reveal or not to reveal: a theoretical model of anonymous communication. Commun. Theory, 8, 381–407. https://doi.org/10.1111/j.1468-2885.1998.tb00226.x.

Shenk, C. E. and Fruzzetti, A. E. (2011) The impact of validating and invalidating responses on emotional reactivity. J. Soc. Clin. Psychol., 30, 163–183. https://doi.org/10.1521/jscp.2011.30.2.163.

Skjuve, M. B. and Brandtzæg, P. B. (2018) Chatbots as a new user interface for providing health information to young people. In Andersson, Y., Dahlquist, U. and Ohlsson, J. (eds), Youth and News in a Digital Media Environment – Nordic-Baltic Perspectives, pp. 59–66. Sintef, Norway.

Sloan, D. M. (2010) Self-disclosure and psychological well-being. In Maddux, J. E. and Tangney, J. P. (eds), Social Psychological Foundations of Clinical Psychology, pp. 212–225. The Guilford Press, New York, USA.

Solomon, R. C. and Flores, F. (2001) Building Trust in Business, Politics, Relationships, and Life. Oxford University Press, New York.

Stiff, J. B., Dillard, J. P., Somera, L., Kim, H. and Sleight, C. (1988) Empathy, communication, and prosocial behavior. Commun. Monogr., 55, 198–213. https://doi.org/10.1080/03637758809376166.

Tidwell, L. C. and Walther, J. B. (2002) Computer-mediated communication effects on disclosure, impressions, and interpersonal evaluations: getting to know one another a bit at a time. Hum. Commun. Res., 28, 317–348. https://doi.org/10.1111/j.1468-2958.2002.tb00811.x.

Tillmann-Healy, L. M. (2003) Friendship as method. Qual. Inq., 9, 729–749. https://doi.org/10.1177/1077800403254894.

Utz, S. (2015) The function of self-disclosure on social network sites: not only intimate, but also positive and entertaining self-disclosures increase the feeling of connection. Comput. Hum. Behav., 45, 1–10. https://doi.org/10.1016/j.chb.2014.11.076.

Vaidyam, A. N., Linggonegoro, D. and Torous, J. (2021) Changes to the psychiatric chatbot landscape: a systematic review of conversational agents in serious mental illness. Can. J. Psychiatr., 66, 339–348. https://doi.org/10.1177/0706743720966429.

Vogel, D. L. and Wester, S. R. (2003) To seek help or not to seek help: the risks of self-disclosure. J. Couns. Psychol., 50, 351–361. https://doi.org/10.1037/0022-0167.50.3.351.

Weisband, S. and Kiesler, S. (1996) Self-disclosure on computer forms: meta-analysis and implications. In Tauber, M. J. (ed.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3–10. Association for Computing Machinery, New York, USA.

Wheeless, L. R. and Grotz, J. (1977) The measurement of trust and its relationship to self-disclosure. Hum. Commun. Res., 3, 250–257. https://doi.org/10.1111/j.1468-2958.1977.tb00523.x.

A. Appendix A

Questions asked in all conditions.

Good morning/good afternoon. I am a female priest, and I am on the other side of the [music festival name] site. My name is Maria. What is your name?

So, <name>, can you tell me where you are from?

Have you been to [music festival name] before?

How are you liking [music festival name] this year?

Which artists have you seen at [music festival name] this year?

Hey <name>, I enjoyed getting to know you better. But, of course, you came here to share a secret. Do you have a secret you’d like to share with me?

Would you like to tell me more about how you feel about the secret?*

Is there anything else you want to share about your secret?

Thank you for sharing your secret! That’s it. Glad you wanted to participate in our study. You will soon receive a questionnaire from one of the researchers.

*Note. This question was only asked in the ‘high perceived understanding’ conditions.
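
To make the fixed interaction protocol concrete, below is a minimal, purely illustrative Python sketch of a scripted, text-based agent that walks through the questions above in order. The sketch, including names such as run_confession_script, FESTIVAL and HIGH_UNDERSTANDING, is our own illustration and does not describe the software actually used in the experiment.

```python
# Minimal sketch of a scripted agent asking the fixed questions from Appendix A.
# Purely illustrative: function and variable names are hypothetical and do not
# reflect the implementation used in the study.

FESTIVAL = "[music festival name]"  # placeholder kept exactly as in the appendix

def run_confession_script(high_understanding=False):
    """Walk through the scripted questions in order and collect the typed answers."""
    answers = []

    name = input(
        f"Good morning/good afternoon. I am a female priest, and I am on the "
        f"other side of the {FESTIVAL} site. My name is Maria. What is your name? "
    )
    answers.append(("name", name))

    questions = [
        f"So, {name}, can you tell me where you are from?",
        f"Have you been to {FESTIVAL} before?",
        f"How are you liking {FESTIVAL} this year?",
        f"Which artists have you seen at {FESTIVAL} this year?",
        f"Hey {name}, I enjoyed getting to know you better. But, of course, you "
        f"came here to share a secret. Do you have a secret you'd like to share with me?",
    ]
    if high_understanding:
        # Follow-up asked only in the 'high perceived understanding' conditions.
        questions.append("Would you like to tell me more about how you feel about the secret?")
    questions.append("Is there anything else you want to share about your secret?")

    for question in questions:
        answers.append((question, input(question + " ")))

    print(
        "Thank you for sharing your secret! That's it. Glad you wanted to "
        "participate in our study. You will soon receive a questionnaire "
        "from one of the researchers."
    )
    return answers

if __name__ == "__main__":
    run_confession_script(high_understanding=True)
```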

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]