Abstract

Storytelling is a human universal. The ubiquity of stories and the rapid development of Artificial Intelligence (AI) pose important questions: can AI such as ChatGPT tell engaging and persuasive stories? If so, what makes a narrative engaging and persuasive? Three pre-registered experiments compared human-generated narratives from existing research with ChatGPT-generated versions produced from the descriptions and materials of those studies. Labeling AI as the narrative source led to lower transportation, higher counterarguing, and lower story-consistent beliefs. However, AI-generated narratives elicited lower (Studies 1 and 3) or similar (Study 2) levels of counterarguing compared with the human-generated versions. Readers showed lower (Study 2) or similar (Studies 1 and 3) levels of transportation when reading the AI-generated rather than the human-generated stories. We suggest that the AI model's linguistic competence and logical coherence contribute to its stories' verisimilitude, whereas its lack of lived experience and creativity may limit its storytelling ability.

This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/pages/standard-publication-reuse-rights).