Abstract

Two decades ago, Gaston and O'Neill (2004) deliberated on why automated species identification had not become widely employed. We no longer have to wonder: This AI-based technology is here, embedded in numerous web and mobile apps used by large audiences interested in nature. Now that automated species identification tools are available, popular, and efficient, it is time to look at how the apps are developed, what they promise, and how users appraise them. Delving into the landscape of automated species identification apps, we found that free and paid apps differ fundamentally in presentation, experience, and the use of biodiversity and personal data. However, these two business models are deeply intertwined. Going forward, although big tech companies will likely come to dominate the landscape, citizen science programs will likely continue to have their own identification tools because of their specific purpose and their ability to create a strong sense of belonging among naturalist communities.

Digital technologies are now part of everyday life. Among these technologies, the smartphone is most prevalent, with the number of connected devices having increased from 3.6 billion in 2016 to over 7 billion by 2024 (Taylor 2023). Smartphones have profoundly affected how people create and share content on social networks, foster social relationships, seek information, entertain themselves, and learn (Sarker 2019). As a result, the smartphone, with its myriad of applications (software programs designed for mobile use, hereafter referred to as apps) is fundamentally changing how people interact with each other and their environment, including nature.

Smartphones and their apps enable a wide variety of technology-mediated experiences of nature (Truong and Clayton 2020). Among them, there are indirect experiences, such as viewing images and videos of nature on social networks, video and webcam platforms, and immersion in virtual environments (Truong 2024). A growing body of recent research suggests that these indirect experiences of nature may influence numerous aspects of people's lives (Litleskare et al. 2020, Silk et al. 2021, Frost et al. 2022). They can substitute—at least partly—for direct contact with nature, a form of engagement that has become difficult for increasingly urban and sedentary populations (Truong et al. 2018, Arts et al. 2021b). However, technology-mediated experiences can also improve people's environmental knowledge (Crowley et al. 2021), raise awareness of environmental issues, and encourage individual action (Boissat et al. 2021). Smartphones and their apps can also be involved in more direct, embodied interactions with natural environments, such as through supporting field-based natural history (Tewksbury et al. 2014). Species identification keys and field guides transposed onto smartphones create an information medium comparable to or better than their analogue predecessors (Farnsworth et al. 2013, Sharma et al. 2019) and with unprecedented immediacy and availability (Finger et al. 2022). Never before has knowledge about nature, formerly held by naturalists, been so readily accessible. These and parallel digital technology developments have enabled the emergence of large and structured communities of observers who take part in biodiversity monitoring and provide data to online citizen science platforms such as eBird, iNaturalist, and Pl@ntNet. These platforms facilitate rapid information sharing and exchange between participants, allowing them to submit and manage their observations using globally accessible databases, as well as identify and validate other participants’ data (Unger et al. 2020). All these data are of great importance to scientists, who could not have collected such a large number of observations, identifications, and records in so many locations across the world. Their importance is such that, currently, more than 50% of species occurrence records held by the Global Biodiversity Information Facility (GBIF)—the world's largest repository of biodiversity data—are citizen science observations (Waller 2019).

Given the rapid influx of AI-generated data sets, the future role of citizen science as the primary source of species observations is uncertain (McClure et al. 2020, Klasen et al. 2022). Enormous investment in and progress on artificial intelligence, including the emergence of so-called convolutional neural networks (CNNs), have enabled sustained growth in technology and computing science research on automated species recognition—an approach that arose in the late 1960s and has been refined ever since (figure 1a, 1b; Lecun et al. 2015). Plant species suited the development of this approach particularly well because of their sedentary nature and the associated availability of ample photographic material, necessary for the development of highly data-intensive algorithms (figure 1c). Insects and birds have attracted research interest too, although less so and possibly for different reasons: Birds are the most popular species group in society (67% of the data shown on GBIF.org are bird records), whereas insects would potentially benefit most from automated species recognition, given the stark decline in the number of human taxonomic experts for a species group that encompasses most of the world's biodiversity (Hughes et al. 2021, Mandeville et al. 2021).

Figure 1.

Automated species identification (ASI): research attention (a–c) and app development (d–e). (a) Early rise and sustained growth in research attention to ASI, measured as the number of scientific publications per year (extracted from Scopus). (b) Association between research attention to ASI and AI, the wider computing science and technology landscape it is part of. (c) Differential onset and growth over time in ASI research attention paid to the main species groups focused on (plants, insects and birds, mammals). (d) Growth in the number of ASI app releases over time and by operating system (Android, Apple, both; extracted from Google Playstore and Apple's App Store in September 2023). The line shows the research attention to ASI in the respective years from panel (a), demonstrating both parallel development and a considerable (more than 20 years) delay between the acceleration of ASI research effort and that of app releases. (e) Stark differences in the total number of apps released with different foci and whether or not a fee has to be paid. The top seven categories concern natural history interests (i.e., different species groups; n = 206 apps), and the latter three concern wider adjacent interests.

As the research gained traction, it attracted interest from disciplines beyond computing science and technology, including applied sciences that concentrate on species identification and classification. Computer scientists working on image recognition became interested in species identification because large data sets were available to work from and publish on; through collaboration with naturalists, image recognition became species identification. Automated species identification gained attention in the 2000s as a challenging but highly promising solution for the development of new research activities in taxonomy, biology, and ecology (Gaston and O'Neill 2004). What followed in wider society was the rise in popularity of the iPhone (launched in 2007), triggering a worldwide expansion of smartphone use that boosted the number, diversity, popularity, and data-gathering capacity of citizen science programs (Requier et al. 2020). CNNs for automated identification were first reported in 2012 and resulted in a dramatic increase in identification accuracy (Krizhevsky et al. 2017). CNNs represented a significant improvement on earlier automated identification algorithms (Unger et al. 2016), because the images required little preprocessing and, therefore, less human input or intervention. With the advent of these species identification tools, it was suggested that artificial intelligence could complement or even replace human identification expertise (Wäldchen and Mäder 2018a), mitigating the taxonomic gap (Bonnet et al. 2016) created by the lack of taxonomic experts. As a result, the large data sets assembled by citizen scientists could now be processed to further develop automated species identification (Trouille et al. 2019). Citizen science programs such as Pl@ntNet and iNaturalist pioneered automated identification of user-uploaded photos and records in their operations. Since 2017, iNaturalist competitions hosted on the online platform Kaggle during the annual Fine-Grained Visual Categorization workshops have showcased the potential of these technologies for species identification, encouraging further development. In turn, these events have helped iNaturalist build better AI models.

Steady research interest in automated species identification has allowed for the quality, diversity and abundance of these tools to continue to increase (Besson et al. 2022, Borowiec et al. 2022). Large-scale testing shows strong performances in automated identification of plants (Bonnet et al. 2018, Goëau et al. 2018), birds (Castro et al. 2019, Ruff et al. 2020), mammals (Norouzzadeh et al. 2018, Shiu et al. 2020) and fungi (Wang et al. 2020). Apart from bumblebees (Spiesman et al. 2021) and butterflies (Fathimathul et al. 2022, Xi et al. 2022)—groups with relatively few species and large volumes of photographic material available—insects remain a major challenge because of their great diversity, often cryptic determination characteristics and limited photographic resolution and documentation (Høye et al. 2022).

We have reached the point where automated species identification is visible to wider society, outside of the naturalist realm. As such, these tools can have a potentially large impact on environmental learning (Uzunboylu et al. 2009) and on people's relationship with nature in everyday life (box 1). With this in mind, we ask the following questions: What does the automated species identification apps landscape look like? What experience and knowledge do they promote? What do users say about them? What happens to observational and personal data? And finally, what will the future hold?

Box 1.
Identification of a species with an app.

Automated species identification tools are available online as parts of websites, as web apps, and as smartphone apps. Given that species identification is a process that typically starts outdoors, smartphone apps best align with the aspirations of most users. In practice, the use of these apps is as follows: The user takes a picture or makes a sound recording of the species they want to identify; the automated identification tool suggests an identification or a list of probable species; and the user has access in the app to information about the identified species or to external resources to learn more. Many apps have additional functionalities that flow from automated species identification, such as personal galleries to keep track of one's identifications, opportunities for validating other users' identifications, data mining and visualization, and more social functions such as sharing imagery, discussion forums, and chats.


What is out there?

Apps using automated identification tools rely on deep learning in two main approaches: computer vision and acoustic identification. Computer vision is an interdisciplinary field that develops software mimicking the capability of human vision to interpret visual materials. It uses machine-learning techniques and algorithms for object recognition, distinction, and classification by size, shape, and color. It also detects and interprets patterns in digital visual data, such as photos and videos (Spiesman et al. 2021). In the case of automated species identification apps, computer vision systems analyze pictures taken by users. They then compare visual (pixel-based) patterns present in these pictures with a database the system is trained on and to which the app is connected. Acoustic identification, on the other hand, is a technique based on pattern recognition and signal analysis, in which the acoustic data picked up by the app (or by autonomous sensors; see, e.g., Wägele et al. 2022) are turned into a visual pattern representation called a spectrogram, which captures the amplitude, duration, and frequency of the recorded sounds (Ruff et al. 2020). From this, as for computer vision–based apps, the data are processed and compared with a preexisting database to provide one or more possible identifications (Stowell et al. 2019).
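To make this pipeline concrete, the minimal Python sketch below turns a sound clip into a spectrogram-derived fingerprint and ranks candidate species by similarity to stored reference fingerprints. It illustrates the principle described above rather than any particular app's implementation: operational systems rely on deep neural networks trained on large labeled corpora, and the function names, reference set, and synthetic example signal used here are hypothetical.

    # Illustrative sketch of acoustic identification: audio -> spectrogram -> comparison
    # with reference fingerprints. Not the implementation of any existing app.
    import numpy as np
    from scipy.signal import spectrogram

    def spectrogram_features(audio, sample_rate, n_bins=64):
        """Convert a mono audio signal into a fixed-length, log-scaled spectral fingerprint."""
        freqs, times, sxx = spectrogram(audio, fs=sample_rate, nperseg=1024, noverlap=512)
        log_sxx = np.log1p(sxx)              # compress dynamic range
        profile = log_sxx.mean(axis=1)       # average energy per frequency band over time
        idx = np.linspace(0, len(profile) - 1, n_bins).astype(int)
        features = profile[idx]              # fixed number of bins, regardless of clip length
        return features / (np.linalg.norm(features) + 1e-9)

    def identify(audio, sample_rate, references):
        """Rank candidate species by cosine similarity to stored reference fingerprints."""
        query = spectrogram_features(audio, sample_rate)
        scores = {species: float(query @ ref) for species, ref in references.items()}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    # Synthetic signal standing in for a field recording (hypothetical species).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 2 * 22050)
    example_call = np.sin(2 * np.pi * 4000 * t) + 0.1 * rng.standard_normal(t.size)
    references = {"hypothetical species A": spectrogram_features(example_call, 22050)}
    print(identify(example_call, 22050, references))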

Numerous automated species identification apps can be found online, free or for a fee. This proprietary landscape is both rich and dynamic, with apps coming and going, and, therefore, its nature changes over time. To map the landscape of automated species identification apps, we used a method that mirrors a typical user's path to finding an app that can help identify a specimen via a smartphone. Initial queries were made on the two predominant platforms for app acquisition, Google Playstore and Apple's App Store, using broad search terms (e.g., “species identification AI,” “species ID AI,” and “species identifier”). These were then iteratively refined, as well as expanded on, with more targeted queries (e.g., plants, insects, birds, fish, and mushrooms) in response to the landscape that slowly became visible and to ensure a thorough cataloguing. During this process, we also came across adjacent topics such as rocks, plant diseases and care, and pets. To ensure that no important natural history apps were missed, we included these topics in our searches, and because they were part of our findings, we included them in the final data set.
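Such store queries can themselves be scripted. The sketch below shows one way to run and deduplicate iterative searches, assuming the search function of the google-play-scraper Python package (the same package later used for the review analysis); the query terms shown are illustrative and not the exact set used for our cataloguing.

    # Hedged sketch: iterative Play Store queries, deduplicated by app identifier.
    # Assumes the google-play-scraper package exposes a `search` function.
    from google_play_scraper import search

    broad_terms = ["species identification AI", "species ID AI", "species identifier"]
    targeted_terms = ["plant identification", "insect identification", "bird identification",
                      "fish identification", "mushroom identification"]

    catalogue = {}  # appId -> search hit, deduplicated across queries
    for term in broad_terms + targeted_terms:
        for hit in search(term, lang="en", country="us", n_hits=30):
            catalogue.setdefault(hit["appId"], hit)

    print(f"{len(catalogue)} candidate apps found")
    for app_id, hit in sorted(catalogue.items()):
        print(app_id, "-", hit["title"])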

Our search yielded a total of 250 currently available applications, with 140 exclusive to Apple, 68 exclusive to Android, and the remaining 42 available for both platforms (see the Supplemental material for the full data set and Truong and Van der Wal 2024). Each app's details were investigated on three specialized websites (appadvice.com, appbrain.com, sensortower.com), selected because they display data that are not available on app stores and difficult to find elsewhere. This included the year of release, the associated business model, academic support (if any), and an estimated number of downloads. The oldest automated species identification app currently available is from 2013 (Bird Song id UK). The number of apps released per year gradually increased until 2020, after which new ones suddenly started to come out at a much greater rate (figure 1d) for both the Android and Apple platforms. Automated plant identification apps are by far the most abundant (32% of all 250 apps; figure 1e), followed by those handling insects (18%), birds (10%), mushrooms (8%), and multiple groups (4.8%). About three quarters are paid apps, and for some interests, such as mushrooms, the figure is even higher (90%). For other species groups, such as birds, the majority are free (58%). Although our search method was designed to identify all relevant natural history apps, it revealed that automated species recognition is also used for related but different interests, such as domestic animals (8%, mostly cat and dog breeds), garden and (house) plant care (6%), and rocks (4%). Only 7 apps (3%, all of them free) were developed with the support of academic institutions; the other 243 are products of private individuals or companies.

We obtained detailed information for 219 out of 250 automated species identification apps (88% of the full app list). Some information was unavailable for a few newer, smaller apps, as well as a subset of apps exclusively hosted on the Apple App Store. On the basis of the obtained data, we estimate the collective number of downloads to be 199 million, of which 123 million (62%) are paid downloads. The overwhelming majority of downloads are for apps developed for both Android and Apple platforms (86%), suggesting that developing for both platforms may offer certain advantages, such as access to more resources, improved app quality, and better marketing opportunities. These benefits could lead to greater app popularity, which may be less accessible to developers who focus exclusively on the Apple (0.6%) or Android (14%) platform. Of these 199 million downloads, 125 million concern natural history apps (63%), and 75 million involve adjacent topics. This reveals large-scale use and interest in learning about the nature around us through this digital technology. Looking at the taxonomic groups identified, the majority of downloads concern plants, either in a natural history context (51% of all downloads) or as cultural objects (garden and house plants; 29%). All other interests were minor in comparison (6% or less), but most still concerned millions of downloads (e.g., domestic animals, 11.3 million; birds, 9.7 million; mushrooms, 7.3 million).

Although there are numerous (n = 206) automated species identification apps focused on natural history, the majority have low download rates: 65% of both free and paid apps have been downloaded less than 10,000 times (figure 2a). This is in no small part because many were released relatively recently; the number of downloads increases exponentially with the number of years an app has been available (figure 2c). Only 12 apps (7%) have been downloaded a million or more times and are therefore highly popular. Five of these are citizen science apps developed by academic institutions, and those represent a substantial share of the total downloads (34%; see figure 2a). Although there are many more free apps that are not connected to citizen science programs (n = 53), these have a negligible share of downloads (0.8%). More than 80% of all natural history–focused automated species identification app downloads concern plants (figure 2b), largely because of three similar-size heavyweights (Pl@ntNet, PlantSnap, and PictureThis—Plant Identifier, each within the 10–50 million downloads range).
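The exponential relationship reported in figure 2c can be examined as a simple log-linear fit of downloads against app age; the sketch below shows the form of such a test on placeholder values, not on the study data.

    # Illustrative only: fit log10(downloads) against years since release (cf. figure 2c).
    # The arrays are placeholders, not the data behind the figure.
    import numpy as np
    from scipy.stats import linregress

    years_available = np.array([1, 2, 3, 5, 7, 9])         # hypothetical app ages (years)
    downloads = np.array([2e3, 8e3, 5e4, 3e5, 2e6, 1e7])   # hypothetical download counts

    fit = linregress(years_available, np.log10(downloads))
    print(f"slope = {fit.slope:.2f} log10(downloads) per year, "
          f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
    # A positive, significant slope on the log scale corresponds to exponential
    # growth in downloads with the time an app has been available.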

Figure 2.

The popularity of automated species identification (ASI) natural history apps. (a) Frequency distribution of ASI apps across different popularity classes, based on the reported numbers of downloads (obtainable for 86% of the apps), distinguishing between paid and free apps, and the cumulative percentage of downloads of both citizen science apps and all natural history apps. The percentages accrue from left to right—that is, going from apps with a very low to a very high number of downloads. The classes used are less than 5000, 5000–10,000, 10,000–50,000, 50,000–100,000, 100,000–500,000, 500,000–1 million, 1–5 million, and 10–50 million. (b) Differential popularity of the main different species groups, depicted as the percentage of total downloads of all natural history apps and indicating whether downloads were free or for a fee. (c) The popularity of individual apps (plotted data points, on a log10 scale) in relation to how far back in time they were released and, therefore, how many years each had to accrue downloads. Significant relationships were found for plants, birds, and mushrooms. For multigroup, insect, and fish apps, most apps were too young (released during the last 3 years) to meaningfully test for trends in the number of downloads over time.

What experience and knowledge do they promote?

Apps can be directly downloaded from online repositories, but background information is often provided on dedicated webpages. Investigating the websites associated with all popular natural history apps—that is, those with a million or more downloads (representing 94% of all downloads; see figure 2a)—and several smaller apps frequently cited by nature websites and blogs provided insights into the rationale and philosophy behind each species identification app. These apps have all been created to address the need to make species identification available to a wide audience (box 2). However, despite sharing a common goal, the presentation of these apps on their respective websites varies significantly. This can affect the user's ability to learn about the app, their motivation to use it, and their expectations prior to downloading it. Most strikingly, different apps put different emphasis—on their websites and through the functionalities of the app—on two key dimensions, which could be summarized as participation and learning.

Box 2.
From specific groups to all-purpose identification.

Benefiting from the development of ever more powerful identification algorithms and the increasingly advanced functionalities of new generations of smartphones, a wide variety of automated species identification apps emerged over the last 10 years. Similarly to field guides, the objective of these applications is to help lay audiences identify an animal, plant, or mushroom they encounter. However, because these apps use automated identification AIs, they play a much more active role in the identification process than field guides. They analyze the user's observations, photographs, and sound recordings and propose one or more species names on the basis of a likelihood index that may or may not be displayed. Most of these apps work from visuals—photographs taken by the user with their smartphone. Initially, this concerned individual species groups such as plants (e.g., Pl@ntNet, 2013), birds (e.g., Merlin Bird ID, 2017), and insects (e.g., Picture Insect: Bug Identifier, 2019), allowing some of them to become firmly embedded in powerful platforms (e.g., Merlin Bird ID in eBird). Some have reached reference status for their species group (e.g., Pl@ntNet) and form the foundation of more focused projects (E-surveyor, 2021). Subsequently, advances in object recognition allowed for the development of general identification tools handling all flora and fauna, developed within large citizen science programs such as iNaturalist (Seek, 2018) and private enterprise (Earthsnap, 2022). What started off as a specific scientific endeavor has now become part of a universal tech ecosystem, in which species are among many objects to be recognized by algorithms, with tools simply embedded in any smartphone photo function (Google Lens, from 2017, in Android phones and Visual Look Up, from 2022, in Apple products).

Apps that emphasize participation give a central role to users and their field observations. These are typically citizen science programs (figure 2a), supported by academic institutions, which need users and their observations to answer research questions or conduct species conservation programs. For example, Pl@ntNet “aims in particular to contribute to the monitoring of plant biodiversity on a global scale, thanks to the involvement of the citizens of the planet.” The program states that the users of its tool are important for the scientific research carried out by those working with the gathered data. The webpages of these citizen science apps typically contain a wealth of information, including details on recorded data, blog posts, project news, and links to scientific publications resulting from their participants’ data.

The notion of personal involvement in using an app is less evident for apps that emphasize learning. Here, websites often promise a more personal, individual experience. In this approach, the apps take over from nature guides and accompany their users in the field. For example, Song Sleuth aims to help “you become a better birder.” Similarly, PlantSnap communicates to help its users “reconnect with nature, share photos and thoughts with PlantSnappers around the world, and learn about the plants and trees you encounter every day.” This category includes commercial apps carried not by academia but by companies, and these are likely to have fee-based options. Their webpages are simpler and have a more modern look than those of the citizen science programs but offer fewer details about the tool's presentation and its features.

Although, for citizen science supporters, it may be the ecology of the species identified that is central (Van der Wal et al. 2016), for other users, it could be something entirely different. Apps cater to this and are designed with very different users in mind. This is particularly visible for mushroom and plant apps. For example, the Mushroom app (2020, by Vocum software) allows users to purchase an extension with recipes, and Mushroom Identify-Automatic (2018, by Annapurnapp Technologies) has built-in functionality for buying and selling wild mushrooms. Some plant identification apps provide users with information about their houseplants, including how to care for them and which plants are toxic to pets (e.g., Blossom-Plant Identification, 2022)—a rather different take on the purpose of species identification.

Although the algorithm behind these tools generates a probability for a suggested identification of an object, users may interpret it as a definitive identification. This may be particularly problematic in the context of gathering plants and mushrooms for consumption, traditional medicine, or recreational use. Browsing the net turns up ample warnings, such as “Only use the plant ID app as a best guess, then get off your screen and the internet and open a real, physical book called a Wild Flower Key” (Harford 2020) or “Apps are best used as tools, not as sources of truth” and “leaving your safety up to machine-learning AI is nothing but a recipe for disaster” (Meulemans 2020). However, very few apps have built-in warnings of that kind, apart from those concerning mushroom identification, and some apps may be better than others at helping users identify poisonous plants (Otter et al. 2021). Therefore, there is a genuine public health concern when individuals rely on these applications when consuming plants (or parts of them, such as leaves, roots, and berries) and fungi. Government bodies such as the French National Agency for Food, Environmental, and Occupational Health and Safety are aware of the potential risks and issue regular warnings in national newspapers to caution the public against excessive reliance on mushroom species identification apps because of their potential hazards (Anses 2020).

Another aspect of the experience promoted differentially is social interaction. Apps that merely assist species identification are unlikely to engage with social dimensions, because they rely not on these but on advertisement and marketing that get them first in line in search engine results. Citizen science platforms, by contrast, critically depend on forming communities of practice (Baudry et al. 2022, Torres et al. 2022). Therefore, apps linked to citizen science platforms with well-established communities (e.g., eBird, iNaturalist) are the ones that encourage social interactions between users. In most circumstances, it is not the automated species identification app per se that is important but its embedding in a wider ecosystem through which social interaction is being promoted. In this case, aspects such as gamification (e.g., leader boards, challenges, badges, games), in-app group formation (by, e.g., region, country, or continent), project data portrayal (collective and individual contributions), data sharing, crowdsourcing, and online community building (e.g., social media) come into play. Regarding the latter, BirdNET, for example, exchanges daily on X (formerly Twitter) with those who follow this sound-recording–based project, highlighting a focal bird, the daily number of submissions, and the most frequently observed bird in Europe. This phenomenon, although still underdeveloped, demonstrates a shift from simple technology use to a more community-centric digital experience, where mobile apps and social media converge to enhance social interaction and community engagement.

What do the users say about the apps?

To obtain insights into whether user experiences differ among widely used automated species identification apps, we scraped users’ reviews of all of the apps with over 100,000 downloads (n = 19), using Python and the packages Google Play Scraper and Pandas. Because no such packages were available for analyzing the Apple App Store, we conducted our analysis solely on Google Play (99.6% of all natural history app downloads concern apps available for both operating systems). The resulting 19 corpora of comments added up to a data set of almost 118,000 reviews, with a large disparity in the number of comments among apps (see tables 1a and 1b). Of these apps, 7 are free—including 6 citizen science programs—whereas 12 require purchase to access all their features.
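For readers who wish to replicate this step, the sketch below shows a basic harvesting routine using the same two packages; the app identifier is a placeholder, and the selected review fields are illustrative rather than a fixed requirement.

    # Sketch of review harvesting with google-play-scraper and pandas.
    # The app ID is a placeholder, not one of the 19 apps analyzed.
    import pandas as pd
    from google_play_scraper import Sort, reviews_all

    APP_ID = "org.example.species_identifier"   # placeholder Play Store package name

    raw_reviews = reviews_all(APP_ID, lang="en", country="us", sort=Sort.NEWEST)

    reviews = pd.DataFrame(raw_reviews)[["userName", "score", "at", "content"]]
    reviews.to_csv(f"{APP_ID}_reviews.csv", index=False)
    print(f"Collected {len(reviews)} reviews for {APP_ID}")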

Table 1a.

Overview of main topics in user comments for the most downloaded free automated species identification apps on Google Play.

AppGoogle play download rangeNumber of commentsCost and subscription concernsNegative app use experienceNegative species identification experiencePositive app use experiencePositive species identification experiencePractical comments and feature requests
Pl@ntNet10 million–50 million18,975255817
BirdNet1 million–5 million212744451
FloraIncognita1 million–5 million107580155
Merlin Bird ID1 million–5 million18,843442827
SEEK1 million–5 million194722699
Obsidentify500,000–1 million81284230
BirdUp100,000–500,000362285517
Table 1b.

Overview of main topics in user comments for the most downloaded paid automated species identification apps on Google Play.

AppGoogle play download rangeNumber of commentsCost and subscription concernsNegative app use experienceNegative species identification experiencePositive app use experiencePositive species identification experiencePractical comments and feature requests
PictureThis—Plant Identifier10 million–50 million36,501372735
PlantSnap10 million–50 million28,134293239
Mushroom Identify Automatic1 million–5 million1421344719
Picture Bird—Bird Identifier1 million–5 million1369352441
Picture Mushroom—Mushroom ID1 million–5 million1610304129
PlantApp1 million–5 million393273439
Insect Identifier By Photo Cam500,000–1 million423314920
EarthSnap100,000–500,000125165430
Insect ID AI Bug Identifier100,000–500,00080235422
Insect Spider and Bug Identifier100,000–500,000155266212
Picture Fish100,000–500,000501671419
Picture Insect: Spiders and Bugs100,000–500,0003857304030

To identify the main topics discussed within user comments, we performed text analysis on each app's full corpus, examining word distribution and co-occurrence using the latent Dirichlet allocation model in Python's Gensim package. In an effort to strike a balance between detailed thematic analysis and interpretative clarity (Oggier and Datta 2023), we iteratively refined our approach and limited our analysis to the three most significant topics per app. From our comment analysis of the 19 apps in our sample, six main topics emerged. The first was cost and subscription concerns regarding the pricing structure of apps, including the cost to download, any required subscription, perceived value for money given the available features, and whether the subscription model provides a satisfactory level of content and functionality; users also warn others about practices of app developers that they consider fraudulent. The next was positive experiences with species identification, focusing on perceived accuracy, with users expressing satisfaction when the primary function of the app—accurate species identification—is effectively achieved. The analysis also revealed a theme of negative experiences with species identification, focusing on user dissatisfaction when accurate species identification is not achieved, leading to frustration and disappointment. The comments further contained other positive experiences with app use, highlighting user-friendly design, intuitive interfaces, and additional features beyond species identification that contribute to a satisfactory and enjoyable user experience. Other negative experiences with app use also surfaced, including interface challenges, technical glitches, and unintuitive design that cause users to express dissatisfaction and frustration with the app's performance and overall experience. Finally, the responses included practical comments and feature requests, including constructive suggestions for new features or changes users would like to see in future updates.
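The topic extraction itself requires only a few lines of Gensim. The sketch below illustrates the latent Dirichlet allocation step on a single app's corpus, assuming the reviews are held in a list of strings; the preprocessing choices and parameter values shown are illustrative rather than those tuned for the analysis reported here.

    # Compressed sketch of the LDA topic-modelling step with Gensim.
    # `review_texts` and the parameter values are illustrative placeholders.
    from gensim import corpora
    from gensim.models import LdaModel
    from gensim.parsing.preprocessing import STOPWORDS
    from gensim.utils import simple_preprocess

    review_texts = [
        "identified my plant instantly, love this app",
        "subscription far too expensive for what it does",
        "great photos but the bird identification was wrong twice",
    ]

    # Tokenize, lowercase, and drop common English stopwords.
    documents = [[tok for tok in simple_preprocess(text) if tok not in STOPWORDS]
                 for text in review_texts]

    dictionary = corpora.Dictionary(documents)
    bow_corpus = [dictionary.doc2bow(doc) for doc in documents]

    # Fit the model and keep the three most significant topics, as in the analysis above.
    lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=3,
                   passes=10, random_state=1)
    for topic_id, words in lda.print_topics(num_topics=3, num_words=8):
        print(topic_id, words)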

The single most telling aspect explaining the variation in topics identified among the 19 sets of comments was whether an app is free or needs to be purchased (table 1). On average, 38% of all of the comments for paid apps are negative (range 0%–81%), whereas this is only 4% for free apps (range 0%–25%). Indeed, for 7 out of the 12 paid apps, a substantial part (more than 25%) of all user comments was negative, compared with only 1 (Pl@ntNet) out of 7 for free apps. Pl@ntNet began as a European project but quickly gained popularity worldwide. Initially, the app's database covered only European plant species, and many negative comments were made about misidentified specimens from outside Europe, which explains this peculiarity in our data.

Another telling aspect of our analysis is that more free apps (6 out of 7) than paid apps (3 out of 12) received practical comments and feature requests, indicating appreciation of and engagement with the product. With one exception (BirdUp), the free apps in our sample are not only tools for species identification but also entry points for participatory science, where users contribute to broader scientific endeavors. This dual role could explain the higher engagement levels and the more positive sentiment observed in the feedback for these apps (96% versus 62% of user feedback being positive for free and paid apps, respectively; table 1). Citizen science programs embedded within free apps may encourage a sense of ownership and community among users (Torres et al. 2022). Users might be more forgiving of shortcomings and more motivated to provide constructive feedback, because they feel part of a community effort contributing to scientific knowledge. Paid apps such as PictureThis—Plant Identifier and Insect Identifier by Photo Cam, on the other hand, might be perceived primarily as a service, with users adopting a consumer mindset and expecting the product to work flawlessly in exchange for payment. This could account for the high incidence of negative comments related to cost and subscription concerns and for a sense of betrayal when the app does not meet the paying user's expectations. Furthermore, paid apps tend to evoke more intense reactions when they fail, because the monetary investment raises the stakes for their users, leading to heightened criticism. However, they also garner significant praise from their user base. Regardless of the pricing model, user comments indicate that both paid and free apps generally succeed at species identification.

What happens to observational and personal data?

Mobile apps that use automated species identification collect two types of data: personal data and biodiversity data. Looking at their websites again, we see stark differences between paid and free apps for both data types. Most free, academia-driven apps are transparent about the uses of biodiversity data—namely, to improve algorithms and provide data to research programs. Pl@ntNet, for example, states that “shared data are used to improve the performance of Pl@ntNet services, contribute to computer science and ecological research, [and] enable large-scale plant biodiversity monitoring.” For a species observation to become a biological record, there should be a recorder, a specimen, an observation date, and a location (i.e., who, what, when, and where; Isaac and Pocock 2015). Because of the latter, most free apps ask the user for authorization to gather geolocation data automatically. Several apps, including Pl@ntNet, ObsIdentify, and Artsorakel, share biological records with the Global Biodiversity Information Facility (GBIF), making them accessible to researchers worldwide. However, several other free (citizen science) apps appear not to share their records with an international database but use them in other ways, including for the production of academic papers (e.g., Flora Incognita and BirdNET).

Few of the paid apps make clear where the biological data go and, therefore, whether biological records are being created. The focus seems to be on improving the service provided, which includes increasing the accuracy of species identifications and creating a user community and data platform of their own. For example, PlantSnap states, “We are continuously working to improve PlantSnap and one of the most important aspects is creating a better database, so you are just as much a part of our team as the developers are!” However, none of these paid apps transfer their data to an international database such as GBIF, which means that a considerable amount of biodiversity data lies dormant on servers without ever becoming publicly available biological records.

When it comes to personal data, more clarity is given about what data are gathered and how they are used—a pattern likely driven by legal frameworks such as the European General Data Protection Regulation. Although this holds for both paid and free apps, there are again clear differences because of their respective aims and business models. Among paid apps, PlantSnap, for example, transparently announces that it collects personal data such as the user's name, email and home address, phone number, webpages visited, and cookies, to enable marketing and third-party advertising to the user. Similarly, the plant identification app PictureThis declares using personal data to “market our products and services as well as enable third parties to provide advertisements to [the user] via the Application” and to “send emails and push notifications to existing customers/users for similar goods and services.” Such ways of using personal data simply reflect how the companies behind paid apps make money and may put business interests first (e.g., “retain [user's] personal information for as long as it is necessary and relevant for business”). Paid apps also use personal data for customization, such as displaying earlier identifications made by the user.

The operational model of the organizations behind the most popular free apps, anchored in academia, does not demand the sale of personal data, although a donation button is likely to be nearby (Verma et al. 2015). Such organizations therefore typically state that data related to the identity of their users are not used for commercial purposes. Instead, what matters to them is the creation and growth of a user community with specific interests. This leads to different ways of handling personal data, which, in many cases, reflect how this is done in research. The producers of the Merlin app, for example, state that eBird activity “revolves around sharing bird observations with scientists, conservationists, and other birders. We will never use or share your information for commercial purposes.” The same goes for BirdNET, where “all information and recordings will be used for research purposes only and… will be deleted if they are no longer valuable for this use. However, we will eventually delete all information from our servers five years after [its] creation.” Personal data serve the historic and prime purpose of registering an observer against a species observation, which is required to produce a biological record. But this is not where the importance of personal data stops. Customization well beyond what is generated by paid apps allows users to build an array of connections: to the collection of imagery they offer to the app (e.g., my gallery), to their biological records (my observations) and those of others, and to other users (my contacts). Therefore, personal data are extensively used to serve the user. At the same time, they also serve the app producer, by leveraging the social interactions of new and existing users to cement and expand a user community. It is with this purpose in mind that app producers try to persuade users to register or log in.

Conclusions and perspectives

Most studies on the topic of automated species identification are conducted by academics specialized in computer vision and focus on the efficiency of algorithms in recognizing species (Mohanty et al. 2020, Shirai et al. 2022). Ecologists and biologists working with these algorithms write primarily about the capabilities and potential of this AI tool without much reservation (August et al. 2020, Jones 2020, Høye et al. 2021, Kahl et al. 2021), although some also report on how to use this technology better (Terry et al. 2020, Koch et al. 2022, Mandeville et al. 2023). We have taken a fresh look at the overall landscape of automated species identification apps, what producers promise, what users say, and where both biodiversity and personal data go.

The same but different

We did not set out to create a division between free, academia-driven apps and paid apps. However, in the process of obtaining an overview, we realized that this was the single biggest difference among apps, a condition opening fundamentally different doors to users. By using free apps, users nearly always connect to an academic world where their personal data seem of limited importance but where their biodiversity data are key. The reverse holds for paid apps, which are ultimately designed to generate revenue for companies through sales and subscriptions. Personal data take center stage, collected and used for commercial purposes such as targeted advertising or market research (Zuboff 2019), whereas biodiversity data are a by-product rarely shared with the broader scientific community. This is a fundamental and, for users, palpable difference. Some companies, such as Next Vision Limited, roll out new paid automated identification apps all the time, using the same backbone for different entities—in their case, insects, fishes, birds, mushrooms, rocks, and snakes but also ones that tap into different interests, such as coins and healing crystals. For such companies, it is a matter of training an algorithm, creating an attractive product, and marketing it for profit. For such a strategy to pay off, the end product needs to be good and must align with user interests and expectations (Otter et al. 2021). For a citizen science project, the stakes are different, because building and maintaining a community is key to making its business model work. This puts the focus on relevance—ensuring alignment with community norms, practices, and expectations (Robinson et al. 2021)—but also on credibility and purpose. Visibility is important, but for many projects, investment therein may be constrained by time, money, and (limited) commercial instinct. Gathering species data is key, to address research questions, cater to naturalist recording community interests, raise conservation issues, and build big data sets (Farley et al. 2018). The latter are used for data mining, for publications furthering the personal careers of scientists, and for demonstrating ability and relevance. Research organizations attempt to communicate their relevance such that users are aware that they are as much providers of data as consumers of information (Ganzevoort et al. 2017). This awareness of being useful to science and the wider community is a major motivation for many to participate in citizen science (Everett and Geoghegan 2016). In the case of paid apps, where little is known about what happens to species identification data, users are consumers of information, a service for which they pay.

It is worth noting that free species identification apps are fundamentally different from free apps as typically encountered in wider society and studied by business scholars. In that wider landscape, free apps are commercial products that come with the expectation of privacy concerns (Han et al. 2019, van Angeren et al. 2022). However, empirical studies show that there is little difference in the extent to which personal data are protected or used between free and paid apps, leading to the conclusion that it is difficult to pay for privacy in the wider app landscape (Bamberger et al. 2020). Our free apps appear rather different and could perhaps be viewed as a specific class, which could be named community apps, for which personal data are not of particular interest because gathering biodiversity data is the key concern. This class is likely to include apps on other subjects, beyond natural history, that are part of community-driven data collection or knowledge-sharing initiatives.

Because species identification is at the heart of naturalist activities (Ellis 2011, Tewksbury et al. 2014), tools that potentially facilitate this process are almost immediately visible to naturalist communities. This does not necessarily mean that automated species identification tools will therefore be widely adopted; there may be resistance because of doubts about their value or about how they would shape relations to nature. However, once these tools are established within the respective communities, knowledge about them will circulate, making them more accessible and easier to discover by new users. Because of the growing awareness that using smartphone technology to identify species is an option, people who are not part of interest groups may search online and may be directed to paid apps when they want to identify a species. Not only did we find these to be more abundant, but these apps also pay for higher visibility in search engine results and have advertising strategies that target a wider audience beyond interest groups. They specifically need to attract a broader audience beyond just naturalists and citizen scientists, because the latter are likely to use tools and apps recommended by their community. These differences in business model direct who is connected to what. Although they all use what is, in effect, the same technology, the above principles segregate audiences: Specialized communities use free apps, and wider society uses paid apps.

Separated by a paywall but otherwise intertwined

Our comment analysis made clear that payment for an app comes with the risk of disgruntlement, to the point that the virtues of a product may no longer be visible. Asking for payment sets expectations, and if they are not met—at all or in full—a negative outlook may result. Users seem even less forgiving when apps offer a free trial but request card details and make the cancellation process difficult (Arora et al. 2017). In such circumstances, people feel taken advantage of. Because paid apps are the first to be listed in app stores, bad experiences with them will reduce the likelihood of users registering for other apps, including genuinely free ones, because they may be perceived as fraudulent. This could close doors for programs in need of participants but also for people's relationships to nature through digital technology. The situation described above indicates that paid and unpaid apps are connected in more ways than just their focus on species and the technologies used. This relationship seems asymmetrical, with paid apps—which, in principle, could connect to the largest audiences and seem to do so (figure 2b)—putting pressure on free apps, rather than the other way around. Commercial producers are much more likely to have the means and personnel to create modern, user-friendly, and smooth-running apps with attractive designs and functionalities than are producers within academia, going from one short-term project to the next (Arts et al. 2015, Speaker et al. 2022). This sets expectations on app design, upping the pressure on—typically low-budget—free app development to prevent people from jumping ship. At the same time, paid app producers spend a lot of time on marketing to ensure visibility. This pressures free app developers to invest more in communication in order to remain visible to the existing user community and to attract newcomers, including by being active and creative on social media. Another pressure that paid apps put on free apps relates to funding. The business model of free apps is not based on the goal of making a monetary profit or on gathering personal data. But under pressure to secure larger funds or otherwise generate money, free apps may, to stay competitive, also start using personal information as merchandise (Zuboff 2019), a risk we identify in the present article. In a world where research projects are mostly short term and where funding is difficult to obtain, we understand that, faced with others who do not hesitate to use personal data for targeted advertising and marketing, the temptation to do likewise is great.

Early leading programs such as Pl@ntNet and Merlin Bird ID come from academic partnerships between computer scientists and biologists. Commercial ones followed in their footsteps. Free apps, which collect species observation data and feed large international databases such as GBIF, could exert moral pressure on the developers of fee-based apps, prompting them to share their identification data and take part in the common effort to monitor and preserve biodiversity. Commercial apps increasingly seek approval from biodiversity professionals. This could push paid-for apps to further improve their identification algorithms, match the rigor of international databases, and open up their data to the wider biodiversity community. One point on which both categories of apps converge is that, when they work, they allow users to readily identify plants, animals, and mushrooms encountered in their daily lives. This may initiate or further participation in research and learning—for example, about species’ characteristics, habitat use, and behavior, in turn shaping environmental outlooks and relationships with the environments we are part of (Sharma et al. 2019, Santori et al. 2021).

Where to go from here?

Research on automated species identification apps has so far focused on technical and technological performance (Wäldchen and Mäder 2018b). To foster the continued development and growth of this field, it is crucial to shift attention to the users of these apps. Key areas to explore include understanding user motivations, examining user experiences, and assessing outcomes such as enhanced biodiversity knowledge and the potential development of environmental citizenship (Jørgensen and Jørgensen 2021), which refers to individuals' sense of responsibility and active engagement in environmental issues and sustainability practices. Collectively, these form a field of research that urgently deserves attention, given the speed of technological progress and societal uptake (Speaker et al. 2022, Turnbull et al. 2022). We expect user-related aspects of further interface optimization to become a focus of the wider human–computer interaction research community. Research fields investigating human–nature relationships will need to turn their attention to what app use actually does to and with people and to how such technology-mediated experiences of nature sit within, and reconfigure, the current experiential landscape (Edwards and Larson 2020, Arts et al. 2021a).

Smartphone use and technological development have created a desire for quick, on-the-go information and assistance, as well as an increasing reliance on image recognition and linked search tools (Zuboff 2019). As a result, Google (Google Lens) and Apple (Visual Look Up) now embed automated object identification tools in the camera apps of their most advanced smartphones. These functions allow users to identify a wide range of objects and to search for related information, and species are among the many objects they take on. Google and Apple have thereby moved into what was previously the domain of an expanding but nevertheless specific set of academic and commercial actors. In this way, people remain engaged with Google's and Apple's products and are kept within these companies' ecosystems. At the same time, offering automated species identification as part of any nature-related platform's portfolio is simply going to be the new normal.
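
As a rough illustration of the underlying mechanism (ours; Google's and Apple's systems are proprietary and certainly more sophisticated), the sketch below shows how even a general-purpose image classifier trained on a broad object dataset already returns species-level labels for many organisms, which is why species identification folds naturally into such camera tools. The image file name is a placeholder.

    import torch
    from PIL import Image
    from torchvision import models

    # Load an off-the-shelf classifier trained on a general object dataset (ImageNet),
    # whose categories include many species (birds, fungi, insects, mammals).
    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()  # resizing, cropping, and normalization

    image = Image.open("photo_of_unknown_organism.jpg").convert("RGB")  # placeholder path
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        probabilities = torch.softmax(model(batch)[0], dim=0)

    # Report the five most likely categories; for wildlife photographs these are often
    # species- or genus-level labels, mimicking what a camera-embedded "look up"
    # feature surfaces to the user.
    top5 = torch.topk(probabilities, 5)
    for score, idx in zip(top5.values, top5.indices):
        print(f"{weights.meta['categories'][idx]}: {score:.2f}")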

With these two giants on the market, we predict that the days are numbered for many of the smaller automated species identification apps, particularly the paid ones, because they must generate revenue. One route through which the digital giants may crowd others out is by raising their offerings to a level that smaller producers cannot match. Gaining visibility is also hard for any new project unless it is connected to a focused community. For well-established (citizen science) platforms, keeping pace with Google and Apple will be a challenge, but where communities are tight-knit or very large, users are likely to continue to favor platform apps because of norms, purpose (e.g., gathering biodiversity data), and a sense of belonging. We see a future in partnerships between citizen science programs and digital giants (Joppa 2015), but at what cost? And will the big tech companies really make biodiversity data available to scientists and international bodies without capitalizing on social data to serve their own interests?

Acknowledgments

This project was funded under grant agreement no. 872557 by the European Union's Horizon 2020 Research and Innovation Programme. We are grateful for in-depth referee comments on earlier versions of this article.

Author contributions

Minh-Xuan A. Truong (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing—original draft, Writing—review & editing) and René Van der Wal (Conceptualization, Funding acquisition, Investigation, Project administration, Supervision, Validation, Writing—review & editing).

Author Biography

Minh-Xuan A. Truong ([email protected]) is a researcher in conservation social sciences, and René Van der Wal is a professor in ecology at the Swedish University of Agricultural Sciences, in Uppsala, Sweden.

References cited

Anses. 2020. La saison de cueillette des champignons commence: Restez vigilants face aux risques d'intoxications! (The mushroom picking season is starting: Stay vigilant to the risks of poisoning!). Agence Nationale de Sécurité Sanitaire de l'Alimentation, de l'Environnement et du Travail. www.anses.fr/fr/content/la-saison-de-cueillette-des-champignons-commence-restez-vigilants-face-aux-risques-d-0

Arora S, Ter Hofstede F, Mahajan V. 2017. The implications of offering free versions for the performance of paid mobile apps. Journal of Marketing 81: 62–78.

Arts K, Van der Wal R, Adams WM. 2015. Digital technology and the conservation of nature. Ambio 44: 661–673.

Arts I, Fischer A, Duckett D, Van der Wal R. 2021a. Information technology and the optimisation of experience: The role of mobile devices and social media in human–nature interactions. Geoforum 122: 55–62.

Arts I, Fischer A, Duckett D, Van der Wal R. 2021b. The instagrammable outdoors: Investigating the sharing of nature experiences through visual social media. People and Nature 3: 1244–1256.

August TA, Pescott OL, Joly A, Bonnet P. 2020. AI naturalists might hold the key to unlocking biodiversity data in social media imagery. Patterns 1: 100116.

Bamberger KA, Egelman S, Han C, Elazari A, Reyes I. 2020. Can you pay for privacy? Consumer expectations and the behavior of free and paid apps. Berkeley Technology Law Journal 35: 327–365.

Baudry J, Tancoigne É, Strasser BJ. 2022. Turning crowds into communities: The collectives of online citizen science. Social Studies of Science 52: 399–424.

Besson M, Alison J, Bjerge K, Gorochowski TE, Høye TT, Jucker T, Mann HMR, Clements CF. 2022. Towards the fully automated monitoring of ecological communities. Ecology Letters 25: 2753–2775.

Boissat L, Thomas-Walters L, Veríssimo D. 2021. Nature documentaries as catalysts for change: Mapping out the blackfish effect. People and Nature 3: 1179–1192.

Bonnet P, Joly A, Goëau H, Champ J, Vignau C, Molino JF, Barthélémy D, Boujemaa N. 2016. Plant identification: Man vs. machine: LifeCLEF 2014 plant identification challenge. Multimedia Tools and Applications 75: 1647–1665.

Bonnet P, Goëau H, Hang ST, Lasseck M, Šulc M, Malécot V, Jauzein P, Melet J-C, You C, Joly A. 2018. Plant identification: Experts vs. machines in the era of deep learning. Pages 131–149 in Joly A, Vrochidis S, Karatzas K, Karppinen A, Bonnet P, eds. Multimedia Tools and Applications for Environmental and Biodiversity Informatics. Springer International.

Borowiec ML, Dikow RB, Frandsen PB, McKeeken A, Valentini G, White AE. 2022. Deep learning as a tool for ecology and evolution. Methods in Ecology and Evolution 13: 1640–1660.

Castro I, De Rosa A, Priyadarshani N, Bradbury L, Marsland S. 2019. Experimental test of birdcall detection by autonomous recorder units and by human observers using broadcast. Ecology and Evolution 9: 2376–2397.

Crowley EJ, Silk MJ, Crowley SL. 2021. The educational value of virtual ecologies in Red Dead Redemption 2. People and Nature 3: 1229–1243.

Edwards RC, Larson BMH. 2020. When screens replace backyards: Strategies to connect digital-media-oriented young people to nature. Environmental Education Research 26: 950–968.

Ellis R. 2011. Jizz and the joy of pattern recognition: Virtuosity, discipline and the agency of insight in UK naturalists' arts of seeing. Social Studies of Science 41: 769–790.

Everett G, Geoghegan H. 2016. Initiating and continuing participation in citizen science for natural history. BMC Ecology 16: 15–22.

Farley SS, Dawson A, Goring SJ, Williams JW. 2018. Situating ecology as a big-data science: Current advances, challenges, and solutions. BioScience 68: 563–576.

Farnsworth EJ, Chu M, Kress WJ, Neill AK, Best JH, Pickering J, Stevenson RD, Courtney GW, Vandyk JK, Ellison AM. 2013. Next-generation field guides. BioScience 63: 891–899.

Fathimathul RPP, Orban R, Vadivel KS, Subramanian M, Muthusamy S, Elminaam DSA, Nabil A, Abulaigh L, Ahmadi M, Ali MAS. 2022. A novel method for the classification of butterfly species using pre-trained CNN models. Electronics 11: 2016.

Finger A, Groß J, Zabel J. 2022. Plant identification in the 21st century: What possibilities do modern identification keys offer for biology lessons? Education Sciences 12: 849.

Frost S, Kannis-Dymand L, Schaffer V, Millear P, Allen A, Stallman H, Mason J, Wood A, Atkinson-Nolte J. 2022. Virtual immersion in nature and psychological well-being: A systematic literature review. Journal of Environmental Psychology 80: 101765.

Ganzevoort W, Van den Born RJG, Halffman W, Turnhout S. 2017. Sharing biodiversity data: Citizen scientists' concerns and motivations. Biodiversity and Conservation 26: 2821–2837.

Gaston KJ, O'Neill MA. 2004. Automated species identification: Why not? Philosophical Transactions of the Royal Society B 359: 655–667.

Goëau H, Bonnet P, Joly A. 2018. Overview of ExpertLifeCLEF 2018: How far automated identification systems are from the best experts? Conference and Labs of the Evaluation Forum.

Han C, Reyes I, Elazari A, On B, Reardon J, Feal Á, Bamberger KA, Egelman S, Vallina-Rodriguez N. 2019. Do you get what you pay for? Comparing the privacy behaviors of free vs. paid apps. Institute of Electrical and Electronics Engineers.

Harford R. 2020. Careful with that plant ID app. Eatweeds.

Høye TT, Ärje J, Bjerge K, Hansen OLP, Iosifidis A, Leese F, Mann HMR, Meissner K, Melvad C, Raitoharju J. 2021. Deep learning and computer vision will transform entomology. Proceedings of the National Academy of Sciences 118: 1–10.

Høye TT, Dyrmann M, Kjær C, Nielsen J, Mielec CL, Vesterdal MS, Bjerge K, Madsen SA, Jeppesen MR, Melvad C. 2022. Accurate image-based identification of macroinvertebrate specimens using deep learning. PeerJ 10: e13837.

Hughes AC, Orr MC, Ma K, Costello MJ, Waller J, Provoost P, Yang Q, Zhu C, Qiao H. 2021. Sampling biases shape our view of the natural world. Ecography 44: 1259–1269.

Isaac NJB, Pocock MJO. 2015. Bias and information in biological records. Biological Journal of the Linnean Society 115: 522–531.

Jones HG. 2020. What plant is that? Tests of automated image recognition apps for plant identification on plants from the British flora. AoB PLANTS 12.

Joppa LN. 2015. Technology for nature conservation: An industry perspective. Ambio 44: 522–526.

Jørgensen FA, Jørgensen D. 2021. Citizen science for environmental citizenship. Conservation Biology 35: 1344–1347.

Kahl S, Wood CM, Eibl M, Klinck H. 2021. BirdNET: A deep learning solution for avian diversity monitoring. Ecological Informatics 61: 101236.

Klasen M, Ahrens D, Eberle J, Steinhage V. 2022. Image-based automated species identification: Can virtual data augmentation overcome problems of insufficient sampling? Systematic Biology 71: 320–333.

Koch W, Hogeweg L, Nilsen EB, Finstad AG. 2022. Maximizing citizen scientists' contribution to automated species recognition. Scientific Reports 12: 1–10.

Krizhevsky A, Sutskever I, Hinton GE. 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM 60: 84–90.

Lecun Y, Bengio Y, Hinton G. 2015. Deep learning. Nature 521: 436–444.

Litleskare S, Macintyre TE, Calogiuri G. 2020. Enable, reconnect and augment: A new era of virtual nature research and application. International Journal of Environmental Research and Public Health 17: 1738.

Mandeville CP, Koch W, Nilsen EB, Finstad AG. 2021. Open data practices among users of primary biodiversity data. BioScience 71: 1128–1147.

Mandeville CP, Nilsen EB, Herfindal I, Finstad AG. 2023. Participatory monitoring drives biodiversity knowledge in global protected areas. Communications Earth and Environment 4: 240.

McClure EC, Sievers M, Brown CJ, Buelow CA, Ditria EM, Hayes MA, Pearson RM, Tulloch VJD, Unsworth RKF, Connolly RM. 2020. Artificial intelligence meets citizen science to supercharge ecological monitoring. Patterns 1: 100109.

Meulemans D. 2020. Our favorite apps for foraging. Think Woodsy.

Mohanty R, Mallik BK, Solanki SS. 2020. Automatic bird species recognition system using neural network based on spike. Applied Acoustics 161: 107177.

Norouzzadeh MS, Nguyen A, Kosmala M, Swanson A, Palmer MS, Packer C, Clune J. 2018. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences 115: E5716–E5725.

Oggier F, Datta A. 2023. On parsimony and clustering. Peer Journal of Computer Science 9: 1–17.

Otter J, Mayer S, Tomaszewski CA. 2021. Swipe right: A comparison of accuracy of plant identification apps for toxic plants. Journal of Medical Toxicology 17: 42–47.

Requier F, Andersson GKS, Oddi FJ, Garibaldi LA. 2020. Citizen science in developing countries: How to improve volunteer participation. Frontiers in Ecology and the Environment 18: 101–108.

Robinson JA, Kocman D, Speyer O, Gerasopoulos E. 2021. Meeting volunteer expectations: A review of volunteer motivations in citizen science and best practices for their retention through implementation of functional features in CS tools. Journal of Environmental Planning and Management 64: 2089–2113.

Ruff ZJ, Lesmeister DB, Duchac LS, Padmaraju BK, Sullivan CM. 2020. Automated identification of avian vocalizations with deep convolutional neural networks. Remote Sensing in Ecology and Conservation 6: 79–92.

Santori C, Keith RJ, Whittington CM, Thompson MB, Van Dyke JU, Spencer RJ. 2021. Changes in participant behaviour and attitudes are associated with knowledge and skills gained by using a turtle conservation citizen science app. People and Nature 3: 66–76.

Sarker IH. 2019. Context-aware rule learning from smartphone data: Survey, challenges and future directions. Journal of Big Data 6: 95.

Sharma N, et al. 2019. From citizen science to citizen action: Analysing the potential for a digital platform to cultivate attachments to nature. Journal of Science Communication 18: 1–35.

Shirai M, et al. 2022. Development of a system for the automated identification of herbarium specimens with high accuracy. Scientific Reports 12: 1–14.

Shiu Y, Palmer KJ, Roch MA, Fleishman E, Liu X, Nosal EM, Helble T, Cholewiak D, Gillespie D, Klinck H. 2020. Deep neural networks for automated detection of marine mammal species. Scientific Reports 10: 1–12.

Silk M, Correia R, Veríssimo D, Verma A, Crowley SL. 2021. The implications of digital visual media for human–nature relationships. People and Nature 3: 1130–1137.

Speaker T, et al. 2022. A global community-sourced assessment of the state of conservation technology. Conservation Biology 36: 1–13.

Spiesman BJ, Gratton C, Hatfield RG, Hsu WH, Jepsen S, McCornack B, Patel K, Wang G. 2021. Assessing the potential for deep learning and computer vision to identify bumble bee species from images. Scientific Reports 11: 7580.

Stowell D, Petrusková T, Šálek M, Linhart P. 2019. Automatic acoustic identification of individuals in multiple species: Improving identification across recording conditions. Journal of the Royal Society Interface 16: 0940.

Taylor P. 2023. Number of smartphone subscriptions worldwide from 2016 to 2021, with forecasts from 2022 to 2027.

Terry JCD, Roy HE, August TA. 2020. Thinking like a naturalist: Enhancing computer vision of citizen science images by harnessing contextual data. Methods in Ecology and Evolution 11: 303–315.

Tewksbury JJ, et al. 2014. Natural history's place in science and society. BioScience 64: 300–310.

Torres AC, Bedessem B, Deguines N, Fontaine C. 2022. Online data sharing with virtual social interactions favor scientific and educational successes in a biodiversity citizen science project. Journal of Responsible Innovation 10: 2019970.

Trouille L, Lintott CJ, Fortson LF. 2019. Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human–machine systems. Proceedings of the National Academy of Sciences 116: 1902–1909.

Truong M-XA. 2024. The contribution of new technologies to our experiences of nature. Pages 177–190 in Kohlmann E, ed. Linking with Nature in the Digital Age. Wiley.

Truong M-XA, Clayton S. 2020. Technologically transformed experiences of nature: A challenge for environmental conservation? Biological Conservation 244: 108532.

Truong M-XA, Van der Wal R. 2024. Mapping the landscape of automated species identification apps. Figshare.

Truong M-X, Prévot A-C, Clayton S. 2018. Gamers like it green: The significance of vegetation in online gaming. Ecopsychology 10: 37.

Turnbull J, Searle A, Hartman Davies O, Dodsworth J, Chasseray-Peraldi P, von Essen E, Anderson-Elliott H. 2022. Digital ecologies: Materialities, encounters, governance. Progress in Environmental Geography 2: 275396872211456.

Unger J, Merhof D, Renner S. 2016. Computer vision applied to herbarium specimens of German trees: Testing the future utility of the millions of herbarium specimen images for automated identification. BMC Evolutionary Biology 16: 248.

Unger S, Rollins M, Tietz A, Dumais H. 2020. iNaturalist as an engaging tool for identifying organisms in outdoor activities. Journal of Biological Education 55: 537–547.

Uzunboylu H, Cavus N, Ercag E. 2009. Using mobile learning to increase environmental awareness. Computers and Education 52: 381–389.

Van Angeren J, Vroom G, McCann BT, Podoynitsyna K, Langerak F. 2022. Optimal distinctiveness across revenue models: Performance effects of differentiation of paid and free products in a mobile app market. Strategic Management Journal 43: 2066–2100.

Van der Wal R, Sharma N, Mellish C, Robinson A, Siddharthan A. 2016. The role of automated feedback in training and retaining biological recorders for citizen science. Conservation Biology 30: 550–561.

Verma A, Van der Wal R, Fischer A. 2015. Microscope and spectacle: On the complexities of using new visual technologies to communicate about wildlife conservation. Ambio 44: 648–660.

Wägele JW, et al. 2022. Towards a multisensor station for automated biodiversity monitoring. Basic and Applied Ecology 59: 105–138.

Wäldchen J, Mäder P. 2018a. Machine learning for image based species identification. Methods in Ecology and Evolution 9: 2216–2225.

Wäldchen J, Mäder P. 2018b. Plant Species Identification Using Computer Vision Techniques: A Systematic Literature Review. Springer.

Waller J. 2019. Will citizen science take over? Data Blog.

Wang Y, Du J, Zhang H, Yang X. 2020. Mushroom toxicity recognition based on multigrained cascade forest. Scientific Programming 2020: 8849011.

Xi T, Wang J, Han Y, Lin C, Ji L. 2022. Multiple butterfly recognition based on deep residual learning and image analysis. Entomological Research 52: 44–53.

Zuboff S. 2019. The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power. PublicAffairs.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]
