The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI

We all eventually fall from the edge of sentience. If we are lucky, the transition will be sharp and sudden. If we are not, we may spend years on the brink as more fortunate souls debate whether we are sentient or not. When their judgement is mistaken, the consequences can be terrible.

Consider Kate Bainbridge.1 At age 26, she contracted encephalomyelitis, inflammation of the brain and spinal cord. For around five months, she was unresponsive but still had sleep-wake cycles, a condition called a prolonged disorder of consciousness. She was given a tracheostomy and a feeding tube, but no one explained these interventions to her, because she was presumed unconscious. Bainbridge came back from the edge. When she regained responsiveness, and (later) the ability to communicate via a keyboard, she was able to report that her clinicians’ presumption had been false. Her testimony is, in places, harrowing:

I can’t tell you how frightening it was, especially suction through the mouth. I tried to hold my breath to get away from all the pain. They never told me about my tube. I wondered why I did not eat.2

Bainbridge’s testimony was written to be heard, amplified, repeated. It sends a resounding message: never simply presume the absence of sentience in a case where it could realistically be present. And in the face of uncertainty, do not treat a potentially sentient being as if they felt nothing.

This is a book about how our practices and ways of thinking need to change, across many areas, once we face up to this principle.

A sentient being has ethically significant experiences. They have a subjective point of view on the world and on their own body.3 If I throw a ball across a field, there are physical facts about the ball’s trajectory as it sails through the air, but there is nothing it feels like from the ball’s point of view. The ball feels no joy or pain; it doesn’t experience the rush of the air or the colour of the sky. The ball is a blank, as far as subjective experience is concerned. Now imagine a child chasing the ball. There will again be facts about the physical processes at work as the child sprints across the grass. But this time, there will also be something it feels like from the child’s point of view. There will be experiences of odour, sound, colour, bodily sensations, positive or negative emotions.

The capacity to have subjective experiences does not imply any level of reflective or rational ability. It does not imply an ability to reflect on one’s experiences, or to judge oneself to be having them, or to understand that others also have them. It is simply a matter of having experiences. (These ideas are developed further in Chapter 2.)

This idea of a ‘subjective point of view’ is easiest to grasp in the case of vision. Ernst Mach, in the Analysis of Sensations, famously attempted to draw his own visual point of view (Fig. 1.1). But there is far more to human subjective experience than vision. Our subjective point of view includes sounds, odours, tastes, tactile experiences: a rich sensory world. And these sensory experiences of a world outside us are integrated with experiences of a world within: bodily feelings, emotions, conscious thoughts, conscious memories, and imagination. When I talk of a subjective point of view, I mean it in this broad sense.

Fig. 1.1 Drawing from Ernst Mach, The Analysis of Sensations (1914). Public domain.
This subjective point of view can be contrasted with a great mass of brain processing that occurs unconsciously, without surfacing in experience. In humans, this mass includes the early stages of sensory processing, as well as many processes of bodily self-regulation and motor control. I don’t feel the registration on my retina of light too dim to perceive consciously. I don’t feel the release of hormones from my pituitary gland. I don’t feel the micro-adjustments of my muscles as I walk. As I grasp a cup, I don’t feel my grip strength altering very slightly from one moment to the next, finely tuned to generate just the right amount of friction. My conscious experiences are, it seems, the tip of an iceberg of unconscious computation.

Given this, we can always wonder, for any other animal: do they too have a subjective point of view? Or do they just have the unconscious side of what I have: the underwater part of the iceberg? As we look across the animal kingdom, all of us have our own threshold of doubt: the point at which an animal becomes so evolutionarily distant from humans, and so dissimilar, that hesitancy to ascribe sentience to it begins to creep in. For a small minority, even other mammals evoke some doubt, especially once we look beyond the primates.4 I must say, however, that I have met very few people who can sustain a doubt that at least all mammals (like cats, dogs, and rats) are sentient.

For some, doubt begins when we turn to birds. The mammalian and avian lineages diverged between 170 and 340 million years ago.5 There are substantial similarities between the brains of mammals and birds but many differences too. When we attribute sentience to birds, we are implicitly recognizing that sentience can be achieved in a brain with a substantially different organization from our own.

For others, doubt creeps in when we look beyond mammals and birds to other vertebrates, such as fishes.6 Could a fish be a total subjective blank, like a ball or a rock? Could a fish feel no pain—or anything at all—when it is caught? Some have argued that we must take this possibility seriously. In response, some animal welfare scientists, motivated by understandable concern about the welfare of fishes, have called the sceptics ‘sentience deniers’.7 Even though I think there is strong evidence for sentience in fishes, I find the ‘sentience denier’ label too divisive in an area of genuine uncertainty.

Even those fully convinced that fishes are sentient will have their own threshold of doubt. For some, it is to be found at the point where we turn our attention from vertebrates to invertebrates—from animals with a backbone to those without. When we look at an octopus, snail, bee, crab, or spider, we are looking at a lineage that has been separate from our own for at least 560 million years.8 We are also considering an animal with a very, very differently organized brain. At this point, I think most—though not all—people start to entertain serious doubts about sentience.

Even those who cannot entertain such doubts about octopuses will tend to find doubt assailing them regarding other invertebrates. Think, for example, of cnidarians like jellyfish, sea anemones, and corals. Think also of very small crustaceans. On the windowsill in my kitchen I have an aquarium of brine shrimp, each a few millimetres in length. They are very commonly used in aquaculture—as live feed for other animals. I look at them and wonder: is there anything it’s like to be a brine shrimp? Copepods, another type of tiny crustacean, are famously added to New York’s tap water to clear out mosquito larvae.9

We should not forget that many invertebrates are microscopic, far smaller than even copepods or brine shrimp. Our environment is full of nematode worms, dust mites, tardigrades, rotifers, and more. Many plankton are zooplankton, part of the animal kingdom. If all animals are sentient, then sentience must be achievable on a microscopic scale, and our beds and carpets must be teeming with sentient beings. It is not unreasonable to have doubts about that. It is a realistic possibility that sentience could be present in larger, more complex invertebrates, like octopuses, yet absent in many other invertebrates.

We can talk of an edge of sentience in multiple senses. There is an edge in the world: a real boundary to the class of sentient beings. There is also a boundary in our confidence, marking the point at which beings become dissimilar enough to ourselves that we start to entertain serious doubts about their sentience. We can hope that the two line up well: that the real boundary is located somewhere in the region where we feel least confident. But we should be aware of the risk that we may have got things very wrong: it could be that our levels of confidence systematically fail to track the real boundary. Different again are practical edges of sentience: the boundaries we draw in contexts where we have to make decisions. This book is about all three kinds of edge, but it is the practical edges that will receive most attention.

Where, then, is the line between sentient and non-sentient beings to be drawn? It is tempting to throw our hands aloft and say ‘Maybe we’ll never know!’. But practical and legal contexts force a choice.

I had some direct experience of this when I advised the UK government on what is now the Animal Welfare (Sentience) Act 2022, or ‘Sentience Act’. The UK had just left the European Union, whose Lisbon Treaty includes a commitment to regard animals as sentient beings. The government declined to import this clause directly into UK law, leading to some bad press. It reacted by promising to introduce new legislation to enshrine respect for animal sentience, and the proposed new law aimed to do that. Moreover, it sought to surpass the Lisbon Treaty by putting all ministers under a statutory duty to consider the animal welfare impacts of their decisions.

The government (more specifically, the Department for Environment, Food, and Rural Affairs, Defra) ran into a thorny problem: which animals should be covered by this duty? All of them, including copepods, dust mites, tardigrades, microscopic zooplankton? Just mammals? Their first draft covered all vertebrates, leading to understandable criticism from animal welfare organizations, who felt at least some invertebrates—and especially octopuses, crabs, and lobsters—should also be included.

Defra commissioned a team led by me to review the evidence of sentience in two specific invertebrate taxa: the cephalopod molluscs (including octopuses, squid, and cuttlefish) and the decapod crustaceans (including lobsters, crayfish, true crabs, and true shrimps; Fig. 1.2). Defra was clear from the outset that other invertebrate taxa were not on the table for possible inclusion. They wanted an informed opinion about these two. We reviewed over three hundred relevant scientific studies, synthesizing a complicated, gradated, messy evidential picture.10 We arrived at a clear recommendation: all cephalopod molluscs and decapod crustaceans should be included in the scope of animal welfare laws. To its credit, the government accepted our recommendation and amended its bill. The Sentience Act does encompass all cephalopod molluscs and all decapod crustaceans. It does not extend to brine shrimps or copepods since, like most crustaceans, they are not decapods.

Fig. 1.2 Decapod crustaceans. Plate from Ernst Haeckel, Kunstformen der Natur (1904). Public domain.

I am pleased we moved past ‘Maybe we’ll never know!’, reviewed all the relevant evidence we could find, and made a sensible practical recommendation on the basis of that evidence. We never achieved—or claimed to have achieved—certainty. Our approach was based on evaluating the evidence and communicating its strength as honestly and transparently as we could. I will reflect more on what I learned from this experience in Chapter 12.

In 2022, the journal Neuron published an article called ‘In vitro neurons learn and exhibit sentience when embodied in a simulated game-world’.11 The authors used human stem cells and brain tissue from mouse embryos to grow networks of around 1 million cortical neurons (i.e. cells of a type normally found in the neocortex, the part of the brain traditionally associated with higher cognitive functions) (Fig. 1.3). The number is comparable to the total number of neurons in the brain of a bee.

Fig. 1.3 An electron micrograph of DishBrain, a network of cortical neurons mounted on a high-density multi-electrode array. Reproduced from Kagan et al. (2022) under a CC-BY 4.0 licence.

They mounted the network on a computer interface called a high-density multi-electrode array, giving it, in effect, control over the paddle in a game of Pong. Just twenty minutes of ‘gameplay’ was enough to produce a statistically significant improvement in performance, with more hits per minute and longer rallies. The performance was not good in absolute terms, unsurprisingly, but it is remarkable enough that performance measurably improved. The system learned. There was no evidence, however, of the learning being retained between sessions. In each session of ‘gameplay’, the system learned anew.

The researchers’ claim about ‘sentience’ merits scepticism. They defined sentience as ‘responsiveness to sensory impressions through adaptive internal processes’ and counted electrical stimulation through the array as a sensory impression. This is a definition so minimal that it trivializes the idea of sentience, detaching it entirely from conscious experience and the mind. I see it as a mistake to define sentience in this way. DishBrain is indeed sentient in this minimal sense, but all living cells, including those of brain-dead humans, would also be likely to count as sentient in this sense, and this should give us pause. These issues of definition will be picked up again in the next chapter. For now, I will just say that I think it is important to define sentience in a way that makes the concept apt for its important role in ethics and policy. We should take care not to trivialize it.

This study is a high-profile example of neural organoid research: research on models of human brain functions constructed using human stem cells. This area has tremendous promise, and, in principle at least, it gives scientists ways to model the human brain without experimenting on other animals. Organoid research is steaming ahead with great self-confidence, and even with a sense of humour, as shown by terms such as ‘DishBrain’.

I suspect the humour will start to drain away as researchers face up to the gravity of what they are doing. As ethical concerns grow, labels that playfully exaggerate the similarity with human brains will give way to cautious terminology that emphasizes difference. We need to be careful on both sides. We must not overestimate the similarities. But at the same time, we must not rule out the possibility of genuine sentience—ethically significant experience—in constructions made from living human brain tissue.

I have spoken to regulators in this area and found a great deal of worry and perplexity about how to regulate this emerging area of research. The potential scientific and medical benefits are very large. We should not crack down on it heavy-handedly. Yet, intuitively, there is a point at which we should stop doing this kind of research, no matter what the benefits. If we construct sentient beings and force them to live as disembodied brains on which we can experiment freely, we will have crossed an ethical line. The problem is what to do now, when we have not crossed that line but find it hard to see where the line is through the fog of uncertainty.

These questions cannot be fully separated from broader questions about the relation between sentience and human brain development. When a human fetus develops normally, the onset of sentience is no more clearly marked, and no less mysterious, than it is in an organoid. The uncertainty is agonizing, because important clinical decisions hinge on when exactly we start regarding the fetus as a potentially sentient being. As I’ll explain in Chapter 10, I do not think questions about the permissibility of abortion turn on the issue of sentience, even though they may initially seem to do so. But other very important decisions, such as whether to use anaesthetics during medical procedures on the fetus, do turn on this issue.

Some years ago, my eye was caught by the headline, ‘We’ve put a worm’s mind in a Lego robot’s body’.12 The article was about the OpenWorm project, a long-running attempt to emulate in computer software the entire nervous system of the nematode worm Caenorhabditis elegans, an animal with fewer than four hundred neurons (less than one-thousandth the size of DishBrain). Researchers on that project had put their latest emulation in control of a small robot and watched as the robot navigated its environment in something like (but, in truth, not all that much like) the way the original worm would. I was struck by a troubling thought: the same uncertainty about sentience that grips us when we think about invertebrates and human fetuses was beginning to resurface in artificial systems. If a worm could be sentient, could a neuron-by-neuron emulation of a worm in a computer also be sentient?

These fears about the emergence of artificial sentience, extremely niche and often dismissed back then, have since become rather more mainstream. I now fear we may achieve artificial sentience long before we realize we have done so. At the same time, we are also facing a different but perhaps even more urgent problem: people rampantly over-attributing sentience to systems that can skilfully mimic the behaviours that make humans think sentience is present.13 We already see signs of this with current large language models (LLMs). There is already a subculture in which people develop intimate emotional bonds with AI companions—or at least think they do. How can we tell skilful mimicry from the real thing?

In late 2022, two colleagues—Patrick Butlin and Rob Long—invited me to join an ambitious project that aimed to devise a list of indicators of sentience in AI.14 The media coverage of our eventual report was rather generous. Nature wrote ‘if AI becomes conscious, here’s how researchers will know’.15 In truth, talk of ‘knowledge’ is inappropriate. As I’ll explain in Part V, the difficulties we face in this area are even greater than those we face in the case of other animals. Other animals are not capable of gaming our criteria. They do not have an internet-sized corpus of training data to mine for effective ways of persuading human observers. So, when animals display a pattern of behaviour that is well explained by a feeling (such as pain), the best explanation is usually that they do indeed have that feeling. With AI, by contrast, two explanations compete: maybe the system has feelings, but maybe it is just responding as a human would respond, exploiting its vast reservoir of data on how humans express their feelings.

I could have written an inert discussion of abstract questions, floating past real-world decisions at a great distance. But I did not want to write that book. This book starts with the urgency of real life—matters of life and death that confront us all—and tries to find ways to decide, ways to agree.

The motto of my approach is ‘no magic tricks’. We start in a position of horrible, disorienting, apparently inescapable uncertainty about other minds, and then…the uncertainty is still there at the end. Sorry, it is inescapable. Anyone who tells you otherwise is not being honest or has not properly faced up to the problem. I am not in the business of selling magical escape routes from uncertainty. My aim is to construct a framework that allows us to reach collective decisions despite our uncertainty: decisions that command our confidence and reflect our shared values.

At the core of the framework is the thought that we need to find ways to err on the side of caution in these cases. The risks of over-attributing and under-attributing sentience are not equal. When we deny the sentience of sentient beings, acting as if they felt nothing, we tend to do them terrible harms. We are often responsible for those harms even though they were unintended, because our actions were negligent or reckless. Think here of Kate Bainbridge. The lack of any intention to cause psychological trauma on the part of her doctors does not mean they acted properly. Meanwhile, when we treat non-sentient beings as if they were sentient, we may still do some harm (if the precautions we take are very costly and time-consuming and distract our attention away from other cases), but the harms are often much less serious and of a different, more controllable kind.

In other contexts (especially in discussions of the environment and public health), this type of idea is sometimes called ‘the precautionary principle’. But the logic of my framework is not the following: ‘the precautionary principle’ is the correct general decision rule, so we must apply it to this particular set of decision problems. That is not what I’m saying. The idea is rather that the asymmetry of risk that stares us in the face when we think about cases at the edge of sentience presents us with strong and obvious reasons to start thinking about precautions, independently of whether this is also a good way to approach other policy challenges. The motivation for erring on the side of caution here is ‘bottom-up’—it comes from reflecting on the asymmetries of risk that jump out at us in these specific cases—rather than ‘top-down’, flowing from some high-level commitment to some general truth called ‘the precautionary principle’. I doubt there is any such general truth. What I mean will probably become clearer when we reach Chapter 6.

This general idea has been around for a long time in discussions of sentience (the history will be reviewed case by case in later chapters).16 My framework, however, combines the thought that we need to err on the side of caution with another, equally important thought: it is not enough to simply advise people to ‘err on the side of caution’ and leave it there. Almost any action at all, from outrageously costly precautions to the tiniest gesture, can be described as ‘erring on the side of caution’. We need ways of choosing among possible precautions. As in other areas where precautionary thinking is important, the crucial concept we need is proportionality: our precautions should be proportionate to the identified risks.17

I do not think proportionality reduces to a cost-benefit calculation. It requires us to resolve deep value conflicts, conflicts that obstruct any attempt to quantify benefits and costs in an uncontroversial common currency. Further down the line (in Chapters 7 and 8), I will give a pragmatic analysis of what it means to be proportionate, emphasizing that proportionate responses need to be permissible-in-principle, adequate, reasonably necessary, and consistent (I call these the ‘PARC’ tests). I will then turn to the question: what sort of procedures should we use, in a democratic society, to assess proportionality? My proposals will give a key role to citizens’ panels or assemblies, which attempt to bring ordinary members of the public into the discussion in an informed way in order to reach recommendations that reflect our shared values.

Because I think these decisions should be made by democratic, inclusive processes—and not by any individual expert—I think my own proposals about specific cases should be read as just that: proposals. They are not supposed to be the final word on any of these issues. I am not auditioning for the role of ‘sentience tsar’. It would be a mistake for any government to implement my proposals straight away, without discussion and debate. But I have given a lot of thought to what courses of action are plausibly proportionate to the challenges we currently face, and I am publishing my proposals in the hope of provoking debates I see as urgently needed. If I succeed in stimulating discussion, I can dare to hope the discussion may lead, via democratic and inclusive processes, to action.

My framework aspires to generality, but it also tries not to lose sight of the great differences between cases at the edge of sentience. There is a question of taste when humans with brain injuries are discussed in the same book as non-human animals. It raises the question: are you drawing an equivalence between the two cases? Are you saying that a brain injury can render a person less than human? That is not what I am saying at all. I think my repeated disavowals of it will make that clear enough. I am not claiming that there is a moral equivalence between these cases, or that our obligations towards an injured person are the same as our obligations towards other animals. Sensitivity to the vast differences between these cases is absolutely crucial.

What these cases do have in common is a resemblance in our state of uncertainty when we, as decision-makers, are forced to choose what to do. We must somehow move from horrible, vertiginous uncertainty to action. Our actions will have consequences, those consequences will depend on facts we are not in a position to know, and we may never know what the consequences were, even in hindsight. In all these cases, we feel a general imperative to err on the side of caution but are left wondering what erring on the side of caution requires of us. What precautions must we take and why? Is it possible to go too far in the direction of taking precautions and, if it is, where are the limits?

Once we see that our predicament has this common shape across all cases at the edge of sentience, it raises the hope that there might be versatile, transferrable insights about how to handle that type of predicament: how to move from uncertainty to action, how to adopt an appropriately precautionary attitude. It is in that spirit that I am bringing these cases together in the same book.

Parts I and II of the book will gradually assemble the pieces of an adequate precautionary framework. As I see it, a good framework for designing public policy should ideally be based on what John Rawls called overlapping consensus: principles that all reasonable people, for all their diversity and disagreement, can endorse for the right reasons.18 But to find principles all reasonable people could get behind, we first need to understand what sentience is and why there is so much disagreement about it in the first place—and which views in that space of disagreement are reasonable and which are not. There is a very wide ‘zone of reasonable disagreement’, and a good framework for making decisions will respect all the views that lie within that zone, as difficult as that may be. So, the first step towards a good framework is to map out that zone.19

In doing this, I will be trying to take a step back from my own personal opinions. Among the reasonable views, there are those I see as more or less likely (and I think my opinions will come across) but access to the zone of reasonable disagreement does not require my stamp of approval. It is fundamentally about whether the view is shaped by, and responds to, evidence and argument.

I imagined, years ago, a book that would begin with a general discussion of precautionary thinking and the science-policy interface and would only then zoom in on the special case of sentience. I came to see that this was the wrong approach. Intellectually wrong, because I think the reasons that drive precautionary thinking about sentience are ‘bottom-up’ rather than ‘top-down’ in the sense just explained. But also not true to the trajectory of my own thinking. For me, worrying about sentience has been at the core of this project from the beginning. So, this book maintains a relentless focus on sentience.

An upshot is that there is no natural place for me to acknowledge some of the influences on my approach from the wider philosophical and ‘science studies’ literature, so I want to do that at the outset. The literature on other precautionary principles is a major influence, especially the work of Daniel Steel, Stephen John, and Andy Stirling.20 The literature on values in science and inductive risk has also shaped my approach, notably the work of Heather Douglas.21 So has the literature on the proper relationship between science and policy in a democratic society, in particular the work of Philip Kitcher and Sandra Mitchell.22 The deliberative democracy literature, and especially the work of Helene Landemore, Alexander Guerrero, and John Dryzek, has also left a significant mark.23 And I have been inspired by analyses of very different cases by Anna Alexandrova (on well-being), Richard Bradley and Katie Steele (on climate change), Tim Lewens (on mitochondrial donation), and Anya Plutynski (on cancer screening).24 I am highlighting these authors here because they have not written directly on the topic of sentience—those who have will be acknowledged in later chapters.

When I first wrote about sentience and the precautionary principle, in 2017, more than twenty commentators kindly offered responses to my arguments.25 When I wrote another target article (with Andrew Crump, Alexandra Schnell, Charlotte Burn, and Heather Browning) in 2022, this time on sentience in decapod crustaceans, we received thirty commentaries.26 These critical responses have ended up shaping my thinking in important ways. I am very grateful to the editor of Animal Sentience, Stevan Harnad, for facilitating this process, and for his tireless work to encourage everyone to think more carefully about contested cases of sentience.

There is a family of cases at the edge of sentience. In these cases, grave decisions hinge on whether we regard sentience—initially introduced, informally, as ‘ethically significant experience’—to be present or absent in a person, animal, or other cognitive system. The family includes people with disorders of consciousness, embryos and fetuses, neural organoids, other animals (especially invertebrates), and AI technologies that reproduce brain functions and/or mimic human behaviour.

It is worth studying these cases together not because there is a moral equivalence between them but because they present us with similar types of uncertainty. We need frameworks for helping us to manage that uncertainty and reach decisions. This book aims to develop a consistent precautionary framework that enshrines—but also goes beyond—the insight that we must err on the side of caution in these cases, take proportionate steps to manage risk, and avoid reckless or negligent behaviour. Where sentience is in doubt, we should give these systems the benefit of the doubt. What that means in practice will be considered in the rest of the book.

Notes
1. I learned about this case via Syd Johnson’s (2022) book on the ethics of managing disorders of consciousness, which relates the case in more detail. The case was originally documented by Wilson and Gracey (2001).

3. Here I introduce a convention I will use throughout the book: the use of ‘they’ rather than ‘it’ to describe a sentient being.

4. See, especially, those who defend higher-order theories that link consciousness to granular prefrontal cortex (e.g. Rolls 2004).

5. For a wonderful tool for dating divergences, see https://www.timetree.org. The methodology is explained by Kumar et al. (2017).

6. Throughout the book, I will follow Balcombe’s (2016) suggestion to say ‘fishes’ not ‘fish’ to help us remember that we are talking about individual animals.

8. Divergences this ancient are exceptionally hard to date precisely. On the difficulties, see Peterson et al. (2004).

16. My own first encounter with the idea was in a paper by R. H. Bradshaw (1998).

17. Colin Klein (2017), in a commentary on my work, urged me to think more about proportionality—and was right.

18. Rawls (1993). Wolff (2020) has emphasized the wide relevance of the ‘overlapping consensus’ concept to public policy challenges, including challenges concerning non-human animals.

19. Federico Zuolo (2020) has undertaken a related task, mapping out reasonable disagreement in the specific case of the human treatment of other animals. The zone of reasonable disagreement about the edge of sentience is in some dimensions rather wider. It includes, for example, disagreement about substrate neutrality vs sensitivity (§3.5).

20. Steel (2015); John (2010, 2011, 2019); Stirling (2007, 2016). See also Buchak (2019); Clarke (2005); Dreyer et al. (2008); Driesen (2013); Gardiner (2006); Hartzell-Nichols (2012); Morgan-Knapp (2015); Munthe (2011); Persson (2016); and Steele (2006). I also count as influences those who have criticized ‘the precautionary principle’ as a general decision rule, such as Carter and Peterson (2015); Sunstein (2005); and Thoma (2022a). Their criticisms dissuaded me from arguing for precautions in a top-down fashion.

21. Douglas (2009). See also Steele (2012) and the case studies collected in Elliott and Richards (2017).
