How should you update your (degrees of) belief about a proposition when you find out that someone else — as reliable as you are in these matters — disagrees with you about its truth value? There are now several different answers to this question — the question of ‘peer disagreement’ — in the literature, but none, I think, is plausible. Even more importantly, none of the answers in the literature places the peer-disagreement debate in its natural place among the most general traditional concerns of normative epistemology. In this paper I try to do better. I start by emphasizing how we cannot and should not treat ourselves as ‘truthometers’ — merely devices with a certain probability of tracking the truth. I argue that the truthometer view is the main motivation for the Equal Weight View in the context of peer disagreement. With this fact in mind, the discussion of peer disagreement becomes more complicated, sensitive to the justification of the relevant background degrees of belief (including the conditional ones), and to some of the most general points that arise in the context of discussions of scepticism. I argue that, thus understood, peer disagreement is less special as an epistemic phenomenon than may be thought, and so that there is very little by way of positive theory that we can give about peer disagreement in general.
1. The question, and some preliminaries
Suppose you trust someone — call him Adam — to be your epistemic peer with regard to a certain topic, for instance philosophy. If asked to evaluate the probability of you giving a correct answer to an unspecified philosophical question and the probability of Adam doing so, you give roughly the same answer. You treat Adam as your philosophical peer (and for now we can safely assume that he is indeed your peer, and that you are justified in so treating him). You then find out that you disagree with Adam about a given philosophical question — for some philosophical p, you believe p, and Adam believes not-p. How should you update your belief with regard to p given this further evidence (Adam’s view regarding p)? Should you be less confident now in p than you were before finding out about Adam’s view? If so, how much less confident?
One natural reply is the Equal Weight View, according to which you should give equal weight to your belief and to that of the one you take to be your peer, and so in our case suspend judgement about p. Here, for instance, is Adam Elga’s official presentation of the view of which the Equal Weight View is a particular instance:1
Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement.2 (Elga 2007, p. 490)
If you treat Adam as your peer prior to the disagreement, your prior conditional probability that you would be right in a case of disagreement is .5. And this, according to Elga’s view, should be your probability that you are right when the disagreement enters the scene. That is, you should suspend judgement about p. It cannot seriously be denied, I think, that the Equal Weight View has considerable appeal (more on this appeal shortly).
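Elga’s requirement can be put schematically as follows (the notation here is mine, offered only as a gloss on the quoted passage, not as Elga’s own formalism):

```latex
% Let R be the proposition that you (rather than the advisor) are right
% about the disputed issue, and let C be everything you have learned
% about the circumstances of the disagreement (but not anything that
% depends on your reasoning through the disputed issue itself).
P_{\mathrm{new}}(R) \;=\; P_{\mathrm{prior}}(R \mid C)
% In the peer case, where you antecedently trust yourself and Adam
% equally, P_prior(R | C) = .5 — hence suspension of judgement about p.
```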
The Equal Weight View, however, seems to give rise to highly implausible consequences. Perhaps chief among them is the one Elga (2007, p. 484) calls ‘spinelessness’: for it seems to follow from the Equal Weight View, in conjunction with plausible assumptions about the extent of disagreement among those you (even justifiably) take to be your philosophical peers, that you should be far less confident in your philosophical views than you actually are, indeed perhaps to the point of suspension of judgement. And it follows from the Equal Weight View, in conjunction with plausible assumptions about the extent of disagreement among those you take to be your moral and political peers, that you should be far less confident in your moral and political views, perhaps to the point of suspension of judgement. And so on. If the Equal Weight View does entail the requirement to be epistemically spineless, this seems to count heavily against it. But what would be an acceptable alternative view? The Extra Weight View — according to which, roughly, the fact that one of the two competing views is mine gives me reason to prefer it — seems just as suspicious, perhaps the epistemic analogue of some kind of chauvinism.
What should we say, then, about cases of peer disagreement? Given the fact that Adam rejects a philosophical p, and that you (even justifiably, and rightly) take him to be your philosophical peer, how if at all should you revise your degree of belief in p?
Before proceeding, though, we need to get some preliminaries out of our way. First, our question is of course entirely normative. The question is how we should revise our degrees of belief given peer disagreement, not the psychological question of how we in fact respond to such disagreement. (Psychological questions may be relevant to the normative one, but such relevance has to be argued for.)
Second, the phenomenon of disagreement is sometimes used in arguments that are supposed to establish a metaphysical rather than an epistemological conclusion. In ethics, for instance, there are many attempts to show that the phenomenon of moral disagreement supports some less-than-fully-realist metaethical view. I am highly sceptical of such arguments,3 but we can safely ignore all this here. Our concern here is with cases in which some metaphysical non-factualism, or relativism of some sort, is just not a relevant option (perhaps because we have strong independent reasons to rule it out). Our question, then, is entirely epistemological.4
Third, I will put things in terms of degrees of belief rather than all-or-nothing belief. In this I follow most of the literature focusing on peer disagreement (though Feldman (2006) conducts his discussion in terms of all-or-nothing beliefs). Indeed, as Kelly (forthcoming, p. 6) notes, there is in our context special reason to focus (at least initially) on degrees of belief. The point relevant to my concerns here is that it is quite natural to ask to what degree our confidence in a belief should be sensitive to peer disagreement. I suspect that much of what I am about to say can be applied, suitably modified, to all-or-nothing beliefs as well, but I will not do so here. When I speak of beliefs as if they were all-or-nothing below, then, I do this just as shorthand for degrees of belief.
Fourth, by your ‘peer’ I will understand someone who is, somewhat roughly, antecedently as likely as you are to get things right (on matters of the relevant kind). This may be due to the fact that she is as smart, rational, sensitive, imaginative, etc. as you are. But whether this is so is not to the point here — what is relevant here is just that she is (and is taken by you to be) as likely as you are to get things right.5 Notice also that your taking Adam to be your peer amounts to your having some positive attitude — a belief, perhaps, or a conditional probability — to the effect that he is your peer. The absence of an attitude — your failing to have a belief that he is more likely than you are to get things right, and your failing to have a belief that he is less likely than you are to get things right — does not suffice for your taking him to be your peer, in the sense that I will (following the literature) be interested in.6
Fifth, and again following the literature here, I will focus on cases where the disagreeing peers share all the relevant evidence, and indeed where this very fact is a matter of common knowledge between them. Typical examples include a simple arithmetical calculation (what evidence could anyone possibly lack here?), philosophical debates where all concerned know of all the relevant arguments, and perhaps also moral debates of a similar nature.7 Such a restriction can simplify matters (you do not have to worry, for instance, about the possibility that Adam’s disagreeing with you is some evidence that there is further evidence — evidence you lack — that not-p), and as the examples above show, this restriction does not make things unrealistically simple. Nevertheless, given the nature of the debate over peer disagreement, and its general epistemological context — namely, that of considering evidence for one’s own fallibility in general, and for a specific error one is making in particular8 — this simplifying assumption is not unproblematic: we are, after all, no less fallible with regard to the question whether our peer has some evidence we lack than with regard to any other relevant judgement. Ideally, one would want an answer to our question (how if at all to revise one’s beliefs given peer disagreement) without relying on such an assumption. Again, I suspect much of what I say below can be applied to the more generally realistic cases as well, but I will not argue the point here, and will for the most part ignore this complication (though I return to it briefly in the final section below).
Sixth, our epistemological question is a rather focused one. The question is not what you should — all things considered — believe regarding p. The question is, rather, what pro tanto epistemic reason is given to you — if any — by the disagreement with Adam; whether, in other words, the disagreement itself gives you epistemic reason to lower your confidence in p, and by how much. As will become clear later on (in discussing Kelly’s relevant views), this distinction is not without importance.
Seventh, I will be assuming that for any given state of evidence, there is a unique degree of belief that it warrants. I will, in other words, assume the Uniqueness Thesis (see Feldman 2007), that is, that there is no epistemic permissiveness. This is not because I am convinced that the Uniqueness Thesis is true.9 Rather, I think that what is interesting about peer disagreement does not depend on what we end up saying about epistemic permissiveness.10 Be that as it may, the discussion that follows assumes Uniqueness.11
Finally, the question with which I will be concerned here is not practical in any straightforward sense. I will be discussing the relevance of peer disagreement to epistemic, not pragmatic, justification. Christensen (2007, p. 215) is right, then, when insisting that even if we can show that (say) philosophical discussion is best promoted if disagreeing peers stand their respective grounds (in a kind of efficient marketplace-of-arguments), still nothing follows from this with regard to the fate of the Equal Weight View. Just as importantly, though, Christensen (2007, p. 204) is wrong in (partly) relying on intuitive judgements about what should be done in cases of disagreement (his most powerful example is that of disagreement between physicians about a possible treatment). The considerations relevant to answering such practical questions are presumably varied, and they include more than just the purely epistemological ones in which we are interested here.12 Of course, the epistemic considerations may be relevant to these practical questions as well, and so such practical examples need not be entirely irrelevant. But their relevance can at best be indirect, and needs argumentative support (like the claim that what best explains some practical judgement is some epistemic one). Now, from time to time I will myself resort to analogies with more practical questions, but the analogies will hold (or so I shall claim) on a much more abstract level.
2. The truthometer view (or: more on the appeal of the Equal Weight View)
Suppose you have two thermometers in the reliability of which you (justifiably) have equal trust.13 On a specific occasion you want to know the temperature outside, and you use both thermometers, which give different readings, say one indicating it is 65 degrees Fahrenheit and the other 70. You have, let us assume, no further evidence on the matter, and in particular it does not ‘feel’ to you more like 70 than like 65, or the other way around. What should you believe about the temperature? Presumably, you have no (overall) reason to believe it is 65 degrees rather than 70, or 70 rather than 65. (You may be justified in believing it is either 65 or 70, or perhaps between 65 and 70, or perhaps between 66 and 69, or between 62 and 73, but none of this is relevant for our purposes.) Your (justified) prior probabilities that each thermometer would be right (conditional on everything you have learned about the circumstances of their ‘disagreement’) are the same for both, and so you are no more justified in relying on one than on the other. It goes without saying that none of this changes if you first find out about the reading of just one of the thermometers, form your belief accordingly, and only then find out about the reading of the other. In such a case you should, upon finding out about the other reading, update your beliefs about the temperature so that symmetry is restored.
Now suppose you have two friends, Adam and Tom. Adam and Tom are mathematicians in whose reliability about mathematical matters you (justifiably) have equal trust. On a specific occasion you want to know whether a given formula is a number-theoretic theorem, and you ask both friends, who give different answers, one saying that it is and the other that it is not. You have, let us assume, no further evidence on the matter, and in particular you do not yourself go through the purported proof (perhaps because it is too complicated for your mathematical abilities). What should you believe about the formula’s purported theoremhood? It seems rather clear that you have no (overall) reason to believe it is a theorem or that it is not one. Your (justified) prior probabilities14 that each mathematician would be right (conditional on everything you have learned about the circumstances of their disagreement) are the same for both, and so you are no more justified in relying on one than on the other. It goes without saying that none of this changes if you first find out about the result of just one of the mathematicians, form your belief accordingly, and only then find out about the result of the other. In such a case you should, upon finding out about the other’s result, update your degrees of belief about the formula’s theoremhood so that symmetry is restored.
In such a case, then, it seems clear that you should treat Adam and Tom as perfectly analogous to thermometers — as truthometers. Whatever else they are (and whatever else they are to you), they are each a mechanism with a certain probability of issuing a true ‘reading’ of theoremhood (or whatever), and the way to take their views into account — the way, that is, to revise your own views given the evidence of theirs — is exactly the way to take the reading of thermometers into account. That the underlying mechanism of your friend-truthometers is somewhat different from that of your thermometers seems neither here nor there.
But, of course, you yourself are — whatever else you are — yet another truthometer of this sort. Just as Adam and Tom — your mathematician friends — have a certain track record with regard to such matters, and just as you have a view on how likely each of them is to be right on such matters, so too you have such a track record, and indeed you have a view on how likely you are to be right on such matters. If your prior probabilities that Adam and Tom would be right are equal, you should give their views (in a case of disagreement) equal weight. Well, what is different in the case of a disagreement between Adam and you, given that your (justified) prior probabilities that Adam and you would be right are equal?15 If you give extra weight to your own view in the case of a disagreement between you and Adam, is this not like giving Tom’s view extra weight in a case of disagreement between Adam and Tom simply because you heard his advice first, or because he is closer to you? (See Feldman 2006, p. 223.) What is so special about your own view, if you take yourself to be just as likely as Adam to be right about such things? If you should treat — for epistemic purposes — Adam and Tom as truthometers, should you not also treat yourself as one?
3. On the ineliminability of the first-person perspective (or: why the truthometer view must be false)
Yes, you should treat yourself as a truthometer, but you should not treat yourself merely as a truthometer.
Here is a first hint at why this is so. Suppose we accept the Equal Weight View. Then, to repeat, ‘upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right.’ But, of course, the prior conditional probability mentioned here is your prior conditional probability. And here too you may be wrong. Indeed, you may have views on how likely it is that your prior conditional probability is right (or that your belief about these probabilities is true),16 and how likely it is that, say, Adam’s prior probability is right. Perhaps, for instance, you think both of you are equally likely to be right about such matters. So if you and Adam differ on the relevant prior conditional probability, the Equal Weight View requires that you give both your views equal weight. But of course what does the work here is your prior conditional probability that you or Adam would be right about prior conditional probabilities. And here too you may have views about how likely you and others are to get it right, but here too this view will be your view, and so on, perhaps ad infinitum.17
Now this is not a vicious regress exactly: for one thing, you may want to apply this kind of reasoning on a case-by-case basis, only, as it were, when you have to, or when the relevant questions arise. And they need not all arise, certainly not simultaneously. But what the previous paragraph does show is that in forming and revising your beliefs, you have a unique and ineliminable role. You cannot treat yourself as just one truthometer among many, because even if you decide to do so, it will be very much you — the full, not merely the one-truthometer-among-many, you — who so decides. So the case in which Adam and Tom differ is after all different — different for you, that is — from the case in which you and Adam differ. The point is naturally put Nagelianly:18 even though from a third-person perspective — which you can take towards yourself — we are all truthometers, still the first-person perspective — from which, when it is your beliefs which are being revised, it is you doing the revisions — is ineliminable.19
It is important to distinguish here between mere seemings and full-blooded beliefs. Suppose that both of us dip our hands in two bowls of water, and that to me one seems warmer than the other, while to you both seem equally warm. In such a case I think there is no problem in treating oneself merely as a truthometer (a thermometer, really, though not a very good one), and so there is no problem — not this problem, anyway20 — in applying the Equal Weight View here. But we should not over-generalize from such cases. In this case the seemings are both unreflective, a-rational immediate seemings. And it seems to me with regard to those we can perfectly happily settle for the third-personal point of view. With regard to such seemings, in other words, and even though they are still very much one’s own seemings, still the first-person perspective is unproblematically eliminable: we can epistemically distance ourselves from our seemings in a way we cannot distance ourselves from our full-blooded rational (as opposed to a-rational, not to irrational) beliefs, those that are based on a reflective consideration of the evidence, those in which the believing self is fully engaged.21 Once you reflect on a question, asking yourself, as it were, what is the truth of the matter, and so what is to be believed — once the believing self is fully engaged — you can no longer eliminate yourself and your reflection in the way apparently called for by the truthometer view. Of course, the distinction between full-blooded beliefs and seemings is not sharp or fully clear. While perceptual cases seem to me to be rather clear seemings-cases, other cases are not as clear. Borderline cases may include (some cases of) reliance on memory, or perhaps cases of more reflective reliance on perception. But however we go regarding such borderline cases, all that is needed for my point here is that there are some paradigmatic cases on both sides of this vague distinction. 
And cases where the believing, reasoning self is fully engaged, cases in which our response to the evidence is reflective, are different from seemings-cases.22 While the truthometer view may work with regard to the latter, it does not work with regard to the former.23
Thus, neither can we treat our believing selves merely as truthometers, nor (consequently) is it the case that we should. But my point is not just an ought-implies-can kind of point. To see this, consider that some ideals are such that, upon realizing that they cannot be fully or globally applied, they retain their initial appeal, and so we reduce our aspirations from fully applying them to approximating them (for instance, upon realizing that you cannot help all those in need of help, the ideal of helping those in need does not lose its appeal; it is just that you proceed to approximate it to the extent that you can). But other ideals are such that, upon realizing that they cannot be fully or globally applied, they do lose their initial appeal, and so we proceed to question them more generally, we take their lack of full or global applicability to be reason to question them more locally as well. Consider, for instance, the sometimes-given advice to always define the terms you use in asking a question before proceeding to answer it. Upon realizing that this piece of advice cannot be globally adhered to (because of the imminent infinite regress), we do not retreat to the claim that we should always define our terms as much as possible, or some such. Rather, we take the global inapplicability to be reason to reconsider the goodness of the advice even where it can be applied: we take it, in other words, as evidence that the advice was altogether confused. The case of the truthometer view seems to me to be of this latter kind. Once it is clear that the truthometer view’s requirement cannot be universally complied with — at least, that is, if the most radical of scepticisms is to be avoided — this view loses much of its appeal even for the cases in which it can be complied with. 
With this fact in mind, in other words, it becomes clear that there is some deep confusion underlying the truthometer view, and so that it is not even the kind of impossible ideal that should be aspired to whenever possible. The truthometer view is, quite simply, false. Of course, perhaps we should still, in some circumstances, treat ourselves merely as truthometers (analogously: perhaps we should still define some of our terms, some of the time). But this does not make the truthometer view — as a general view — any more plausible.
What follows from all this for the question of peer disagreement? At this point, not much. In particular, that the truthometer view cannot be true in general does not entail that the Equal Weight View is false. It was, after all, I who argued that the Equal Weight View’s philosophical appeal comes from the truthometer view, and it is open to an adherent of the Equal Weight View to base it on other considerations.24 Furthermore, that the first-person perspective cannot be completely eliminated does not entail that it cannot be eliminated from a focused discussion here or there: we do, after all, have evidence, and also views, regarding our own reliability on many matters, and it would be foolish to ignore them when forming and revising our beliefs. At least sometimes, then, we do — and should — take such a third-person perspective towards our beliefs (certainly of our past and future selves, and perhaps sometimes also of our present selves). It is open to the proponent of the Equal Weight View to argue that this is exactly so in cases of peer disagreement.25 So the ineliminability of the first-person perspective does not — all by itself — spell doom for the Equal Weight View. But it is not without significance here either. For once it is clear that we cannot consistently treat ourselves as truthometers across the board, if it can be shown that there is no more reason to treat ourselves as truthometers in cases of peer disagreement than elsewhere, the Equal Weight View loses, it seems, much of its appeal.
4. An interlude: against Kelly’s Right-Reasons and Total Evidence Views26
I will get back to the Equal Weight View shortly. But let me pause to comment on two related alternative views, both from Thomas Kelly.27 This interlude is justified both because it is of interest in its own right (or so at least I think), and because some of the lessons learned here will prove useful in the following sections.
In his earlier treatment of the issue, Kelly (2005) seemed to flirt with what I will call the I Don’t Care View, according to which the disagreement itself is epistemically irrelevant. If you have carefully considered the evidence and have come to the conclusion that p, then the contingent fact that others differ should have no effect on you. Of course, if you know that someone equally rational (etc.) can understand all your evidence and still believe not-p, this is epistemically important. But what does the work here is not the disagreement, but rather the weakness of the evidence (as witnessed by the possibility of a perfectly rational thinker not being convinced by it). This is why, Kelly (2005, pp. 181 ff.) argues, the actual disagreement (as opposed to possible rational disagreement) is epistemically irrelevant.
I take it even Kelly no longer believes the I Don’t Care View (if he ever did), and so we can be quick here. The problem is not just that the I Don’t Care View yields highly implausible consequences (should you really remain as confident in your calculation even when another differs? When two others differ? When many, many more differ?).28 The deeper problem is that this way of viewing (actual) disagreement ignores the fact that the discussion of peer disagreement is located in the wider context of epistemic imperfection. We are here in the business of taking our own fallibility into account, and peer disagreement may very well be a relevant corrective. True, if we had a god’s eye view of the evidence — infallibly knowing what it supports, and infallibly knowing that we infallibly know that — actual disagreement would be epistemically irrelevant. But we do not, and it is not.29
Kelly (2005) also defends a rather strong asymmetry between the differing peers. Assuming — as we do here — that there is no epistemic permissiveness, at least one of the peers is epistemically malfunctioning on this occasion, not responding to the evidence in the (uniquely) right way. So some asymmetry is already built into the situation of the disagreement. Kelly takes advantage of the opportunity this asymmetry opens up, and argues that the right answer to our question — how to revise one’s degrees of belief given peer disagreement — is different for the two peers. The one who responded rightly to the evidence should do nothing in the face of disagreement. The one who responded wrongly should take the disagreement as (further) reason to revise his degree of belief. But this view — the Right Reasons View — is flawed in more than one way.
First, to repeat, it is highly implausible that peer disagreement is epistemically irrelevant even to the one who responded correctly to the initial evidence.
Second, our question, as you will recall, was the focused one about the epistemic significance of the disagreement itself. The question was not that of the overall epistemic evaluation of the beliefs of the disagreeing peers. Kelly is right, of course, that in terms of overall epistemic evaluation (and barring epistemic permissiveness) no symmetry holds. But from this it does not follow that the significance of the disagreement itself is likewise asymmetrical. Indeed, it is here that the symmetry is so compelling.30 The disagreement itself, after all, plays a role similar to that of an omniscient referee who tells two thinkers ‘one of you is mistaken with regard to p’. It is very hard to believe that the epistemically responsible way to respond to such a referee differs between the two parties. And so it is very hard to believe that the epistemic significance of the disagreement itself is asymmetrical in anything like the way Kelly suggests.
Third, and relatedly, imagine a concerned thinker who asks her friendly neighbourhood epistemologist for advice about the proper way of taking into account peer disagreement. Kelly responds: ‘well, it depends. If you have responded to the initial evidence rationally, do nothing; if you have not, revise your degrees of belief so that they are closer to those of the peer you are in disagreement with.’ But this is very disappointing advice indeed. To be in a position to benefit from this advice, our concerned thinker must know whether she has responded rightly to the initial evidence. But, of course, had she known that, she would not have needed the advice of an epistemologist in the first place.31 Perhaps this is not a conclusive objection to Kelly’s view: it is not, after all, immediately obvious that epistemic truths of the kind at stake here have to be able to play the role of epistemic advice. But at the very least this result places a further burden on the Right Reasons View.
In his more recent treatment of peer disagreement, Kelly (forthcoming) defends a somewhat different view, the Total Evidence View, according to which ‘what it is reasonable to believe [in a case of peer disagreement] depends on both the original, first-order evidence as well as on the higher-order evidence that is afforded by the fact that one’s peers believe as they do’ (Kelly forthcoming, p. 32). Perhaps the appeal of this view is best appreciated through the following point (which I offer here in a somewhat tentative tone): according to the Equal Weight View we are epistemically required to ignore some evidence.32 According to the Equal Weight View, you are required — after having evaluated the evidence, having come to confidently believe p (based on this evidence), and having come to realize that Adam confidently believes not-p — to ‘split the difference’, and update your degree of belief so that it will now be the average of the two initial degrees of belief (yours and Adam’s). The rationale for that is that now your evidence with regard to p consists of the reading of the two truthometers (you and Adam). And unless we are to endorse the I Don’t Care View, we already agree that the truthometers’ readings are indeed relevant evidence with regard to p.33 But where has all the other evidence gone? The Equal Weight View insists not just on the epistemic relevance of the peers’ beliefs, but also that — at this stage at least — their beliefs (or the truthometers’ readings) are the only relevant evidence. As Elga (2007, p. 489) insists, for instance,34 in updating your degrees of belief given the disagreement you are allowed to conditionalize on everything you have learned about the disagreement, except what depends on your initial reasoning to p (indeed, if this point is not insisted on, the Equal Weight View borders on vacuity, a point to which I will return).
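The ‘splitting the difference’ requirement, for the symmetric two-peer case, can be stated schematically as follows (again, the notation is mine, introduced only for illustration):

```latex
% cr_you and cr_Adam are the credences in p reached at the first stage,
% each on the basis of the (shared) first-order evidence alone.
cr_{\mathrm{new}}(p) \;=\; \tfrac{1}{2}\bigl(cr_{\mathrm{you}}(p) + cr_{\mathrm{Adam}}(p)\bigr)
% E.g. if cr_you(p) = .9 and cr_Adam(p) = .1, the revised credence is .5:
% judgement about p is suspended, whatever the first-order evidence
% itself supported.
```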
So the Equal Weight View requires that in the face of peer disagreement we ignore our first-stage evidence altogether. And this does not seem to be a virtue in an epistemological theory. Surely, even if others’ beliefs are relevant evidence, such evidence should be weighed together with all the other evidence we have, should it not? Should we not base our beliefs on the total evidence available to us? And once all the evidence is taken into account, it is not in general true that the disagreement-evidence will always dominate the first-stage evidence.35
There is, I think, something importantly right about this line of thought,36 but as it stands it cannot withstand criticism. Sometimes ignoring evidence is the epistemically right thing to do. Kelly (2005, p. 188) himself gives examples: if one piece of evidence statistically screens off some other piece of evidence, then in considering the former we should ignore the latter on pain of double counting. (Kelly offers the example of an insurance company evaluating the risks involved in a certain person’s driving: if the insurance company has rather precise information about the individual, also weighing in the more general information — say, information based on the person’s age or gender — may amount to such double counting.) But this, after all, is precisely what the proponent of the Equal Weight View should say about the suggestion to still consider — at the second stage — all of the initial evidence. All of this evidence was considered by you in coming to believe p (and by Adam, in coming to believe not-p). If at the second stage we take your believing p (and Adam’s believing not-p) to be evidence, this evidence arguably screens off the evidence that was already taken into account in the first stage. The line of thought suggested in the previous paragraph as motivating the Total Evidence View is thus guilty of double counting. (Again, the obvious way to avoid double counting would be to endorse the I Don’t Care View, but I take it we already have sufficient reason to reject it.)
Furthermore, it is not completely clear whether the Total Evidence View avoids the asymmetrical features that were so troubling in the Right Reasons View. It is clear, of course, that the overall epistemic evaluation of the two disagreeing parties will not be symmetrical, because of the sensitivity of the Total Evidence View to the initial evidence, to which — ex hypothesi — just one of the disagreeing parties responded rationally. But it is unclear whether the disagreement itself has, on this view, different epistemic effects on the two disagreeing parties, depending on who (roughly speaking) got things right.37 To the extent that the Total Evidence View retains the asymmetry present in the Right Reasons View, then, it is vulnerable to the relevant objections mentioned above.38
The Total Evidence View too, then, is not without problems. And though what I end up saying will resemble it in some important ways, we must not forget its problems. That the Total Evidence View (and the Right Reasons View) is so problematic may seem to lend further support to the Equal Weight View, to which I now return.
5. On being a peer, being believed to be a peer, and being justifiably believed to be a peer
Our question — how to revise our beliefs in the face of peer disagreement — is actually ambiguous between at least three readings. It is high time to disambiguate it.
We can ask, first, how to revise our beliefs in the face of disagreement with someone who is in fact our peer (that is, someone who is in fact equally likely as we are to get things right here). Or, second, we can ask how to revise our beliefs in the face of disagreement with someone whom we take to be our peer. Or, third, we can ask how to revise our beliefs in the face of disagreement with someone whom we justifiably take to be our peer.39 So far, I have been assuming that the relevant peer satisfies all these descriptions — Adam is in fact your peer, you believe as much, and furthermore you justifiably believe as much (one is tempted then to say — you know that he is your peer). But it will now prove useful to distinguish between these descriptions.
I will put to one side the question of how to revise one’s beliefs given a disagreement with someone who is in fact — perhaps unbeknownst to one — one’s peer. Though this question may be of some interest — especially, perhaps, to those whose views about epistemic justification are (I would say implausibly) externalist — it is not this question I am primarily interested in (nor is it the question the literature on peer disagreement seems interested in). The more interesting distinction, then, is that between how we should revise our beliefs given a disagreement with someone we take to be our peer, on the one hand, versus how we should revise our beliefs given a disagreement with someone we justifiably take to be our peer, on the other.
Christensen is not clear about this distinction (perhaps because he implicitly restricts the scope of his discussion to those justifiably believed to be peers). But Elga is rather clear on this point, so let us focus on his claims here. Throughout his discussion of peer disagreement, Elga (2007) talks just about what your prior conditional probability is that you (and others) would be right. Nowhere does he speak of what that prior probability should be. Indeed, Elga attempts to conduct the whole discussion while abstracting from questions of precisely that sort: Elga (2007, p. 483) says he has nothing to say (here) about when we should trust whom and to what extent. But this nonchalance is not, I now want to argue, something the Equal Weight View can afford.
If your prior conditional probability that you would be right (on a given topic, in case of disagreement with Adam) is, say, 1, but if you are not justified in having this prior conditional probability that you would be right (say, because your and Adam’s track records on this topic are equally good), then upon finding out about the disagreement with Adam you are most certainly not justified in completely discarding his opinion. In such a case, then, your probability that you are right should not be your probability that you would be right (that is, 1); rather, it should depend — to an extent, at least, even if not completely as the rationale of the Equal Weight View seems to require — on the probability that you should have had that you would be right. The point here is a particular instance of a common (if not entirely uncontroversial) one: that you believe p and if p then q cannot confer justification on your belief q (even if formed by inference from the previous two) unless your beliefs p and if p then q are themselves justified. Similarly, updating your degrees of belief according to a prior probability you have cannot render your updated degree of belief justified unless your prior probability is itself justified.40 So we have here a counterexample to (at least Elga’s official presentation of) the Equal Weight View.
Thus, the Equal Weight View should be revised.41 The question ‘How should we revise our beliefs in the face of disagreement with someone we believe to be our peer?’ is problematic, we have just seen, for if (for instance) we unjustifiably refuse to take someone to be our peer, what we should do in the face of disagreement is first come to believe that she is our peer and then treat her epistemically as one. The cleaner question, then, is that of how we should respond to disagreement with those we justifiably take to be our peers.42 And this is the question the revised version of the Equal Weight View43 — which from now on I will just call the Equal Weight View — answers.44
But if this is really the more interesting question, then answering it cannot be isolated in the way Elga wants from questions regarding the justification of trust. My point here is not just that without an answer to this question there is a sense in which the Equal Weight View is incomplete. The problem runs deeper than that, because once this question is raised, it seems to me clear that any plausible answer will undermine the Equal Weight View (even in its revised version). Here is why.
Of the many factors that go into the justification of the degree of trust you have in others, some surely have to do with how often they were right about the relevant matters. Not all of them do — and it is an interesting question what other factors are relevant and how. And perhaps there are cases in which this is not a relevant factor at all. (I am not sure, but perhaps some guru cases (see Elga 2007, p. 479) — where one completely defers to another — are of this weird sort.) But in most cases, a significant part of your evidence as to someone’s reliability on some topic is her track record (or that of the relevant set of people of which she is a member) on that topic, that is, how often she got things right, that is, how often she — as you believe — got things right. This is not exactly the same thing as how often she agreed with you. Perhaps, for instance, you now believe you were mistaken at the time, and only with the help of hindsight can you now see that back then she was right (and you wrong). But still, the fact that her view on these things is often (as you now believe) true is certainly a relevant factor in determining how likely she is to be right on the next question, the one about which you differ with her. It would be absurd, after all, to require that in determining the degree of epistemic trust we should accord someone we ignore her track record on the relevant matter (see Kelly 2005, p. 179).
This trivial observation supports three relevant conclusions here. First, the ineliminability point from section 3 is reinforced, for even if you can treat yourself as a truthometer when you just ask what your prior probability is that Adam (or you) would be right, you can no longer do so when you ask what the justified prior probability is that Adam (or you) would be right. Here you can no longer abstract from the question of what the truth of the relevant matters is, that is, of what you take to be the truth of the relevant matters.45
Second, recall another point from section 3 above, namely that seeing that we cannot universally treat ourselves as truthometers, what we are really looking for — as support for the Equal Weight View — are reasons that are at least somewhat peculiar to the case of peer disagreement. In order to philosophically motivate the Equal Weight View, in other words, its proponent has to show what it is specifically about the context of peer disagreement that makes an application of the truthometer view plausible. But now that we know that the Equal Weight View is better put in terms of one’s justified trust in others, and furthermore that in fleshing out the details of such justification you are not going to be able to treat yourself as a truthometer, it seems highly unlikely that our context is one where the (restricted) truthometer view should be applied.
And third, the fact that what counts is justified trust and not merely trust, together with the trivial observation about how such trust can gain justification, has implications for another key question in the peer-disagreement debate — that of the possible role of the disagreement itself as evidence against counting one’s interlocutor as peer, or as reason to demote him from his status as a peer. It is to this issue that I now turn.
6. Is the disagreement itself reason for demoting?
In the context of finding out how we should respond to peer disagreement, it is a key issue whether the disagreement itself can be sufficient reason to demote your interlocutor from the status of peer. Assume that the answer is ‘yes’. If so, then even if you take Adam to be your peer prior to the whole unpleasant business regarding p, once you find out about the disagreement, you can justifiably demote him from the status of peerhood, and stick to your own judgement about p (after all, that someone who is your epistemic inferior disagrees with you is not a strong reason to change your mind). Or perhaps — still under the assumption that the disagreement itself is reason for demoting — the right thing to do is not to split the difference exactly (as the Equal Weight View seems to require), and not to demote Adam completely and stubbornly stand your ground, but rather to reduce your confidence in p somewhat, and also demote Adam somewhat. But if the disagreement itself is somehow barred from counting as evidence as to Adam’s epistemic status, then the Equal Weight View seems to trivially follow from any plausible conditionalization principle46 (a point I return to below): after all, you believed that he would be just as likely as you are to be right in a case of disagreement, and now we have a case of disagreement, and this disagreement itself is no reason to change your mind about how likely Adam is to be right in such cases, so should you not now believe that it is equally likely that he is right as it is that you are? That, of course, would require endorsing the Equal Weight View.
Unsurprisingly, then, philosophers on all sides acknowledge the central role in this debate of the evidential status of the disagreement itself. Kelly’s position here (as articulated at e.g. Kelly forthcoming, p. 54) follows naturally from the Total Evidence View and what motivates it: if we should take all the evidence into account, there does not seem to be any reason to exclude the disagreement itself as relevant evidence too, both for the relevant reliability claims and for the first-order claims.47 Christensen (2007, p. 196) agrees that the disagreement may be evidence that counts against your interlocutor’s reliability, but he insists it counts equally as evidence against your own reliability, so that symmetry is restored (I argue against this move in the next section). And both Christensen and Elga allow as evidence with regard to your interlocutor’s reliability information about the circumstances of the disagreement (such as that you feel tired, or that Adam looks a little drunk, or that he apparently did not use the right reasoning procedures in this case, or that you find his conclusion utterly crazy and not just false),48 but they are very clear about disallowing the disagreement itself — the mere fact that Adam believes not-p while you believe p — as (asymmetrical) evidence against Adam’s reliability. Christensen (2007, p. 198) here insists on the reason to demote being independent of the specific disagreement under discussion, and Elga (2007, p. 489) insists that the relevant conditional probability (that you would be right, given disagreement) is prior ‘to your thinking through the disputed issue, and finding out what the advisor [in particular, your peer] thinks of it’.
It is thus (perhaps also) common ground that the fate of the peer-disagreement issue (and in particular, that of the Equal Weight View) is pretty much determined by the answer to the question we are now considering, namely, whether the disagreement itself can count as evidence that your interlocutor is less than fully your peer.49
Surprisingly, though, it is not at all clear how this is reflected in the official versions of the Equal Weight View. Again focus on Elga’s statement of the view of which the Equal Weight View is a particular instance, which reads:
Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement. (Elga 2007, p. 490)
To see the problem, start by noting that your probability that Adam would be right on a certain issue — Pr(R) — need not be the same as your probability that Adam would be right on that issue given that there is a disagreement between you two on it — Pr(R | D). Indeed, often these two should be different. Suppose, for instance, that you think both you and Adam are very good philosophers, and that both of you are highly likely to give the right answer to a philosophical question. And suppose further that you are justified in this trust in Adam and in yourself. Because you are so confident that Adam will get (philosophical) things right, your Pr(R) should be fairly close to 1, as is your probability that you would be right. Should you two disagree about a philosophical question, you would find this fact very surprising. After all, if you are almost always right, and Adam is almost always right, then you two are (almost) almost always in agreement. The surprising disagreement should give you pause. And I take it that given the disagreement, you should now be far less confident that Adam got things right (and similarly for you). And you know all this in advance, of course, so your prior conditional probabilities should reflect this fact. In other words, in this case Pr(R) ≫ Pr(R | D). The supporters of the Equal Weight View will, I am sure, agree.
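A toy calculation makes the gap between Pr(R) and Pr(R | D) vivid (the reliability figure 0.95 and the independence assumption are mine, for illustration only): suppose you and Adam each answer a binary philosophical question correctly with independent probability 0.95. Then:

```latex
% Disagreement (D) on a binary question occurs iff exactly one of you is right:
\Pr(D) = 0.95 \times 0.05 + 0.05 \times 0.95 = 0.095
% Adam is right while you two disagree iff Adam is right and you are wrong:
\Pr(R \mid D) = \frac{\Pr(R \wedge D)}{\Pr(D)} = \frac{0.95 \times 0.05}{0.095} = 0.5
% So Pr(R) = 0.95 while Pr(R | D) = 0.5: conditional on disagreement,
% confidence that Adam got it right drops sharply (and symmetrically for you).
```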
When Elga argues that in a case of disagreement your probability that you are right should equal your prior conditional probability that you would be right, he means (and can only mean) Pr(R | D), not Pr(R). Had he meant Pr(R), then in the case described his view would entail — incoherently — that you place almost full confidence both in Adam and in yourself, even in the case in which you believe p and Adam not-p.
So Elga argues that your posterior probability that Adam (or you) are right (Pr_f(R)) after finding out that you two disagree (D) should be the prior conditional probability that Adam (or you) would be right in such a case (Pr(R | D)). If this is what the Equal Weight View comes to, though, it borders on triviality, being just an instance of the most basic conditionalization principle — your posterior probability having found out a piece of evidence should equal your prior conditional probability — conditioned, that is, on that same piece of evidence. And indeed, at times it does seem that the Equal Weight View has (almost) nothing interesting, non-trivial to say regarding the question it was supposed to be an answer to — namely, how to take the disagreement of others into account. At one point, for instance, Elga (MS, n. 9) notes that the Equal Weight View is entirely consistent with just having degree of confidence 1 in oneself in all cases, thus completely discarding the views of others.
The only thing standing between the Equal Weight View and vacuity — but also, I now want to argue, the thing that renders it rather clearly false — is the explicit requirement to exclude from one’s conditionalization process the disagreement itself as reason for demoting (and more generally, the first-stage evidence). In Elga’s probabilistic framework, taking the disagreement itself as reason for demoting Adam amounts to taking the disagreement as reason for revising one’s conditional probability that Adam would be right (in a case of disagreement about a proposition of the relevant kind). And Elga — perhaps like other proponents of the Equal Weight View — assumes that this is never rationally permissible. The thought seems to be built into the very structure of conditionalization: if your conditional probability P(p | q) = x, and you find out that q, then your posterior probability P(p) should be x. The conditional probability is taken as given, not something that can be changed in view of new evidence. But why should that be so? In the case of non-probabilistic Modus Ponens arguments, for instance, the point is often made that if you (justifiably) believe if p then q, and then (justifiably) come to believe p, you may have two rationally permissible options: to come to believe q, or to take back your commitment to at least one of the premisses, for instance, to the conditional if p then q. Why not say, then, that when your (justified) conditional probability P(p | q) = x, and you find out q, then either your posterior probability P(p) should be x, or you should revise your prior conditional probability so that now it is P(p | q) = y (where y ≠ x), and then come to have degree of belief P(p) = y?50 In our context, given the disagreement with Adam, why think that your only acceptable way of restoring probabilistic coherence is by according Adam’s view equal weight, rather than by (at least partly) demoting him from his peer status?
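The two coherence-restoring options just described can be set out schematically (a sketch in Elga's probabilistic notation; the second option is the one the Equal Weight View must rule out):

```latex
% Given a prior conditional probability P(p | q) = x and new evidence q:
%
% Option 1 (rigid conditionalization): keep the conditional probability fixed.
P_{\text{new}}(p) = P_{\text{old}}(p \mid q) = x
% Option 2 (revise the prior): treat q as bearing on the conditional itself.
P_{\text{new}}(p \mid q) = y \ (y \neq x), \qquad P_{\text{new}}(p) = y
% In the disagreement case q is the disagreement itself, and Option 2
% corresponds to (at least partly) demoting Adam rather than splitting
% the difference.
```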
There may be larger issues involved — larger than I can hope to address adequately here. It is not clear to me whether the point from the previous paragraph is the beginning of a criticism of this Bayesian way of doing epistemology in general: this may very well just be a particularly powerful instance of what is sometimes called ‘the problem of rigid conditional probabilities’.51 If this is so, then the Equal Weight View may just immediately follow from a highly implausible version of a conditionalization principle (or it may not-so-immediately follow from a more plausible conditionalization principle, together with some implausible auxiliary premisses). But whether this is so or not, the crucial point for my purpose is that hidden here there is a substantive epistemological principle — the one barring revising conditional probabilities given new evidence, or perhaps given the new evidence that the conditional probability is conditioned on, or perhaps more particularly just the one barring revising your attitudes towards Adam’s reliability given the new evidence (that he is wrong this time). It is this principle that underlies the Equal Weight View’s insistence on not taking the disagreement as a reason for demoting the relevant peer.
But once this is noticed — that there is here a substantive, normative, epistemological principle underlying the Equal Weight View — the point from the previous section, it seems to me, applies. Especially given this substantive commitment — the commitment, namely, to the claim that the disagreement itself does not justify revising one’s degree of trust in oneself and others — the Equal Weight View cannot afford to be nonchalant about (other parts of) what justifies degrees of trust in oneself and in others. In order to avoid the kind of vacuity above, it must rule out the disagreement itself as possible justification for demoting. But in order to do that, it cannot settle (as Elga does) for just talking about the disagreement between you and those you take to be your peers. It must have something to say about whom you are justified in taking as your peers, and in particular, about you not being justified in demoting others from peerhood status based on the disagreement alone. If, for instance, you take Adam to be right very often in general, but only very rarely in cases of disagreement with you, the Equal Weight View — if it is to have anything interesting to say about how the views of others are to be taken into account — must have something to say regarding which stories could justify you in thinking that (for instance, the story ‘because Adam is wrong on this occasion’ must be ruled out by the Equal Weight View).
The Equal Weight View must, then, argue that the disagreement itself — the mere fact that Adam believes not-p when you take p to be true — is not legitimate asymmetrical evidence against his reliability. And if the discussion in the last few paragraphs is right, then the proponents of the Equal Weight View must defend this point independently from the formalities of conditionalization.52 They must, in other words, argue on substantive grounds that the disagreement itself is not relevant evidence for Adam’s reliability. Those of us who reject the Equal Weight View must argue to the contrary conclusion.
But given the ineliminability of the first-person perspective and the (at least moderate) self-trust that comes with it, why on Earth should you not see Adam’s belief not-p as reason to believe he is less reliable than you otherwise would take him to be?53 After all, when you believe p, you do not just entertain the thought p or wonder whether p. Rather, you really believe p, you take p to be true. And so you take Adam’s belief in not-p to be a mistake. And, of course, each mistake someone makes (on the relevant topic) makes him somewhat less reliable (on the relevant topic) and makes you somewhat more justified in treating him as less reliable (on the relevant topic).54 Why should this mistake, then, be any different? Why should it count — against Adam’s reliability — less than Adam’s previous mistakes?55 True, all of this is, as it were, from your own perspective, but it is precisely such an objection that is rendered irrelevant by the ineliminability point from section 3.
But wait, would this not beg the question against Adam (or against not-p)? You are trying to determine whether or not to believe p, and in the process you are trying to determine how much epistemic trust to place in Adam’s view on p (so that you can factor in the probative force of his view in your own relevant degrees of belief). So does taking p to be true, and using it as a premiss in an argument for demoting Adam from peerhood status, not simply amount to begging the question? No, it does not, or at least not in a problematic way. The crucial point to note is that there is really nothing unique going on here. Whenever you try to decide how much trust to place in someone, or indeed, when deliberating epistemically about anything at all, your starting point is and cannot but be your own beliefs, degrees of beliefs, conditional probabilities, epistemic procedures and habits, and so on. If this is a cause for concern, it is a cause for much more general concern (indeed, if this fact undermines justification, the most radical of scepticisms seems to follow, a point to which I return below). But if at least sometimes justification can be had despite the fact that your starting point is your starting point, if starting there does not amount to begging the (or a) question in any objectionable way, then it is very hard to see why the particular instance present in the case of disagreement should be especially worrying.56 The point, then, quite simply, is this: perhaps there is something suspicious in your taking the disagreement itself as evidence that Adam is less reliable than you may have thought, indeed as stronger evidence for his unreliability than for your own. But there is nothing more suspicious in this piece of evidence compared to pretty much all others. Hoping for the kind of justification that avoids this difficulty is a hope most of us have come to resist, perhaps a part of epistemically growing up. 
The mere disagreement, I conclude, is in general a perfectly legitimate piece of evidence against Adam’s reliability (in general, and in this case), and so often a good enough reason to demote him from the status of a peer.57
7. What is your reason for belief? That you believe that p, or that p?
This is not enough, though, for Christensen’s reply is still in play. ‘OK then’, the proponent of the Equal Weight View may now say, ‘I concede that the disagreement itself is a legitimate piece of evidence against Adam’s reliability. But it is just as legitimate as evidence against your reliability. So we are still stuck with the kind of epistemic symmetry only the Equal Weight View can accommodate.’58 It is important to see that this line of thought is confused.
What precisely is your reason for demoting Adam, or for revising your view of his reliability? Crucially, I now want to argue, your reason for changing your mind — your epistemic reason to demote Adam, the feature of the circumstances that in your mind makes demoting him the epistemically appropriate response — is not that he believes not-p whereas you believe p. Had this been your reason for demoting Adam, Christensen would have been right, and the symmetry preserved, for this piece of evidence counts equally against Adam’s reliability and against yours. Rather, your reason for demoting Adam — the feature of the circumstances that in your mind justifies demoting him — is that he believes not-p whereas p. The epistemically relevant feature of his belief that not-p is not that it differs from yours, but rather that it is false. To see that this is the feature of the situation you take to be of normative epistemic significance — what your reason is for changing your mind about Adam’s reliability — we can use the following counterfactual test: imagine a possible situation in which Adam truly believes not-p, and you are wrong in believing p; do you — as you actually are, thinking about this counterfactual situation — take this to be reason to decrease Adam’s reliability? Surely not. Now imagine a situation in which Adam falsely believes not-p, and you agree with him; do you — as you actually are, thinking about this counterfactual situation — take this to be reason to decrease Adam’s reliability? Of course. What this counterfactual test shows, then, is that what you take to be the epistemically relevant feature of the situation is that Adam is wrong, not that Adam and you differ. True, it is your own judgement that is expressed by the claim that Adam is wrong, that his belief is false.
We can put this by saying that your reason to change your mind about Adam’s reliability is — together with his belief that not-p — not that you believe that p, but rather that p (as you believe).59 But to insist that the ‘as you believe’ qualifier rules out that p as a reason for belief is precisely to ignore the ineliminability point, and to insist on the impossibly high standard that leads to scepticism more generally. Let us not do that, then. Your reason to change your mind about Adam’s reliability is that p (not that you believe that p). And this epistemic reason — namely, that Adam is wrong — is not at all symmetrical (for you take him to be wrong, but you do not take yourself to be wrong). So Christensen’s suggestion here fails.
This point — that often one’s reason for belief in this sense is that p rather than that one believes that p — is easily and often missed, so let me spend some more time explaining it. Talk of reasons is, of course, dangerously ambiguous. When claiming that your reason for demoting Adam is that Adam is wrong regarding p (rather than that you two differ regarding p) I do not mean that your motivating reason — what causally leads you to demote him — is that he is wrong; my point is a normative, not a causal one. But nor do I mean that that p is a normative, or a justifying reason for your belief, because, after all, even if you are wrong about p (and so, that p cannot justify anything) still your reason for demoting Adam is that he is wrong, not that you think he is wrong (as is evidenced by the counterfactual test). The reasons that are relevant here are your reasons, in the sense of what you take to be the relevant normative reasons, the features of the circumstances that in your mind epistemically justify the relevant response.60 This is consistent, of course, with them not being genuine reasons at all — if and only if you are wrong in what you take to be the relevant normative reasons. The observation that your reason for demoting Adam — in this sense — is that he is wrong (as you believe), rather than that he and you differ, should be fairly uncontroversial, as the counterfactual test above shows. In particular, I do not need to take sides in the controversy over whether it is only true propositions that can be reasons for belief (in the more straightforward normative sense), or whether false beliefs (or their content) can also qualify. However we go on that question, when it comes to your reasons, or what you take to be the epistemically relevant features of the circumstances, it is quite clear — as the counterfactual test above shows — that your reason is that Adam is wrong, not that you believe that he is, or that the two of you differ.
The case of explanations is precisely analogous. Suppose that I offer the following explanation of the collapse of the Soviet Union: the Soviet Union collapsed, I say, because it was politically unjust. We can apply the counterfactual test (Would the Soviet Union have collapsed — you can ask me — had it remained unjust but had you believed that it was just? Would it have collapsed had it not been unjust, but had you continued to believe that it was unjust?) to determine that what I take to explain the collapse of the Soviet Union — what I take to be the explanatorily relevant features of the circumstances — is not that I believe that it was unjust, but rather that it was unjust. Of course, it is my own judgement about the injustices in the Soviet Union that I am here expressing. We can put this by saying that what explains (I think) the collapse of the Soviet Union is not that I believe that it was unjust, but rather that it was unjust (as I believe). And here too, the relevant sense of ‘explanation’ is not motivating or causal, nor is it the factive sense of explanation (after all, the counterfactual test yields the same result even if I am wrong, and the Soviet Union was not in fact unjust). Rather, what we are talking about here is what I take to be the explanatorily relevant feature of the situation — and that is that the Soviet Union was unjust, not that I believe that it was.
Your reason for demoting Adam, then, is that he is wrong (as you believe). And this reason is not factive — this can be your reason (what you take to be the normatively relevant feature of the circumstances) even if in fact Adam is not wrong. This means that Adam can likewise demote you, and his reason (in the same sense) for doing so is that you are wrong (as he believes). So in this way, something of the symmetry remains: but this is precisely as it should be. For we already know — from the discussion of the Right Reasons View — that the appropriate epistemic response to peer disagreement cannot fully depend on who is right. What the discussion in this section establishes, then, is that whether you are right or wrong about p, you can take that p as legitimate evidence against Adam’s reliability (and he can take that not-p as legitimate evidence against yours). And what this means is that — whether you are right or wrong — the disagreement itself can be sufficient reason to demote Adam from his peerhood status.
Now, in the explanatory case, if your relevant belief (that the Soviet Union was unjust in a way that led to instability) is false, we may want to say that your suggested explanation is no explanation at all. Explanations seem to be factive in this way (whether this is unqualifiedly true is not something I need to comment on here). But if we are not in the business of explaining the collapse of the Soviet Union, but rather in the business of understanding your explanatory commitments, still the (purported) injustice of the Soviet Union is relevant, even if in fact the Soviet Union was not unjust. And Christensen’s reply (discussed throughout this section) is analogous precisely to the business of understanding your explanatory commitments. This is so, because Christensen’s reply — and the Equal Weight View more generally — derive whatever interest they have from the fact that the prior conditional probabilities they employ (that you or your interlocutor would be right in a case of disagreement) are your prior conditional probabilities, conditional probabilities you are (or should be) committed to. The point seems to be that given that you (perhaps justifiably) take Adam to be your peer, there is some incoherence in your credences if you refuse to give equal weight to your and Adam’s views in a case of disagreement. And Christensen’s claim — that the disagreement can serve as reason for demoting Adam only if it can equally serve as reason for demoting you — is initially interesting precisely because it seems to flesh out an implication of one of your relevant commitments (namely, that, antecedently to this disagreement, Adam is your peer). You seem to be committed to this symmetry, and so you seem to be committed to Christensen’s reply. So if you asymmetrically demote Adam, the thought seems to be, there is a tension — perhaps an incoherence — within your own commitments.
But it is the upshot of the discussion in this section that no such tension exists. This is so, because your own reason for asymmetrically demoting Adam — the feature you take to epistemically justify doing so — does not violate the symmetry to which you are committed. You, after all, are committed (to an extent) to the symmetry between your own views and Adam’s. You are not committed to a symmetry between p and not-p, when you take p to be true. So given that your reason — in the sense specified above — for demoting Adam is that p (as you believe), and not that you believe that p, you are not at all committed to demoting yourself in a similar way. And notice that this point — the point about the absence of tension within your commitments — holds whether or not p is true (as you believe). If p is false, then you are wrong — you were, after all, committed to p’s truth. But the epistemic possibility of p’s falsehood (which comes down to the fact that you rightly take yourself to be fallible) does not suffice to save Christensen’s reply. Your reason — in the specified, non-factive sense — for demoting Adam is that he believes not-p whereas p, and your commitment to this reason in no way commits you to equally demote yourself.
Of course, none of this applies to thermometers, or indeed to (mere) truthometers. If Adam and Tom disagree, and you think of them as equally reliable truthometers, then you should not take the disagreement itself as any asymmetrical evidence about, say, Adam’s reliability. But this is precisely where a disagreement in which you are one of the disagreeing parties is different (to you). For you cannot, do not, and are not epistemically required to treat yourself merely as a truthometer.
There is thus no general reason to rule out the disagreement itself as (asymmetrical) evidence61 against your interlocutor’s reliability. And this means that the Equal Weight View is false.
8. But is it the Extra Weight View?
If I am right, then, you should, in a sense, treat yourself differently from others, even when you take them to be in general just as good truthometers as you are. You should, in a sense, treat a disagreement between you and Adam differently from a disagreement between Tom and Adam.62 But this gives rise to the worry that underneath the not-just-a-truthometer rhetoric hides the Extra Weight View, the view according to which you should, in cases of disagreement, give extra weight to your view simply because, well, it is your view. But this view seems objectionable right off the bat — the epistemological analogue of chauvinism, or perhaps nepotism.
I agree that it is unreasonable to give your own view extra weight simply because it is yours (when Adam is just as reliable on these matters as you are). That it is yours seems epistemically irrelevant — just as the fact that one of two ‘disagreeing’ thermometers is yours is epistemically irrelevant. But I do not think that refusing to treat yourself as a truthometer entails the Extra Weight View.
To see why, return to what I had to say on the question whether you should treat the disagreement itself as a reason for demoting your interlocutor. There I insisted that your reason for demoting him was not that you believed that p but rather that p (as you believed). Had your reason for demoting been that you believed that p, then refusing to take that he believes that not-p as equally strong evidence for demoting yourself would indeed amount to epistemic chauvinism. But this is precisely not what I suggested. Taking that p as a reason for demoting your interlocutor is not chauvinistic in the same way. Similarly, and more generally, your reason for not ‘splitting the difference’ in cases of peer disagreement is not that your view counts for more because it is your view. Rather, it is that the credence you end up with seems (to you) best supported by the non-chauvinistic evidence.63
A worry remains. Even if on my view your reason for believing as you do is not that one of the views is your view, still my suggestion recommends an epistemic policy (that of not treating oneself merely as a truthometer) which will in fact result in your (initial) view affecting the credences you end up with more than others’ views do. Indeed, the consequences of the not-treating-yourself-merely-as-a-truthometer strategy will be precisely similar to those of (one version of) the Extra Weight View. Is this not bad enough?
Yes, my suggested strategy will end up recommending epistemic consequences similar to those of the Extra Weight View. But no, this is not bad enough. Let me rely here on a kind of epistemic analogue of the intending–foreseeing distinction. By refusing to treat yourself merely as one truthometer among many, you can foresee that your view will in effect have been given extra weight. But you do not thereby intend to give your view extra weight.64 The distinction between intentionally giving one’s view extra weight on one side, and refusing to treat oneself merely as a truthometer while foreseeing that one’s view will in effect be given extra weight on the other side, seems to me to be normatively relevant.65 The former is objectionable. The latter is not, perhaps at least partly because it is inevitable.66
You may object, though, along the following lines:67 suppose that Tweedle Dee follows my (vague) instructions as to how to update his beliefs in a case of peer disagreement. Tweedle Dum, on the other hand, follows the Extra Weight View. And of course, all other things are equal between them. Then after updating their beliefs in a case of peer disagreement (with Adam) about p, Tweedle Dee’s and Tweedle Dum’s degrees of belief in p will be identical. Tweedle Dum, I insist, is not epistemically justified. Does it not follow, then, that neither is Tweedle Dee? There is, after all, no difference between them when it comes to evidence, or to past track record and reliability. Well, there may be no difference in the evidence available to them. But there is a difference in the evidence they use, or in their reason for having a certain degree of belief in p. It is, after all, a part of Tweedle Dum’s reason for (degree of) belief that his view should count for more, and this makes his degree of belief unjustified, even if it is the same degree of belief reached by Tweedle Dee based on only legitimate considerations (his reason for his degree of belief, remember, is not based on according extra weight to his own view). And we know that in general, whether you are justified in believing p may depend on your epistemic history. If you inferred p by inference to the best explanation from the epistemically justified q and r, your believing p may very well be justified. If you believe p on astrological grounds, you are not justified in so believing. The justificatory status of your believing p thus depends — among other factors, no doubt — on what your reasons for believing p were. So there is nothing problematic or ad-hoc-ish about proclaiming Tweedle Dee justified and Tweedle Dum unjustified: their epistemic histories are very different, different in a way that makes a justificatory difference.
The point is again analogous to one from the moral discussions of the intending–foreseeing distinction: if this is indeed a morally relevant distinction, then it is possible that two people will perform the same bodily movement, with the same known consequences, where the action of one will be morally permissible (because, say, a certain harm is merely foreseen) and the other’s impermissible (because the same harm is intended). Analogously, then, it is possible that Tweedle Dee is justified in his degree of belief (because what he takes to be reasons for belief really are reasons for belief) and Tweedle Dum is not (because his reason for belief is that his belief should count for more, and this is a poor reason for belief in his situation), even though both reach the same degree of belief.
This is a bit sketchy, of course. And it is not as if in the practical domain the normative significance of the intending–foreseeing distinction is uncontroversial.68 Let me just note here, then, that to me it seems intuitively plausible that in the epistemological case — much more clearly than in the moral case69 — the intending–foreseeing distinction (or something close to it) is of normative significance: it just does seem to make a difference — regarding whether or not a belief of yours is justified — what your reason for your belief (in the sense above) is. So refusing to treat oneself merely as a truthometer does not amount to an endorsement of the Extra Weight View.70
Before concluding, let me address one initially powerful objection to the Extra Weight View, an objection that my emerging view is also subject to.
The objection comes from Elga (2007, pp. 486–8), and it is that of bootstrapping:71 assume for reductio that in cases of disagreement you should give more weight to your own view than to Adam’s. If so, you are justified (to an extent, at least) in believing that you were right here and Adam wrong. But if so, you can take this very fact as at least some evidence that you are more reliable on these matters than Adam. So next time you can assign even more weight to your opinion over Adam’s. And so on. But that the view you take to be right nicely fits with, well, the view you take to be right is no evidence at all for your reliability. So the Extra Weight View is false.
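Elga’s regress can be made concrete with a small simulation. Everything below is my own toy rendering, not Elga’s formalism, and the numbers are purely illustrative: if each disagreement you resolve in your own favour is then entered into your track record as a success, your estimated advantage over Adam inflates without any genuine evidence coming in.

```python
# Toy simulation of the bootstrapping worry (my construction, not Elga's):
# score each disagreement as a 'win' merely because your own weighted verdict
# favoured you, then feed that score back into your reliability estimate.

def bootstrap_weights(rounds):
    """Track pseudo-counts of wins and losses in disagreements with Adam,
    starting from a neutral estimate of 0.5, and return the estimated
    weight of your own view after each round."""
    wins, losses = 1.0, 1.0
    history = []
    for _ in range(rounds):
        # The illicit step: you conclude you were right (because your view
        # got extra weight) and record that conclusion as a track-record win.
        wins += 1.0
        history.append(wins / (wins + losses))
    return history

weights = bootstrap_weights(10)
```

After ten such disagreements the estimated weight has climbed from a neutral 0.5 to about 0.92, even though nothing but the disputed verdicts themselves has been consulted. That the view you take to be right fits with the view you take to be right is, as Elga says, no evidence at all.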
Now, in the previous section I insisted that the not-merely-a-truthometer strategy does not commit me to the Extra Weight View. But note that this will not get me off the bootstrapping hook. For this objection applies even if I just foresee that by employing the strategy my view will in effect be given extra weight — nothing here depends on my intentionally giving my view extra weight.72 So the objection applies.
I think Elga is right that the Extra Weight View opens the door for such bootstrapping. But I think this is a result we are going to have to learn to live with.
Remember, at this point we already know that the interesting question about peer disagreement is how to proceed given disagreement with someone we justifiably take to be our peer, and that therefore the question is not divorced from the question of how to justify judgements about others’ (and one’s own) reliability. We know, in other words, that here we are going to have to utilize pretty much everything that is epistemically available to us, including our judgements about past track records, both of ourselves and of others. And we also know that if scepticism is to be avoided, it cannot count conclusively against the justification of so doing that we are going to do all of this from the starting point of our own initial beliefs and epistemic dispositions. So we know that bootstrapping cannot be ruled out from the start.
In this way, the relation between the Equal Weight View and scepticism is actually more intimate than is often noticed. The point is often made that if the Equal Weight View is true, we may be getting closer to scepticism because we must reduce our confidence in many of our (controversial) beliefs, perhaps to the point of suspension of judgement.73 And, of course, if some sceptics walk among us as peers, the route from the Equal Weight View to scepticism may be quicker still. But thinking about bootstrapping shows a deeper (and non-contingent) connection between the Equal Weight View and scepticism. Some of the assumptions needed to make the Equal Weight View plausible underlie, if pursued consistently, some (more or less) traditional sceptical worries. The bootstrapping worry is after all — as Elga (2007, p. 488) notes — if not a particular instance then a close analogue of a very general worry (the one sometimes referred to as ‘the problem of easy knowledge’ — see Cohen 2002). And the underlying thought that we are not entitled to trust our own epistemic abilities to a degree greater than that their track-record calls for seems a close relative of the claim that we are not entitled to employ a belief-forming method without first having an independently justified belief in its reliability.74 But this thought, of course, naturally leads to scepticism. If I am right here, and if the Equal Weight View ultimately rests on assumptions that naturally lead to scepticism, it follows that the Equal Weight View is — even worse than being false — quite uninteresting.
Now, unfortunately I do not know what to say about the problem of easy knowledge more generally, or about related sceptical worries.75 Perhaps there is a general way to avoid bootstrapping and easy knowledge. If so, we can safely expect such a general way to apply to the case of Elga’s bootstrapping objection as well.76 Or perhaps we are going to have to live with the possibility of some forms of bootstrapping.77 If so, biting the bullet on Elga’s bootstrapping objection should not be unacceptable — though, again, I would have loved to be able to say more here, and in particular to say when something like bootstrapping is acceptable and when it is not; filling in the details here will depend on the general way of dealing with the problem of easy knowledge.78 Or perhaps bootstrapping and easy knowledge are unacceptable, and cannot be avoided short of scepticism. In such a case scepticism is the way to go, and Elga’s bootstrapping argument — together with the whole topic of peer disagreement — is uninteresting. But even without having more to say, placing Elga’s bootstrapping objection in the context of the larger sceptical problem of which it is an instance is not without value, for it shows that in all likelihood, there is no special problem here for the not-merely-a-truthometer strategy. And so even if I do not know how exactly to solve it, I think I can be reasonably confident that (if scepticism can be avoided) it can be solved.
If my arguments work, then, the Equal Weight View is false. So are the I Don’t Care View, the Right Reasons View, the Total Evidence View, and the Extra Weight View. Is there anything more positive, then, that follows from my arguments? What is the right thing to say about the way to take peer disagreement into account?
Because we apparently need a name for it, let me call the view emerging here the Common Sense View. According to this Common Sense View, that someone you (justifiably) take to be your peer disagrees with you about p should usually reduce your confidence in p. It is among your relevant evidence regarding p, and in most cases it would be foolish to ignore it. That you yourself believe p, however, will hardly ever be (for you) relevant evidence regarding p.79 (I am not sure, but this may be another point of departure from the Total Evidence View.) Also, that someone believes not-p when p is true (as you believe) will usually be some evidence against her reliability on matters such as p. Often, then, in cases of peer disagreement your way of maintaining (or regaining) probabilistic coherence will be by simultaneously reducing your confidence in the controversial claim and in the reliability of both you and your (supposed) peer, though reducing it more sharply regarding your (supposed) peer. And notice that on this view — unlike on the Right Reasons View, and perhaps also unlike the Total Evidence View — the disagreement itself is usually evidence against the controversial proposition for both disagreeing peers. In the face of what seems to be peer disagreement, we should all lower our confidence, though not as much as the Equal Weight View would have us do.
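This coherence point can be made vivid with a toy Bayesian sketch. The model and all the numbers below are my own illustrative assumptions, not anything the text is committed to: the peer’s assertion of not-p is treated as a signal whose likelihood depends on his unknown reliability, so that a single update lowers your credence in p (without splitting the difference) while also lowering your estimate of his reliability.

```python
# Toy model (illustrative assumptions only): you start fairly confident in p,
# and you antecedently take Adam to be probably highly reliable. Learning
# that he asserts not-p is evidence both against p and against his
# reliability, in one coherent update.

def update_on_disagreement(prior_p, reliabilities, prior_r):
    """Jointly update credence in p and in Adam's reliability R after
    observing him assert not-p. 'reliabilities' and 'prior_r' give a
    discrete prior over R; p and R are taken to be independent beforehand."""
    joint = {}
    for r, pr in zip(reliabilities, prior_r):
        # If p is true, Adam asserts not-p only by erring: likelihood 1 - r.
        joint[(True, r)] = prior_p * pr * (1 - r)
        # If p is false, he asserts not-p by getting it right: likelihood r.
        joint[(False, r)] = (1 - prior_p) * pr * r
    total = sum(joint.values())
    post_p = sum(v for (p_true, _), v in joint.items() if p_true) / total
    post_rel = sum(r * v for (_, r), v in joint.items()) / total
    return post_p, post_rel

# Hypothetical numbers: credence 0.9 in p; Adam is 90%-reliable with prior
# probability 0.8, else 60%-reliable (prior expected reliability 0.84).
post_p, post_rel = update_on_disagreement(0.9, [0.9, 0.6], [0.8, 0.2])
```

On these numbers the credence in p falls from 0.9 to roughly 0.63 rather than to 0.5, and Adam’s expected reliability falls from 0.84 to roughly 0.79: confidence in the controversial claim and in the (supposed) peer drop together, which is just the pattern of simultaneous reduction described above.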
The response to peer disagreement recommended by the Common Sense View, then, is symmetrical, at least in outline and in most cases. But the final justified degree of belief is not at all symmetrical: if you have responded correctly to the (first-stage) evidence regarding p and Adam has not, then you should both reduce your confidence once you learn about the peer disagreement between you two. But it is not as if you should now both have the same degree of belief in p, nor does it follow that if you do not (if you still tend towards believing p and Adam towards believing not-p) then both of you are equally (un)justified in your respective degrees of belief. But this, of course, is precisely as it should be. For it is another one of the oddities of the Equal Weight View that it goes the other way here. On the Equal Weight View, if you have responded correctly to the first-stage evidence and Adam has not, and then, facing peer disagreement, you both ‘split the difference’ and suspend judgement, you are both equally justified. But why expect such symmetry, given that you have responded correctly to the first-stage evidence, and Adam has not? The compelling symmetry-related idea is that both peers should respond similarly to the disagreement itself, not that both should end up (after so doing) with the same degrees of belief or the same epistemic status for the degree of belief they do in fact have. The Common Sense View, then, captures the compelling idea about the symmetrical response to the disagreement itself (unlike the Right Reasons View, and perhaps also unlike the Total Evidence View), without neglecting the significance of the asymmetrical nature of the first-stage evidence (as the Equal Weight View does).
How much weight, then, should you give peer disagreement in revising your degree of belief in the relevant controversial claim? The Common Sense View has nothing interesting and general to say in reply. Indeed, it asserts that there is no interesting general reply that can be given here.80 For the answer depends on too many factors that differ from one case of peer disagreement to another. Depending on other things you (justifiably) believe, on other evidence you have, on the epistemic methods you are justified in employing, on the (perhaps known) track records of both you and Adam, for some ps in some circumstances you should reduce your confidence in p more, for others less. For some, you should take the disagreement as reason to demote Adam more significantly from the status of a peer, for others less. Indeed, perhaps there are even circumstances in which you should accord no weight at all to Adam’s view, and not reduce your confidence in p. And perhaps sometimes you should split the difference, as the Equal Weight View requires.81 And — to return to a point that was set aside very early on — perhaps at times the right thing to do in the face of peer disagreement is to reduce your confidence in the claim that all evidence is indeed shared between you and your peer.82 Also, perhaps there are some more pragmatic, compromise strategies, the availability of which depends on yet further features of the specific case of disagreement — for instance, in some cases, but not in others, the epistemically right thing to do would be temporarily to suspend judgement while going through — perhaps together — the considerations that led each party to his or her belief. Perhaps the arithmetical calculation example is of this kind. But perhaps the philosophical one is not.
But the central point here is that there is no strategy — none, that is, that is more specific than the strategy of believing what is best supported by your evidence — that is in general justified. There is no general and more informative answer to the question ‘How should we proceed epistemically when we encounter peer disagreement?’, any more than there are more general or more informative answers to questions like ‘How should we proceed epistemically when we encounter circumstances in which we are only partly reliable?’ or ‘How should we proceed epistemically in cases in which some of our judgements are in tension with each other?’ or ‘How should we proceed epistemically with regard to p when someone tells us that p?’ All of these questions radically underdescribe the epistemically relevant features of the circumstances, and so none of these questions can be answered in a general and very informative way. Peer disagreement is not special in this regard.
In this way, then, the Common Sense View — my view — is rather messy, and offers less than you may have hoped for. But it still seems like the most we can sensibly say about peer disagreement.83