Abstract

How should you update your (degrees of) belief about a proposition when you find out that someone else — as reliable as you are in these matters — disagrees with you about its truth value? There are now several different answers to this question — the question of ‘peer disagreement’ — in the literature, but none, I think, is plausible. Even more importantly, none of the answers in the literature places the peer-disagreement debate in its natural place among the most general traditional concerns of normative epistemology. In this paper I try to do better. I start by emphasizing how we cannot and should not treat ourselves as ‘truthometers’ — merely devices with a certain probability of tracking the truth. I argue that the truthometer view is the main motivation for the Equal Weight View in the context of peer disagreement. With this fact in mind, the discussion of peer disagreement becomes more complicated, sensitive to the justification of the relevant background degrees of belief (including the conditional ones), and to some of the most general points that arise in the context of discussions of scepticism. I argue that thus understood, peer disagreement is less special as an epistemic phenomenon than may be thought, and so that there is very little by way of positive theory that we can give about peer disagreement in general.

1. The question, and some preliminaries

Suppose you trust someone — call him Adam — to be your epistemic peer with regard to a certain topic, for instance philosophy. If asked to evaluate the probability of you giving a correct answer to an unspecified philosophical question and the probability of Adam doing so, you give roughly the same answer. You treat Adam as your philosophical peer (and for now we can safely assume that he is indeed your peer, and that you are justified in so treating him). You then find out that you disagree with Adam about a given philosophical question — for some philosophical p, you believe p, and Adam believes not-p. How should you update your belief with regard to p given this further evidence (Adam’s view regarding p)? Should you be less confident now in p than you were before finding out about Adam’s view? If so, how much less confident?

One natural reply is the Equal Weight View, according to which you should give equal weight to your belief and to that of the one you take to be your peer, and so in our case suspend judgement about p. Here, for instance, is Adam Elga’s official presentation of the view of which the Equal Weight View is a particular instance:1

Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement.2 (Elga 2007, p. 490)

If you treat Adam as your peer prior to the disagreement, your prior conditional probability that you would be right in a case of disagreement is .5. And this, according to Elga’s view, should be your probability that you are right when the disagreement enters the scene. That is, you should suspend judgement about p. It cannot seriously be denied, I think, that the Equal Weight View has considerable appeal (more on this appeal shortly).

The Equal Weight View, however, seems to give rise to highly implausible consequences. Perhaps chief among them is the one Elga (2007, p. 484) calls ‘spinelessness’: for it seems to follow from the Equal Weight View, in conjunction with plausible assumptions about the extent of disagreement among those you (even justifiably) take to be your philosophical peers, that you should be far less confident in your philosophical views than you actually are, indeed perhaps to the point of suspension of judgement. And it follows from the Equal Weight View, in conjunction with plausible assumptions about the extent of disagreement among those you take to be your moral and political peers, that you should be far less confident in your moral and political views, perhaps to the point of suspension of judgement. And so on. If the Equal Weight View does entail the requirement to be epistemically spineless, this seems to count heavily against it. But what would be an acceptable alternative view? The Extra Weight View — according to which, roughly, the fact that one of the two competing views is mine gives me reason to prefer it — seems just as suspicious, perhaps the epistemic analogue of some kind of chauvinism.

What should we say, then, about cases of peer disagreement? Given the fact that Adam rejects a philosophical p, and that you (even justifiably, and rightly) take him to be your philosophical peer, how if at all should you revise your degree of belief in p?

Before proceeding, though, we need to get some preliminaries out of our way. First, our question is of course entirely normative. The question is how we should revise our degrees of belief given peer disagreement, not the psychological question of how we in fact respond to such disagreement. (Psychological questions may be relevant to the normative one, but such relevance has to be argued for.)

Second, the phenomenon of disagreement is sometimes used in arguments that are supposed to establish a metaphysical rather than an epistemological conclusion. In ethics, for instance, there are many attempts to show that the phenomenon of moral disagreement supports some less-than-fully-realist metaethical view. I am highly sceptical of such arguments,3 but we can safely ignore all this here. Our concern here is with cases in which some metaphysical non-factualism, or relativism of some sort, is just not a relevant option (perhaps because we have strong independent reasons to rule it out). Our question, then, is entirely epistemological.4

Third, I will put things in terms of degrees of belief rather than all-or-nothing belief. In this I follow most of the literature focusing on peer disagreement (though Feldman (2006) conducts his discussion in terms of all-or-nothing beliefs). Indeed, as Kelly (forthcoming, p. 6) notes, there is in our context special reason to focus (at least initially) on degrees of belief. The point relevant to my concerns here is that it is quite natural to ask to what degree our confidence in a belief should be sensitive to peer disagreement. I suspect that much of what I am about to say can be applied, suitably modified, to all-or-nothing beliefs as well, but I will not do so here. When I speak of beliefs as if they were all-or-nothing below, then, I do this just as shorthand for degrees of belief.

Fourth, by your ‘peer’ I will understand someone who is, somewhat roughly, antecedently as likely as you are to get things right (on matters of the relevant kind). This may be due to the fact that she is as smart, rational, sensitive, imaginative, etc. as you are. But whether this is so is not to the point here — what is relevant here is just that she is (and is taken by you to be) as likely as you are to get things right.5 Notice also that your taking Adam to be your peer amounts to your having some positive attitude — a belief, perhaps, or a conditional probability — to the effect that he is your peer. The absence of an attitude — your failing to have a belief that he is more likely than you are to get things right, and your failing to have a belief that he is less likely than you are to get things right — does not suffice for your taking him to be your peer, in the sense that I will (following the literature) be interested in.6

Fifth, and again following the literature here, I will focus on cases where the disagreeing peers share all the relevant evidence, and indeed where this very fact is a matter of common knowledge between them. Typical examples include a simple arithmetical calculation (what evidence could anyone possibly lack here?), philosophical debates where all concerned know of all the relevant arguments, and perhaps also moral debates of a similar nature.7 Such a restriction can simplify matters (you do not have to worry, for instance, about the possibility that Adam’s disagreeing with you is some evidence that there is further evidence — evidence you lack — that not-p), and as the examples above show, this restriction does not make things unrealistically simple. Nevertheless, given the nature of the debate over peer disagreement, and its general epistemological context — namely, that of considering evidence for one’s own fallibility in general, and for a specific error one is making in particular8 — this simplifying assumption is not unproblematic: we are, after all, no less fallible with regard to the question whether our peer has some evidence we lack than with regard to any other relevant judgement. Ideally, one would want an answer to our question (how if at all to revise one’s beliefs given peer disagreement) without relying on such an assumption. Again, I suspect much of what I say below can be applied to the more generally realistic cases as well, but I will not argue the point here, and will for the most part ignore this complication (though I return to it briefly in the final section below).

Sixth, our epistemological question is a rather focused one. The question is not what you should — all things considered — believe regarding p. The question is, rather, what pro tanto epistemic reason is given to you — if any — by the disagreement with Adam; whether, in other words, the disagreement itself gives you epistemic reason to lower your confidence in p, and by how much. As will become clear later on (in discussing Kelly’s relevant views), this distinction is not without importance.

Seventh, I will be assuming that for any given state of evidence, there is a unique degree of belief that it warrants. I will, in other words, assume the Uniqueness Thesis (see Feldman 2007), that is, that there is no epistemic permissiveness. This is not because I am convinced that the Uniqueness Thesis is true.9 Rather, I think that what is interesting about peer disagreement does not depend on what we end up saying about epistemic permissiveness.10 Be that as it may, the discussion that follows assumes Uniqueness.11

Finally, the question with which I will be concerned here is not practical in any straightforward sense. I will be discussing the relevance of peer disagreement to epistemic, not pragmatic, justification. Christensen (2007, p. 215) is right, then, when insisting that even if we can show that (say) philosophical discussion is best promoted if disagreeing peers stand their respective grounds (in a kind of efficient marketplace-of-arguments), still nothing follows from this with regard to the fate of the Equal Weight View. Just as importantly, though, Christensen (2007, p. 204) is wrong in (partly) relying on intuitive judgements about what should be done in cases of disagreement (his most powerful example is that of disagreement between physicians about a possible treatment). The considerations relevant to answering such practical questions are presumably varied, and they include more than just the purely epistemological ones in which we are interested here.12 Of course, the epistemic considerations may be relevant to these practical questions as well, and so such practical examples need not be entirely irrelevant. But their relevance can at best be indirect, and needs argumentative support (like the claim that what best explains some practical judgement is some epistemic one). Now, from time to time I will myself resort to analogies with more practical questions, but the analogies will hold (or so I shall claim) on a much more abstract level.

2. The truthometer view (or: more on the appeal of the Equal Weight View)

Suppose you have two thermometers in the reliability of which you (justifiably) have equal trust.13 On a specific occasion you want to know the temperature outside, and you use both thermometers, which give different readings, say one indicating it is 65 degrees Fahrenheit and the other 70. You have, let us assume, no further evidence on the matter, and in particular it does not ‘feel’ to you more like 70 than like 65, or the other way around. What should you believe about the temperature? Presumably, you have no (overall) reason to believe it is 65 degrees rather than 70, or 70 rather than 65. (You may be justified in believing it is either 65 or 70, or perhaps between 65 and 70, or perhaps between 66 and 69, or between 62 and 73, but none of this is relevant for our purposes.) Your (justified) prior probabilities that each thermometer would be right (conditional on everything you have learned about the circumstances of their ‘disagreement’) are the same for both, and so you are no more justified in relying on one than on the other. It goes without saying that none of this changes if you first find out about the reading of just one of the thermometers, form your belief accordingly, and only then find out about the reading of the other. In such a case you should, upon finding out about the other reading, update your beliefs about the temperatures so that symmetry is restored.

Now suppose you have two friends, Adam and Tom. Adam and Tom are mathematicians in whose reliability about mathematical matters you (justifiably) have equal trust. On a specific occasion you want to know whether a given formula is a number-theoretic theorem, and you ask both friends, who give different answers, one saying that it is and the other that it is not. You have, let us assume, no further evidence on the matter, and in particular you do not yourself go through the purported proof (perhaps because it is too complicated for your mathematical abilities). What should you believe about the formula’s purported theoremhood? It seems rather clear that you have no (overall) reason to believe it is a theorem or that it is not one. Your (justified) prior probabilities14 that each mathematician would be right (conditional on everything you have learned about the circumstances of their disagreement) are the same for both, and so you are no more justified in relying on one than on the other. It goes without saying that none of this changes if you first find out about the result of just one of the mathematicians, form your belief accordingly, and only then find out about the result of the other. In such a case you should, upon finding out about the other’s result, update your degrees of belief about the formula’s theoremhood so that symmetry is restored.

In such a case, then, it seems clear that you should treat Adam and Tom as perfectly analogous to thermometers — as truthometers. Whatever else they are (and whatever else they are to you), they are each a mechanism with a certain probability of issuing a true ‘reading’ of theoremhood (or whatever), and the way to take their views into account — the way, that is, to revise your own views given the evidence of theirs — is exactly the way to take the reading of thermometers into account. That the underlying mechanism of your friend-truthometers is somewhat different from that of your thermometers seems neither here nor there.

But, of course, you yourself are — whatever else you are — yet another truthometer of this sort. Just as Adam and Tom — your mathematician friends — have a certain track record with regard to such matters, and just as you have a view on how likely each of them is to be right on such matters, so too you have such a track record, and indeed you have a view on how likely you are to be right on such matters. If your prior probabilities that Adam and Tom would be right are equal, you should give their views (in a case of disagreement) equal weight. Well, what is different in the case of a disagreement between Adam and you, given that your (justified) prior probabilities that Adam and you would be right are equal?15 If you give extra weight to your own view in the case of a disagreement between you and Adam, is this not like giving Tom’s view extra weight in a case of disagreement between Adam and Tom simply because you heard his advice first, or because he is closer to you? (See Feldman 2006, p. 223.) What is so special about your own view, if you take yourself to be just as likely as Adam to be right about such things? If you should treat — for epistemic purposes — Adam and Tom as truthometers, should you not also treat yourself as one?

3. On the ineliminability of the first-person perspective (or: why the truthometer view must be false)

Yes, you should treat yourself as a truthometer, but you should not treat yourself merely as a truthometer.

Here is a first hint at why this is so. Suppose we accept the Equal Weight View. Then, to repeat, ‘upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right.’ But, of course, the prior conditional probability mentioned here is your prior conditional probability. And here too you may be wrong. Indeed, you may have views on how likely it is that your prior conditional probability is right (or that your belief about these probabilities is true),16 and how likely it is that, say, Adam’s prior probability is right. Perhaps, for instance, you think both of you are equally likely to be right about such matters. So if you and Adam differ on the relevant prior conditional probability, the Equal Weight View requires that you give both your views equal weight. But of course what does the work here is your prior conditional probability that you or Adam would be right about prior conditional probabilities. And here too you may have views about how likely you and others are to get it right, but here too this view will be your view, and so on, perhaps ad infinitum.17

Now this is not a vicious regress exactly: for one thing, you may want to apply this kind of reasoning on a case-by-case basis, only, as it were, when you have to, or when the relevant questions arise. And they need not all arise, certainly not simultaneously. But what the previous paragraph does show is that in forming and revising your beliefs, you have a unique and ineliminable role. You cannot treat yourself as just one truthometer among many, because even if you decide to do so, it will be very much you — the full, not merely the one-truthometer-among-many, you — who so decides. So the case in which Adam and Tom differ is after all different — different for you, that is — from the case in which you and Adam differ. The point is naturally put Nagelianly:18 even though from a third-person perspective — which you can take towards yourself — we are all truthometers, still the first-person perspective — from which, when it is your beliefs which are being revised, it is you doing the revisions — is ineliminable.19

It is important to distinguish here between mere seemings and full-blooded beliefs. Suppose that both of us dip our hands in two bowls of water, and that to me one seems warmer than the other, while to you both seem equally warm. In such a case I think there is no problem in treating oneself merely as a truthometer (a thermometer, really, though not a very good one), and so there is no problem — not this problem, anyway20 — in applying the Equal Weight View here. But we should not over-generalize from such cases. In this case the seemings are both unreflective, a-rational immediate seemings. And it seems to me with regard to those we can perfectly happily settle for the third-personal point of view. With regard to such seemings, in other words, and even though they are still very much one’s own seemings, still the first-person perspective is unproblematically eliminable: we can epistemically distance ourselves from our seemings in a way we cannot distance ourselves from our full-blooded rational (as opposed to a-rational, not to irrational) beliefs, those that are based on a reflective consideration of the evidence, those in which the believing self is fully engaged.21 Once you reflect on a question, asking yourself, as it were, what is the truth of the matter, and so what is to be believed — once the believing self is fully engaged — you can no longer eliminate yourself and your reflection in the way apparently called for by the truthometer view. Of course, the distinction between full-blooded beliefs and seemings is not sharp or fully clear. While perceptual cases seem to me to be rather clear seemings-cases, other cases are not as clear. Borderline cases may include (some cases of) reliance on memory, or perhaps cases of more reflective reliance on perception. But however we go regarding such borderline cases, all that is needed for my point here is that there are some paradigmatic cases on both sides of this vague distinction. And cases where the believing, reasoning self is fully engaged, cases in which our response to the evidence is reflective, are different from seemings-cases.22 While the truthometer view may work with regard to the latter, it does not work with regard to the former.23

Thus, neither can we treat our believing selves merely as truthometers, nor (consequently) is it the case that we should. But my point is not just an ought-implies-can kind of point. To see this, consider that some ideals are such that, upon realizing that they cannot be fully or globally applied, they retain their initial appeal, and so we reduce our aspirations from fully applying them to approximating them (for instance, upon realizing that you cannot help all those in need of help, the ideal of helping those in need does not lose its appeal; it is just that you proceed to approximate it to the extent that you can). But other ideals are such that, upon realizing that they cannot be fully or globally applied, they do lose their initial appeal, and so we proceed to question them more generally, we take their lack of full or global applicability to be reason to question them more locally as well. Consider, for instance, the sometimes-given advice to always define the terms you use in asking a question before proceeding to answer it. Upon realizing that this piece of advice cannot be globally adhered to (because of the imminent infinite regress), we do not retreat to the claim that we should always define our terms as much as possible, or some such. Rather, we take the global inapplicability to be reason to reconsider the goodness of the advice even where it can be applied: we take it, in other words, as evidence that the advice was altogether confused. The case of the truthometer view seems to me to be of this latter kind. Once it is clear that the truthometer view’s requirement cannot be universally complied with — at least, that is, if the most radical of scepticisms is to be avoided — this view loses much of its appeal even for the cases in which it can be complied with. With this fact in mind, in other words, it becomes clear that there is some deep confusion underlying the truthometer view, and so that it is not even the kind of impossible ideal that should be aspired to whenever possible. The truthometer view is, quite simply, false. Of course, perhaps we should still, in some circumstances, treat ourselves as merely truthometers (analogously: perhaps we should still define some of our terms, some of the time). But this does not make the truthometer view — as a general view — any more plausible.

What follows from all this for the question of peer disagreement? At this point, not much. In particular, that the truthometer view cannot be true in general does not entail that the Equal Weight View is false. It was, after all, I who argued that the Equal Weight View’s philosophical appeal comes from the truthometer view, and it is open to an adherent of the Equal Weight View to base it on other considerations.24 Furthermore, that the first-person perspective cannot be completely eliminated does not entail that it cannot be eliminated from a focused discussion here or there: we do, after all, have evidence, and also views, regarding our own reliability on many matters, and it would be foolish to ignore them when forming and revising our beliefs. At least sometimes, then, we do — and should — take such a third-person perspective towards our beliefs (certainly of our past and future selves, and perhaps sometimes also of our present selves). It is open to the proponent of the Equal Weight View to argue that this is exactly so in cases of peer disagreement.25 So the ineliminability of the first-person perspective does not — all by itself — spell doom for the Equal Weight View. But it is not without significance here either. For once it is clear that we cannot consistently treat ourselves as truthometers across the board, if it can be shown that there is no more reason to treat ourselves as truthometers in cases of peer disagreement than elsewhere, the Equal Weight View loses, it seems, much of its appeal.

4. An interlude: against Kelly’s Right-Reasons and Total Evidence Views26

I will get back to the Equal Weight View shortly. But let me pause to comment on two related alternative views, both from Thomas Kelly.27 This interlude is justified both because it is of interest in its own right (or so at least I think), and because some of the lessons learned here will prove useful in the following sections.

In his earlier treatment of the issue, Kelly (2005) seemed to flirt with what I will call the I Don’t Care View, according to which the disagreement itself is epistemically irrelevant. If you have carefully considered the evidence and have come to the conclusion that p, then the contingent fact that others differ should have no effect on you. Of course, if you know that someone equally rational (etc.) can understand all your evidence and still believe not-p, this is epistemically important. But what does the work here is not the disagreement, but rather the weakness of the evidence (as witnessed by the possibility of a perfectly rational thinker not being convinced by it). This is why, Kelly (2005, pp. 181 ff.) argues, the actual disagreement (as opposed to possible rational disagreement) is epistemically irrelevant.

I take it even Kelly no longer believes the I Don’t Care View (if he ever did), and so we can be quick here. The problem is not just that the I Don’t Care View yields highly implausible consequences (should you really remain as confident in your calculation even when another differs? When two others differ? When many, many more differ?).28 The deeper problem is that this way of viewing (actual) disagreement ignores the fact that the discussion of peer disagreement is located in the wider context of epistemic imperfection. We are here in the business of taking our own fallibility into account, and peer disagreement may very well be a relevant corrective. True, if we had a god’s eye view of the evidence — infallibly knowing what it supports, and infallibly knowing that we infallibly know that — actual disagreement would be epistemically irrelevant. But we do not, and it is not.29

Kelly (2005) also defends a rather strong asymmetry between the differing peers. Assuming — as we do here — that there is no epistemic permissiveness, at least one of the peers is epistemically malfunctioning on this occasion, not responding to the evidence in the (uniquely) right way. So some asymmetry is already built into the situation of the disagreement. Kelly takes advantage of the opportunity this asymmetry opens up, and argues that the right answer to our question — how to revise one’s degrees of belief given peer disagreement — is different for the two peers. The one who responded rightly to the evidence should do nothing in the face of disagreement. The one who responded wrongly should take the disagreement as (further) reason to revise his degree of belief. But this view — the Right Reasons View — is flawed in more than one way.

First, to repeat, it is highly implausible that peer disagreement is epistemically irrelevant even to the one who responded correctly to the initial evidence.

Second, our question, as you will recall, was the focused one about the epistemic significance of the disagreement itself. The question was not that of the overall epistemic evaluation of the beliefs of the disagreeing peers. Kelly is right, of course, that in terms of overall epistemic evaluation (and barring epistemic permissiveness) no symmetry holds. But from this it does not follow that the significance of the disagreement itself is likewise asymmetrical. Indeed, it is here that the symmetry is so compelling.30 The disagreement itself, after all, plays a role similar to that of an omniscient referee who tells two thinkers ‘one of you is mistaken with regard to p’. It is very hard to believe that the epistemically responsible way to respond to such a referee differs between the two parties. And so it is very hard to believe that the epistemic significance of the disagreement itself is asymmetrical in anything like the way Kelly suggests.

Third, and relatedly, imagine a concerned thinker who asks her friendly neighbourhood epistemologist for advice about the proper way of taking into account peer disagreement. Kelly responds: ‘well, it depends. If you have responded to the initial evidence rationally, do nothing; if you have not, revise your degrees of belief so that they are closer to those of the peer you are in disagreement with.’ But this is very disappointing advice indeed. To be in a position to benefit from this advice, our concerned thinker must know whether she has responded rightly to the initial evidence. But, of course, had she known that, she would not have needed the advice of an epistemologist in the first place.31 Perhaps this is not a conclusive objection to Kelly’s view: it is not, after all, immediately obvious that epistemic truths of the kind at stake here have to be able to play the role of epistemic advice. But at the very least this result places a further burden on the Right Reasons View.

In his more recent treatment of peer disagreement, Kelly (forthcoming) defends a somewhat different view, the Total Evidence View, according to which ‘what it is reasonable to believe [in a case of peer disagreement] depends on both the original, first-order evidence as well as on the higher-order evidence that is afforded by the fact that one’s peers believe as they do’ (Kelly forthcoming, p. 32). Perhaps the appeal of this view is best appreciated through the following point (which I offer here in a somewhat tentative tone): according to the Equal Weight View we are epistemically required to ignore some evidence.32 According to the Equal Weight View, you are required — after having evaluated the evidence, having come to confidently believe p (based on this evidence), and having come to realize that Adam confidently believes not-p — to ‘split the difference’, and update your degree of belief so that it will now be the average of the two initial degrees of belief (yours and Adam’s). The rationale for that is that now your evidence with regard to p consists of the readings of the two truthometers (you and Adam). And unless we are to endorse the I Don’t Care View, we already agree that the truthometers’ readings are indeed relevant evidence with regard to p.33 But where has all the other evidence gone? The Equal Weight View insists not just on the epistemic relevance of the peers’ beliefs, but also that — at this stage at least — their beliefs (or the truthometers’ readings) are the only relevant evidence. As Elga (2007, p. 489) insists, for instance,34 in updating your degrees of belief given the disagreement you are allowed to conditionalize on everything you have learned about the disagreement, except what depends on your initial reasoning to p (indeed, if this point is not insisted on, the Equal Weight View borders on vacuity, a point to which I will return). So the Equal Weight View requires that in the face of peer disagreement we ignore our first-stage evidence altogether. And this does not seem to be a virtue in an epistemological theory. Surely, even if others’ beliefs are relevant evidence, such evidence should be weighed together with all the other evidence we have, should it not? Should we not base our beliefs on the total evidence available to us? And once all the evidence is taken into account, it is not in general true that the disagreement-evidence will always dominate the first-stage evidence.35

There is, I think, something importantly right about this line of thought,36 but as it stands it cannot withstand criticism. Sometimes ignoring evidence is the epistemically right thing to do. Kelly (2005, p. 188) himself gives examples: if one piece of evidence statistically screens off some other piece of evidence, then in considering the former we should ignore the latter on pain of double counting. (Kelly offers the example of an insurance company evaluating the risks involved in a certain person’s driving: if the insurance company has rather precise information about the individual, weighing with it also the most general information — say, based on the person’s age or gender — may amount to such double counting.) But this, after all, is precisely what the proponent of the Equal Weight View should say about the suggestion to still consider — at the second stage — all of the initial evidence. All of this evidence was considered by you in coming to believe p (and by Adam, in coming to believe not-p). If at the second stage we take the fact that you believe p (and that Adam believes not-p) to be evidence, this evidence arguably screens off the evidence that was already taken into account in the first stage. The line of thought suggested in the previous paragraph as motivating the Total Evidence View is thus guilty of double counting. (Again, the obvious way to avoid double counting would be to endorse the I Don’t Care View, but I take it we already have sufficient reason to reject it.)

Furthermore, it is not completely clear whether the Total Evidence View avoids the asymmetrical features that were so troubling in the Right Reasons View. It is clear, of course, that the overall epistemic evaluation of the two disagreeing parties will not be symmetrical, because of the sensitivity of the Total Evidence View to the initial evidence, to which — ex hypothesi — just one of the disagreeing parties responded rationally. But it is unclear whether the disagreement itself has, on this view, different epistemic effects on the two disagreeing parties, depending on who (roughly speaking) got things right.37 To the extent that the Total Evidence View retains the asymmetry present in the Right Reasons View, then, it is vulnerable to the relevant objections mentioned above.38

The Total Evidence View too, then, is not without problems. And though what I end up saying will resemble it in some important ways, we must not forget its problems. That the Total Evidence View (and the Right Reasons View) is so problematic may seem to lend further support to the Equal Weight View, to which I now return.

5. On being a peer, being believed to be a peer, and being justifiably believed to be a peer

Our question — how to revise our beliefs in the face of peer disagreement — is actually ambiguous between at least three readings. It is high time to disambiguate it.

We can ask, first, how to revise our beliefs in the face of disagreement with someone who is in fact our peer (that is, someone who is in fact just as likely as we are to get things right here). Or, second, we can ask how to revise our beliefs in the face of disagreement with someone whom we take to be our peer. Or, third, we can ask how to revise our beliefs in the face of disagreement with someone whom we justifiably take to be our peer.39 So far, I have been assuming that the relevant peer satisfies all these descriptions — Adam is in fact your peer, you believe as much, and furthermore you justifiably believe as much (one is tempted then to say: you know that he is your peer). But it will now prove useful to distinguish between these descriptions.

I will put to one side the question of how to revise one’s beliefs given a disagreement with someone who is in fact — perhaps unbeknownst to one — one’s peer. Though this question may be of some interest — especially, perhaps, to those whose views about epistemic justification are (I would say implausibly) externalist — it is not this question I am primarily interested in (nor is it the question the literature on peer disagreement seems interested in). The more interesting distinction, then, is that between how we should revise our beliefs given a disagreement with someone we take to be our peer, on the one hand, versus how we should revise our beliefs given a disagreement with someone we justifiably take to be our peer, on the other.

Christensen is not clear about this distinction (perhaps because he implicitly restricts the scope of his discussion to those justifiably believed to be peers). But Elga is rather clear on this point, so let us focus on his claims here. Throughout his discussion of peer disagreement, Elga (2007) talks just about what your prior conditional probability is that you (and others) would be right. Nowhere does he speak of what that prior probability should be. Indeed, Elga attempts to conduct the whole discussion while abstracting from questions of precisely that sort: Elga (2007, p. 483) says he has nothing to say (here) about when we should trust whom and to what extent. But this nonchalance is not, I now want to argue, something the Equal Weight View can afford.

If your prior conditional probability that you would be right (on a given topic, in case of disagreement with Adam) is, say, 1, but if you are not justified in having this prior conditional probability that you would be right (say, because your and Adam’s track records on this topic are equally good), then upon finding out about the disagreement with Adam you are most certainly not justified in completely discarding his opinion. In such a case, then, your probability that you are right should not be your probability that you would be right (that is, 1); rather, it should depend — to an extent, at least, even if not completely as the rationale of the Equal Weight View seems to require — on the probability that you should have had that you would be right. The point here is a particular instance of a common (if not entirely uncontroversial) one: that your beliefs that p and that if p then q cannot confer justification on your belief that q (even if it is formed by inference from those two beliefs) unless the beliefs that p and that if p then q are themselves justified. Similarly, updating your degrees of belief according to a prior probability you have cannot render your updated degree of belief justified unless your prior probability is itself justified.40 So we have here a counterexample to (at least Elga’s official presentation of) the Equal Weight View.

Thus, the Equal Weight View should be revised.41 The question ‘How should we revise our beliefs in the face of disagreement with someone we believe to be our peer?’ is problematic, we have just seen, for if (for instance) we unjustifiably refuse to take someone to be our peer, what we should do in the face of disagreement is first come to believe that she is our peer and then treat her epistemically as one. The cleaner question, then, is that of how we should respond to disagreement with those we justifiably take to be our peers.42 And this is the question the revised version of the Equal Weight View43 — which from now on I will just call the Equal Weight View — answers.44

But if this is really the more interesting question, then answering it cannot be isolated in the way Elga wants from questions regarding the justification of trust. My point here is not just that without an answer to this question there is a sense in which the Equal Weight View is incomplete. The problem runs deeper than that, because once this question is raised, it seems to me clear that any plausible answer will undermine the Equal Weight View (even in its revised version). Here is why.

Of the many factors that go into the justification of the degree of trust you have in others, some surely have to do with how often they were right about the relevant matters. Not all — and it is an interesting question what other factors are relevant and how. And perhaps there are cases in which this is not a relevant factor at all. (I am not sure, but perhaps some guru cases, where one completely defers to another (see Elga 2007, p. 479), are of this weird sort.) But in most cases, a significant part of your evidence as to someone’s reliability on some topic is her track record (or that of the relevant set of people of which she is a member) on that topic, that is, how often she got things right, that is, how often she — as you believe — got things right. This is not exactly the same thing as how often she agreed with you. Perhaps, for instance, you now believe you were mistaken at the time, and only with the help of hindsight can you now see that back then she was right (and you wrong). But still, the fact that her view on these things is often (as you now believe) true is certainly a relevant factor in determining how likely she is to be right on the next question, the one about which you differ with her. It would be absurd, after all, to require that in determining the degree of epistemic trust we should accord someone we ignore her track record on the relevant matter (see Kelly 2005, p. 179).

This trivial observation supports three relevant conclusions here. First, the ineliminability point from section 3 is reinforced, for even if you can treat yourself as a truthometer when you just ask what your prior probability is that Adam (or you) would be right, you can no longer do so when you ask what the justified prior probability is that Adam (or you) would be right. Here you can no longer abstract from the question of what you take to be the truth of the relevant matters, that is, from the question of what the truth of the relevant matters is, as far as you can tell.45

Second, recall another point from section 3 above, namely that, given that we cannot universally treat ourselves as truthometers, what we are really looking for — as support for the Equal Weight View — are reasons that are at least somewhat peculiar to the case of peer disagreement. In order to philosophically motivate the Equal Weight View, in other words, its proponent has to show what it is specifically about the context of peer disagreement that makes an application of the truthometer view plausible. But now that we know that the Equal Weight View is better put in terms of one’s justified trust in others, and furthermore that in fleshing out the details of such justification you are not going to be able to treat yourself as a truthometer, it seems highly unlikely that our context is one where the (restricted) truthometer view should be applied.

And third, the fact that what counts is justified trust and not merely trust, together with the trivial observation about how such trust can gain justification, has implications for another key question in the peer-disagreement debate — that of the possible role of the disagreement itself as evidence against counting one’s interlocutor as peer, or as reason to demote him from his status as a peer. It is to this issue that I now turn.

6. Is the disagreement itself reason for demoting?

In the context of finding out how we should respond to peer disagreement, it is a key issue whether the disagreement itself can be sufficient reason to demote your interlocutor from the status of peer. Assume that the answer is ‘yes’. If so, then even if you take Adam to be your peer prior to the whole unpleasant business regarding p, once you find out about the disagreement, you can justifiably demote him from the status of peerhood, and stick to your own judgement about p (after all, that someone who is your epistemic inferior disagrees with you is not a strong reason to change your mind). Or perhaps — still under the assumption that the disagreement itself is reason for demoting — the right thing to do is not to split the difference exactly (as the Equal Weight View seems to require), and not to demote Adam completely and stubbornly stand your ground, but rather to reduce your confidence in p somewhat, and also demote Adam somewhat. But if the disagreement itself is somehow barred from counting as evidence as to Adam’s epistemic status, then the Equal Weight View seems to trivially follow from any plausible conditionalization principle46 (a point I return to below): after all, you believed that he would be just as likely as you are to be right in a case of disagreement, and now we have a case of disagreement, and this disagreement itself is no reason to change your mind about how likely Adam is to be right in such cases, so should you not now believe that it is equally likely that he is right as it is that you are? That, of course, would require endorsing the Equal Weight View.
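To make the near-triviality vivid, here is a rough sketch; the framing is mine, and the numbers are only illustrative. Let D be the proposition that you and Adam disagree about p. Treating Adam as your peer, your prior conditional probability that you (rather than he) would be right in a case of disagreement is .5. If the disagreement itself is barred from bearing on that conditional probability, then upon learning D nothing remains for you to do but conditionalize:

Pr(you are right | D) = .5, and so, having learned D, your new Pr(you are right) = .5.

And a credence of .5 that you are right about p just is suspension of judgement about p, which is the Equal Weight verdict.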

Unsurprisingly, then, philosophers on all sides acknowledge the central role in this debate of the evidential status of the disagreement itself. Kelly’s position here (as articulated at e.g. Kelly forthcoming, p. 54) follows naturally from the Total Evidence View and what motivates it: if we should take all the evidence into account, there does not seem to be any reason to exclude the disagreement itself as relevant evidence too, both for the relevant reliability claims, and for the first-order claims as well.47 Christensen (2007, p. 196) agrees that the disagreement may be evidence that counts against your interlocutor’s reliability, but he insists it counts equally as evidence against your own reliability, so that symmetry is restored (I argue against this move in the next section). And both Christensen and Elga allow, as evidence with regard to your interlocutor’s reliability, information about the disagreement (such as that you feel tired, or that Adam looks a little drunk, or that he apparently did not use the right reasoning procedures in this case, or that you find his conclusion utterly crazy and not just false),48 but they are very clear about disallowing the disagreement itself — the mere fact that Adam believes not-p while you believe p — as (asymmetrical) evidence against Adam’s reliability. Christensen (2007, p. 198) here insists on the reason to demote being independent of the specific disagreement under discussion, and Elga (2007, p. 489) insists that the relevant conditional probability (that you would be right, given disagreement) is prior ‘to your thinking through the disputed issue, and finding out what the advisor [in particular, your peer] thinks of it’.

It is thus (perhaps also) common ground that the fate of the peer-disagreement issue (and in particular, that of the Equal Weight View) is pretty much determined by the answer to the question we are now considering, namely, whether the disagreement itself can count as evidence that your interlocutor is less than fully your peer.49

Surprisingly, though, it is not at all clear how this is reflected in the official versions of the Equal Weight View. Again focus on Elga’s statement of the view of which the Equal Weight View is a particular instance, which reads:

Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement. (Elga 2007, p. 490)

To see the problem, start by noting that your probability that Adam would be right on a certain issue — Pr(R) — need not be the same as your probability that Adam would be right on that issue given that there is a disagreement between you two on it — Pr(R | D). Indeed, often these two should be different. Suppose, for instance, that you think both you and Adam are very good philosophers, and that both of you are highly likely to give the right answer to a philosophical question. And suppose further that you are justified in this trust in Adam and in yourself. Because you are so confident that Adam will get (philosophical) things right, your Pr(R) should be fairly close to 1, as is your probability that you would be right. Should you two disagree about a philosophical question, you would find this fact very surprising. After all, if you are almost always right, and Adam is almost always right, then you two are (almost) almost always in agreement. The surprising disagreement should give you pause. And I take it that given the disagreement, you should now be far less confident that Adam got things right (and similarly for you). And you know all this in advance, of course, so your prior conditional probabilities should reflect this fact. In other words, in this case Pr(R) ≫ Pr(R | D). The supporters of the Equal Weight View will, I am sure, agree.
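A toy calculation may help make the size of the gap vivid. The numbers are mine and purely illustrative, and I pretend, harmlessly for present purposes, that your verdict and Adam’s are probabilistically independent. Suppose each of you answers a philosophical question correctly with probability .95. Since in a case of disagreement exactly one of you is right:

Pr(D) = Pr(you right, Adam wrong) + Pr(you wrong, Adam right) = (.95 × .05) + (.05 × .95) = .095

Pr(R | D) = Pr(Adam right, you wrong) / Pr(D) = .0475 / .095 = .5

So while Pr(R) is .95, Pr(R | D) is only .5. The disagreement is unlikely, and, should it nonetheless occur, it should indeed give you pause.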

When Elga argues that in a case of disagreement your probability that you are right should equal your prior conditional probability that you would be right, he means (and can only mean) Pr(R | D), not Pr(R). Had he meant Pr(R), then in the case described his view would entail — incoherently — that you place almost full confidence both in Adam and in yourself, even in the case in which you believe p and Adam not-p.

So Elga argues that your posterior probability that Adam (or you) are right (Pr_f(R)) after finding out that you two disagree (D) should be the prior conditional probability that Adam (or you) would be right in such a case (Pr(R | D)). If this is what the Equal Weight View comes to, though, it borders on triviality, being just an instance of the most basic conditionalization principle — your posterior probability, having found out a piece of evidence, should equal your prior conditional probability — conditioned, that is, on that same piece of evidence. And indeed, at times it does seem that the Equal Weight View has (almost) nothing interesting, non-trivial to say regarding the question it was supposed to be an answer to — namely, how to take the disagreement of others into account. At one point, for instance, Elga (MS, n. 9) notes that the Equal Weight View is entirely consistent with just having degree of confidence 1 in oneself in all cases, thus completely discarding the views of others.

The only thing standing between the Equal Weight View and vacuity — but also, I now want to argue, the thing that renders it rather clearly false — is the explicit requirement to exclude from one’s conditionalization process the disagreement itself as reason for demoting (and more generally, the first-stage evidence). In Elga’s probabilistic framework, taking the disagreement itself as reason for demoting Adam amounts to taking the disagreement as reason for revising one’s conditional probability that Adam would be right (in a case of disagreement about a proposition of the relevant kind). And Elga — perhaps like other proponents of the Equal Weight View — assumes that this is never rationally permissible. The thought seems to be built into the very structure of conditionalization: if your conditional probability P(p | q) = x, and you find out that q, then your posterior probability P(p) should be x. The conditional probability is taken as given, not something that can be changed in view of new evidence. But why should that be so? In the case of non-probabilistic Modus Ponens arguments, for instance, the point is often made that if you (justifiably) believe if p then q, and then (justifiably) come to believe p, you may have two rationally permissible options: to come to believe q, or to take back your commitment to at least one of the premisses, for instance, to the conditional if p then q. Why not say, then, that when your (justified) conditional probability P(p | q) = x, and you find out q, then either your posterior probability P(p) should be x, or you should revise your prior conditional probability so that now it is P(p | q) = y (where y ≠ x), and then come to have degree of belief P(p) = y?50 In our context, given the disagreement with Adam, why think that your only acceptable way of restoring probabilistic coherence is by according Adam’s view equal weight, rather than by (at least partly) demoting him from his peer status?
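Schematically, the two options just sketched come to this. You start with a justified conditional probability P(p | q) = x, and you then find out that q. The rigid reading of conditionalization permits only the first of the following responses; the analogue of the Modus Ponens point permits the second as well:

(a) keep P(p | q) = x fixed, and come to have P(p) = x;
(b) revise the conditional probability to P(p | q) = y (where y ≠ x), and accordingly come to have P(p) = y.

In our case (this gloss is mine), what is found out is the disagreement itself, the conditional probability at issue is your probability that Adam would be right given such a disagreement, x is .5 if you antecedently treat him as a peer, and option (b), with y below .5, is just the partial demotion that the Equal Weight View rules out.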

There may be larger issues involved — larger than I can hope to address adequately here. It is not clear to me whether the point from the previous paragraph is the beginning of a criticism of this Bayesian way of doing epistemology in general: this may very well just be a particularly powerful instance of what is sometimes called ‘the problem of rigid conditional probabilities’.51 If this is so, then the Equal Weight View may just immediately follow from a highly implausible version of a conditionalization principle (or it may not-so-immediately follow from a more plausible conditionalization principle, together with some implausible auxiliary premisses). But whether this is so or not, the crucial point for my purpose is that hidden here there is a substantive epistemological principle — the one barring revising conditional probabilities given new evidence, or perhaps given the new evidence that the conditional probability is conditioned on, or perhaps more particularly just the one barring revising your attitudes towards Adam’s reliability given the new evidence (that he is wrong this time). It is this principle that underlies the Equal Weight View’s insistence on not taking the disagreement as a reason for demoting the relevant peer.

But once this is noticed — that there is here a substantive, normative, epistemological principle underlying the Equal Weight View — the point from the previous section, it seems to me, applies. Especially given this substantive commitment — the commitment, namely, to the claim that the disagreement itself does not justify revising one’s degree of trust in oneself and others — the Equal Weight View cannot afford to be nonchalant about (other parts of) what justifies degrees of trust in oneself and in others. In order to avoid the kind of vacuity above, it must rule out the disagreement itself as possible justification for demoting. But in order to do that, it cannot settle (as Elga does) for just talking about the disagreement between you and those you take to be your peers. It must have something to say about whom you are justified in taking as your peers, and in particular, about you not being justified in demoting others from peerhood status based on the disagreement alone. If, for instance, you take Adam to be right very often in general, but only very rarely in cases of disagreement with you, the Equal Weight View — if it is to have anything interesting to say about how the views of others are to be taken into account — must have something to say regarding which stories could justify you in thinking that (for instance, the story ‘because Adam is wrong on this occasion’ must be ruled out by the Equal Weight View).

The Equal Weight View must, then, argue that the disagreement itself — the mere fact that Adam believes not-p when you take p to be true — is not legitimate asymmetrical evidence against his reliability. And if the discussion in the last few paragraphs is right, then the proponents of the Equal Weight View must defend this point independently of the formalities of conditionalization.52 They must, in other words, argue on substantive grounds that the disagreement itself is not relevant evidence regarding Adam’s reliability. Those of us who reject the Equal Weight View must argue to the contrary conclusion.

But given the ineliminability of the first-person perspective and the (at least moderate) self-trust that comes with it, why on Earth should you not see Adam’s belief not-p as reason to believe he is less reliable than you otherwise would take him to be?53 After all, when you believe p, you do not just entertain the thought p or wonder whether p. Rather, you really believe p, you take p to be true. And so you take Adam’s belief in not-p to be a mistake. And, of course, each mistake someone makes (on the relevant topic) makes him somewhat less reliable (on the relevant topic) and makes you somewhat more justified in treating him as less reliable (on the relevant topic).54 Why should this mistake, then, be any different? Why should it count — against Adam’s reliability — less than Adam’s previous mistakes?55 True, all of this is, as it were, from your own perspective, but it is precisely such an objection that is rendered irrelevant by the ineliminability point from section 3.

But wait, would this not beg the question against Adam (or against not-p)? You are trying to determine whether or not to believe p, and in the process you are trying to determine how much epistemic trust to place in Adam's view on p (so that you can factor in the probative force of his view in your own relevant degrees of belief). So does taking p to be true, and using it as a premiss in an argument for demoting Adam from peerhood status, not simply amount to begging the question? No, it does not, or at least not in a problematic way. The crucial point to note is that there is really nothing unique going on here. Whenever you try to decide how much trust to place in someone, or indeed, when deliberating epistemically about anything at all, your starting point is and cannot but be your own beliefs, degrees of belief, conditional probabilities, epistemic procedures and habits, and so on. If this is a cause for concern, it is a cause for much more general concern (indeed, if this fact undermines justification, the most radical of scepticisms seems to follow, a point to which I return below). But if at least sometimes justification can be had despite the fact that your starting point is your starting point, if starting there does not amount to begging the (or a) question in any objectionable way, then it is very hard to see why the particular instance present in the case of disagreement should be especially worrying.56 The point, then, quite simply, is this: perhaps there is something suspicious in your taking the disagreement itself as evidence that Adam is less reliable than you may have thought, indeed as stronger evidence for his unreliability than for your own. But there is nothing more suspicious about this piece of evidence than about pretty much all others. Hoping for the kind of justification that avoids this difficulty is a hope most of us have come to resist, perhaps as a part of epistemically growing up. The mere disagreement, I conclude, is in general a perfectly legitimate piece of evidence against Adam's reliability (in general, and in this case), and so often a good enough reason to demote him from the status of a peer.57

7. What is your reason for belief? That you believe that p, or that p?

This is not enough, though, for Christensen’s reply is still in play. ‘OK then’, the proponent of the Equal Weight View may now say, ‘I concede that the disagreement itself is a legitimate piece of evidence against Adam’s reliability. But it is just as legitimate as evidence against your reliability. So we are still stuck with the kind of epistemic symmetry only the Equal Weight View can accommodate.’58 It is important to see that this line of thought is confused.

What precisely is your reason for demoting Adam, or for revising your view of his reliability? Crucially, I now want to argue, your reason for changing your mind — your epistemic reason to demote Adam, the feature of the circumstances that in your mind makes demoting him the epistemically appropriate response — is not that he believes not-p whereas you believe p. Had this been your reason for demoting Adam, Christensen would have been right, and the symmetry preserved, for this piece of evidence counts equally against Adam's reliability and against yours. Rather, your reason for demoting Adam — the feature of the circumstances that in your mind justifies demoting him — is that he believes not-p whereas p. The epistemically relevant feature of his belief that not-p is not that it differs from yours, but rather that it is false. To see that this is the feature of the situation you take to be of normative epistemic significance — what your reason is for changing your mind about Adam's reliability — we can use the following counterfactual test: imagine a possible situation in which Adam truly believes not-p, and you are wrong in believing p; do you — as you actually are, thinking about this counterfactual situation — take this to be a reason to lower your estimate of Adam's reliability? Surely not. Now imagine a situation in which Adam falsely believes not-p, and you agree with him; do you — as you actually are, thinking about this counterfactual situation — take this to be a reason to lower your estimate of Adam's reliability? Of course. What this counterfactual test shows, then, is that what you take to be the epistemically relevant feature of the situation is that Adam is wrong, not that Adam and you differ. True, it is your own judgement that is expressed by the claim that Adam is wrong, that his belief is false. We can put this by saying that your reason to change your mind about Adam's reliability is — together with his belief that not-p — not that you believe that p, but rather that p (as you believe).59 But to insist that the 'as you believe' qualifier rules out that p as a reason for belief is precisely to ignore the ineliminability point, and to insist on the impossibly high standard that leads to scepticism more generally. Let us not do that, then. Your reason to change your mind about Adam's reliability is that p (not that you believe that p). And this epistemic reason — namely, that Adam is wrong — is not at all symmetrical (for you take him to be wrong, but you do not take yourself to be wrong). So Christensen's suggestion here fails.

This point — that often one's reason for belief in this sense is that p rather than that one believes that p — is easily and often missed, so let me spend some more time explaining it. Talk of reasons is, of course, dangerously ambiguous. When claiming that your reason for demoting Adam is that Adam is wrong regarding p (rather than that the two of you differ regarding p), I do not mean that your motivating reason — what causally leads you to demote him — is that he is wrong; my point is a normative, not a causal one. But nor do I mean that that p is a normative, or a justifying reason for your belief, because, after all, even if you are wrong about p (so that that p cannot justify anything), still your reason for demoting Adam is that he is wrong, not that you think he is wrong (as is evidenced by the counterfactual test). The reasons that are relevant here are your reasons, in the sense of what you take to be the relevant normative reasons, the features of the circumstances that in your mind epistemically justify the relevant response.60 This is consistent, of course, with their not being genuine reasons at all — as will be the case if and only if you are wrong in what you take to be the relevant normative reasons. The observation that your reason for demoting Adam — in this sense — is that he is wrong (as you believe), rather than that he and you differ, should be fairly uncontroversial, as the counterfactual test above shows. In particular, I do not need to take sides in the controversy over whether it is only true propositions that can be reasons for belief (in the more straightforward normative sense), or whether false beliefs (or their content) can also qualify. However we go on that question, when it comes to your reasons, or what you take to be the epistemically relevant features of the circumstances, it is quite clear — as the counterfactual test above shows — that your reason is that Adam is wrong, not that you believe that he is, or that the two of you differ.

The case of explanations is precisely analogous. Suppose that I offer the following explanation of the collapse of the Soviet Union: the Soviet Union collapsed, I say, because it was politically unjust. We can apply the counterfactual test (Would the Soviet Union have collapsed — you can ask me — had it remained unjust but had you believed that it was just? Would it have collapsed had it not been unjust, but had you continued to believe that it was unjust?) to determine that what I take to explain the collapse of the Soviet Union — what I take to be the explanatorily relevant features of the circumstances — is not that I believe that it was unjust, but rather that it was unjust. Of course, it is my own judgement about the injustices in the Soviet Union that I am here expressing. We can put this by saying that what explains (I think) the collapse of the Soviet Union is not that I believe that it was unjust, but rather that it was unjust (as I believe). And here too, the relevant sense of ‘explanation’ is not motivating or causal, nor is it the factive sense of explanation (after all, the counterfactual test yields the same result even if I am wrong, and the Soviet Union was not in fact unjust). Rather, what we are talking about here is what I take to be the explanatorily relevant feature of the situation — and that is that the Soviet Union was unjust, not that I believe that it was.

Your reason for demoting Adam, then, is that he is wrong (as you believe). And this reason is not factive — this can be your reason (what you take to be the normatively relevant feature of the circumstances) even if in fact Adam is not wrong. This means that Adam can likewise demote you, and his reason (in the same sense) for doing so is that you are wrong (as he believes). So in this way, something of the symmetry remains. But this is precisely as it should be. For we already know — from the discussion of the Right Reasons View — that the appropriate epistemic response to peer disagreement cannot fully depend on who is right. What the discussion in this section establishes, then, is that whether you are right or wrong about p, you can take that p as legitimate evidence against Adam's reliability (and he can take that not-p as legitimate evidence against yours). And what this means is that — whether you are right or wrong — the disagreement itself can be sufficient reason to demote Adam from his peerhood status.

Now, in the explanatory case, if your relevant belief (that the Soviet Union was unjust in a way that led to instability) is false, we may want to say that your suggested explanation is no explanation at all. Explanations seem to be factive in this way (whether this is unqualifiedly true is not something I need to comment on here). But if we are not in the business of explaining the collapse of the Soviet Union, but rather in the business of understanding your explanatory commitments, still the (purported) injustice of the Soviet Union is relevant, even if in fact the Soviet Union was not unjust. And Christensen’s reply (discussed throughout this section) is analogous precisely to the business of understanding your explanatory commitments. This is so, because Christensen’s reply — and the Equal Weight View more generally — derive whatever interest they have from the fact that the prior conditional probabilities they employ (that you or your interlocutor would be right in a case of disagreement) are your prior conditional probabilities, conditional probabilities you are (or should be) committed to. The point seems to be that given that you (perhaps justifiably) take Adam to be your peer, there is some incoherence in your credences if you refuse to give equal weight to your and Adam’s views in a case of disagreement. And Christensen’s claim — that the disagreement can serve as reason for demoting Adam only if it can equally serve as reason for demoting you — is initially interesting precisely because it seems to flesh out an implication of one of your relevant commitments (namely, that, antecedently to this disagreement, Adam is your peer). You seem to be committed to this symmetry, and so you seem to be committed to Christensen’s reply. So if you asymmetrically demote Adam, the thought seems to be, there is a tension — perhaps an incoherence — within your own commitments.

But it is the upshot of the discussion in this section that no such tension exists. This is so, because your own reason for asymmetrically demoting Adam — the feature you take to epistemically justify doing so — does not violate the symmetry to which you are committed. You, after all, are committed (to an extent) to the symmetry between your own views and Adam’s. You are not committed to a symmetry between p and not-p, when you take p to be true. So given that your reason — in the sense specified above — for demoting Adam is that p (as you believe), and not that you believe that p, you are not at all committed to demoting yourself in a similar way. And notice that this point — the point about the absence of tension within your commitments — holds whether or not p is true (as you believe). If p is false, then you are wrong — you were, after all, committed to p’s truth. But the epistemic possibility of p’s falsehood (which comes down to the fact that you rightly take yourself to be fallible) does not suffice to save Christensen’s reply. Your reason — in the specified, non-factive sense — for demoting Adam is that he believes not-p whereas p, and your commitment to this reason in no way commits you to equally demote yourself.

Of course, none of this applies to thermometers, or indeed to (mere) truthometers. If Adam and Tom disagree, and you think of them as equally reliable truthometers, then you should not take the disagreement itself as any asymmetrical evidence about, say, Adam’s reliability. But this is precisely where a disagreement in which you are one of the disagreeing parties is different (to you). For you cannot, do not, and are not epistemically required to treat yourself merely as a truthometer.

There is thus no general reason to rule out the disagreement itself as (asymmetrical) evidence61 against your interlocutor’s reliability. And this means that the Equal Weight View is false.

8. But is it the Extra Weight View?

If I am right, then, you should, in a sense, treat yourself differently from others, even when you take them to be in general just as good truthometers as you are. You should, in a sense, treat a disagreement between you and Adam differently from a disagreement between Tom and Adam.62 But this gives rise to the worry that underneath the not-just-a-truthometer rhetoric hides the Extra Weight View, the view according to which you should, in cases of disagreement, give extra weight to your view simply because, well, it is your view. But this view seems objectionable right off the bat — the epistemological analogue of chauvinism, or perhaps nepotism.

I agree that it is unreasonable to give your own view extra weight simply because it is yours (when Adam is just as reliable on these matters as you are). That it is yours seems epistemically irrelevant — just as the fact that one of two ‘disagreeing’ thermometers is yours is epistemically irrelevant. But I do not think that refusing to treat yourself as a truthometer entails the Extra Weight View.

To see why, return to what I had to say on the question whether you should treat the disagreement itself as a reason for demoting your interlocutor. There I insisted that your reason for demoting him was not that you believed that p but rather that p (as you believed). Had your reason for demoting been that you believed that p, then refusing to take that he believes that not-p as equally strong evidence for demoting yourself would indeed amount to epistemic chauvinism. But this is precisely not what I suggested. Taking that p as a reason for demoting your interlocutor is not chauvinistic in the same way. Similarly, and more generally, your reason for not ‘splitting the difference’ in cases of peer disagreement is not that your view counts for more because it is your view. Rather, it is that the credence you end up with seems (to you) best supported by the non-chauvinistic evidence.63

A worry remains. Even if on my view your reason for believing as you do is not that one of the views is your view, still my suggestion recommends an epistemic policy (that of not treating oneself merely as a truthometer) which will in fact result in your (initial) view affecting the credences you end up with more than others' views do. Indeed, the consequences of the not-treating-yourself-merely-as-a-truthometer strategy will be precisely similar to those of (one version of) the Extra Weight View. Is this not bad enough?

Yes, my suggested strategy will end up having epistemic consequences similar to those of the Extra Weight View. But no, this is not bad enough. Let me rely here on a kind of epistemic analogue of the intending–foreseeing distinction. By refusing to treat yourself merely as one truthometer among many, you can foresee that your view will in effect be given extra weight. But you do not thereby intend to give your view extra weight.64 The distinction between intentionally giving one's view extra weight on the one side, and refusing to treat oneself merely as a truthometer while foreseeing that one's view will in effect be given extra weight on the other side, seems to me to be normatively relevant.65 The former is objectionable. The latter is not, perhaps at least partly because it is inevitable.66

You may object, though, along the following lines:67 suppose that Tweedle Dee follows my (vague) instructions as to how to update his beliefs in a case of peer disagreement. Tweedle Dum, on the other hand, follows the Extra Weight View. And of course, all other things are equal between them. Then after updating their beliefs in a case of peer disagreement (with Adam) about p, Tweedle Dee's and Tweedle Dum's degrees of belief in p will be identical. Tweedle Dum, I insist, is not epistemically justified. Does it not follow, then, that neither is Tweedle Dee? There is, after all, no difference between them when it comes to evidence, or to past track record and reliability. Well, there may be no difference in the evidence available to them. But there is a difference in the evidence they use, or in their reason for having a certain degree of belief in p. It is, after all, a part of Tweedle Dum's reason for (degree of) belief that his view should count for more, and this makes his degree of belief unjustified, even if it is the same degree of belief reached by Tweedle Dee based only on legitimate considerations (Tweedle Dee's reason for his degree of belief, remember, does not involve according extra weight to his own view). And we know that in general, whether you are justified in believing p may depend on your epistemic history. If you inferred p by inference to the best explanation from the epistemically justified q and r, your believing p may very well be justified. If you believe p on astrological grounds, you are not justified in so believing. The justificatory status of your believing p thus depends — among other factors, no doubt — on what your reasons for believing p were. So there is nothing problematic or ad-hoc-ish about proclaiming Tweedle Dee justified and Tweedle Dum unjustified: their epistemic histories are very different, different in a way that makes a justificatory difference. The point is again analogous to one from the moral discussions of the intending–foreseeing distinction: if this is indeed a morally relevant distinction, then it is possible that two people will perform the same bodily movement, with the same known consequences, where the action of one will be morally permissible (because, say, a certain harm is merely foreseen) and the other's impermissible (because the same harm is intended). Analogously, then, it is possible that Tweedle Dee is justified in his degree of belief (because what he takes to be reasons for belief really are reasons for belief) and Tweedle Dum is not (because his reason for belief is that his belief should count for more, and this is a poor reason for belief in his situation), even though both reach the same degree of belief.

This is a bit sketchy, of course. And it is not as if in the practical domain the normative significance of the intending–foreseeing distinction is uncontroversial.68 Let me just note here, then, that to me it seems intuitively plausible that in the epistemological case — much more clearly than in the moral case69 — the intending–foreseeing distinction (or something close to it) is of normative significance: it just does seem to make a difference — regarding whether or not a belief of yours is justified — what your reason for your belief (in the sense above) is. So refusing to treat oneself merely as a truthometer does not amount to an endorsement of the Extra Weight View.70

9. Bootstrapping

Before concluding, let me address one initially powerful objection to the Extra Weight View, an objection that my emerging view is also subject to.

The objection comes from Elga (2007, pp. 486–8), and it is that of bootstrapping:71 assume for reductio that in cases of disagreement you should give more weight to your own view than to Adam’s. If so, you are justified (to an extent, at least) in believing that you were right here and Adam wrong. But if so, you can take this very fact as at least some evidence that you are more reliable on these matters than Adam. So next time you can assign even more weight to your opinion over Adam’s. And so on. But that the view you take to be right nicely fits with, well, the view you take to be right is no evidence at all for your reliability. So the Extra Weight View is false.
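One schematic way to picture the dynamics (an illustration of my own, not Elga's formulation) is this: suppose that at stage $n$ you give your own view some weight $w_n > .5$ in disagreements with Adam. Then each disagreement gets resolved, by your lights, in your favour; and if each such resolution is counted as a success for you and a failure for Adam, your track-record-based estimate of your relative reliability can only go up, so that $w_{n+1} \geq w_n$, and so on. Your advantage ratchets upward without any genuine evidence of your greater reliability ever coming in.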

Now, in the previous section I insisted that the not-merely-a-truthometer strategy does not commit me to the Extra Weight View. But note that this will not get me off the bootstrapping hook. For the objection applies even if I merely foresee that by employing the strategy my view will in effect be given extra weight — nothing here depends on my intentionally giving my view extra weight.72 So the objection applies to my view as well.

I think Elga is right that the Extra Weight View opens the door for such bootstrapping. But I think this is a result we are going to have to learn to live with.

Remember, at this point we already know that the interesting question about peer disagreement is how to proceed given disagreement with someone we justifiably take to be our peer, and that therefore the question is not divorced from the question of how to justify judgements about others’ (and one’s own) reliability. We know, in other words, that here we are going to have to utilize pretty much everything that is epistemically available to us, including our judgements about past track records, both of ourselves and of others. And we also know that if scepticism is to be avoided, it cannot count conclusively against the justification of so doing that we are going to do all of this from the starting point of our own initial beliefs and epistemic dispositions. So we know that bootstrapping cannot be ruled out from the start.

In this way, the relation between the Equal Weight View and scepticism is actually more intimate than is often noticed. The point is often made that if the Equal Weight View is true, we may be getting closer to scepticism because we must reduce our confidence in many of our (controversial) beliefs, perhaps to the point of suspension of judgement.73 And, of course, if some sceptics walk among us as peers, the route from the Equal Weight View to scepticism may be quicker still. But thinking about bootstrapping shows a deeper (and non-contingent) connection between the Equal Weight View and scepticism. Some of the assumptions needed to make the Equal Weight View plausible underlie, if pursued consistently, some (more or less) traditional sceptical worries. The bootstrapping worry is after all — as Elga (2007, p. 488) notes — if not a particular instance then a close analogue of a very general worry (the one sometimes referred to as 'the problem of easy knowledge' — see Cohen 2002). And the underlying thought that we are not entitled to trust our own epistemic abilities to a degree greater than their track record calls for seems a close relative of the claim that we are not entitled to employ a belief-forming method without first having an independently justified belief in its reliability.74 But this thought, of course, naturally leads to scepticism. If I am right here, and if the Equal Weight View ultimately rests on assumptions that naturally lead to scepticism, it follows that the Equal Weight View is — even worse than being false — quite uninteresting.

Now, unfortunately I do not know what to say about the problem of easy knowledge more generally, or about related sceptical worries.75 Perhaps there is a general way to avoid bootstrapping and easy knowledge. If so, we can safely expect such a general way to apply to the case of Elga’s bootstrapping objection as well.76 Or perhaps we are going to have to live with the possibility of some forms of bootstrapping.77 If so, biting the bullet on Elga’s bootstrapping objection should not be unacceptable — though, again, I would have loved to be able to say more here, and in particular to say when something like bootstrapping is acceptable and when it is not; filling in the details here will depend on the general way of dealing with the problem of easy knowledge.78 Or perhaps bootstrapping and easy knowledge are unacceptable, and cannot be avoided short of scepticism. In such a case scepticism is the way to go, and Elga’s bootstrapping argument — together with the whole topic of peer disagreement — is uninteresting. But even without having more to say, placing Elga’s bootstrapping objection in the context of the larger sceptical problem of which it is an instance is not without value, for it shows that in all likelihood, there is no special problem here for the not-merely-a-truthometer strategy. And so even if I do not know how exactly to solve it, I think I can be reasonably confident that (if scepticism can be avoided) it can be solved.

10. Conclusion

If my arguments work, then, the Equal Weight View is false. So are the I Don’t Care View, the Right Reasons View, the Total Evidence View, and the Extra Weight View. Is there anything more positive, then, that follows from my arguments? What is the right thing to say about the way to take peer disagreement into account?

Because we apparently need a name for it, let me call the view emerging here the Common Sense View. According to this Common Sense View, that someone you (justifiably) take to be your peer disagrees with you about p should usually reduce your confidence in p. It is among your relevant evidence regarding p, and in most cases it would be foolish to ignore it. That you yourself believe p, however, will hardly ever be (for you) relevant evidence regarding p.79 (I am not sure, but this may be another point of departure from the Total Evidence View.) Also, that someone believes not-p when p is true (as you believe) will usually be some evidence against her reliability on matters such as p. Often, then, in cases of peer disagreement your way of maintaining (or retaining) probabilistic coherence will be by simultaneously reducing your confidence in the controversial claim and in the reliability of both yourself and your (supposed) peer, though reducing it more sharply regarding your (supposed) peer. And notice that on this view — unlike on the Right Reasons View, and perhaps also unlike the Total Evidence View — the disagreement itself is usually evidence, for each of the disagreeing peers, against his or her own view. In the face of what seems to be peer disagreement, we should all lower our confidence, though not as much as the Equal Weight View would have us do.
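To fix ideas, here is a purely illustrative numerical sketch; the particular numbers are mine, and nothing in the Common Sense View fixes them. Suppose that before the disagreement your credence in p is 0.9, and that you estimate both your own reliability and Adam's on such matters at 0.8. On learning that Adam believes not-p, the Common Sense View would have you move roughly as follows:
$$\Pr(p): 0.9 \to 0.8, \qquad \text{Adam's estimated reliability}: 0.8 \to 0.7, \qquad \text{your own estimated reliability}: 0.8 \to 0.78.$$
What matters is only the direction and the relative size of the shifts (everything goes down, and Adam's estimated reliability goes down more sharply than yours), not the particular values, which, as I argue below, will vary from one case of peer disagreement to another.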

The response to peer disagreement recommended by the Common Sense View, then, is symmetrical, at least in outline and in most cases. But the final justified degree of belief is not at all symmetrical: if you have responded correctly to the (first-stage) evidence regarding p and Adam has not, then you should both reduce your confidence once you learn about the peer disagreement between you two. But it is not as if you should now both have the same degree of belief in p, nor does it follow that if you do not (if you still tend towards believing p and Adam towards believing not-p) then both of you are equally (un)justified in your respective degrees of belief. But this, of course, is precisely as it should be. For it is another one of the oddities of the Equal Weight View that it goes the other way here. On the Equal Weight View, if you have responded correctly to the first-stage evidence and Adam has not, and then, facing peer disagreement, you both ‘split the difference’ and suspend judgement, you are both equally justified. But why expect such symmetry, given that you have responded correctly to the first-stage evidence, and Adam has not? The compelling symmetry-related idea is that both peers should respond similarly to the disagreement itself, not that both should end up (after so doing) with the same degrees of belief or the same epistemic status for the degree of belief they do in fact have. The Common Sense View, then, captures the compelling idea about the symmetrical response to the disagreement itself (unlike the Right Reasons View, and perhaps also unlike the Total Evidence View), without neglecting the significance of the asymmetrical nature of the first-stage evidence (as the Equal Weight View does).

How much weight, then, should you give peer disagreement in revising your degree of belief in the relevant controversial claim? The Common Sense View has nothing interesting and general to say in reply. Indeed, it asserts that there is no interesting general reply that can be given here.80 For the answer depends on too many factors that differ from one case of peer disagreement to another. Depending on other things you (justifiably) believe, on other evidence you have, on the epistemic methods you are justified in employing, on the (perhaps known) track records of both you and Adam, for some ps in some circumstances you should reduce your confidence in p more, for others less. For some, you should take the disagreement as reason to demote Adam more significantly from the status of a peer, for others less. Indeed, perhaps there are even circumstances in which you should accord no weight at all to Adam’s view, and not reduce your confidence in p. And perhaps sometimes you should split the difference, as the Equal Weight View requires.81 And — to return to a point that was set aside very early on — perhaps at times the right thing to do in the face of peer disagreement is to reduce your confidence in the claim that all evidence is indeed shared between you and your peer.82 Also, perhaps there are some more pragmatic, compromise strategies, the availability of which depends on yet further features of the specific case of disagreement — for instance, in some cases, but not in others, the epistemically right thing to do would be temporarily to suspend judgement while going through — perhaps together — the considerations that led each party to his or her belief. Perhaps the arithmetical calculation example is of this kind. But perhaps the philosophical one is not.

But the central point here is that there is no strategy — none, that is, that is more specific than the strategy of believing what is best supported by your evidence — that is in general justified. There is no general and more informative answer to the question ‘How should we proceed epistemically when we encounter peer disagreement?’, any more than there are more general or more informative answers to questions like ‘How should we proceed epistemically when we encounter circumstances in which we are only partly reliable?’ or ‘How should we proceed epistemically in cases in which some of our judgements are in tension with each other?’ or ‘How should we proceed epistemically with regard to p when someone tells us that p?’ All of these questions radically underdescribe the epistemically relevant features of the circumstances, and so none of these questions can be answered in a general and very informative way. Peer disagreement is not special in this regard.

In this way, then, the Common Sense View — my view — is rather messy, and offers less than you may have hoped for. But it still seems like the most we can sensibly say about peer disagreement.83

1 Later on I will have to take issue with the suggestion that this way of putting things captures the spirit of the Equal Weight View. For now, though, it will do. Notice also that Elga discusses the more general question: How should we take into account the beliefs of others, whatever our estimate of their reliability (compared to ours) is? His answer to this more general question — the one quoted in the text — entails, when applied to the case of peers, the Equal Weight View.
2 Christensen does not put things in quite this way, but for his (perhaps qualified) endorsement of the Elga way of putting the view, see Christensen 2007 (p. 199, n. 15). Let me note here that though at times (e.g. 2007, p. 203) Christensen rather clearly endorses the Equal Weight View, at others (e.g. 2007, pp. 189, 193) he just says that upon learning about the disagreement of your peer you should reduce your confidence in the contested proposition. This, of course, is a much weaker thesis than the Equal Weight View. Nevertheless, I will regard him as a proponent of the Equal Weight View, both because (as noted) at times he seems committed to this stronger view, and because throughout his paper he emphasizes the symmetry consideration in support of his view. But such symmetry considerations — if they support anything at all — support the Equal Weight View and not a more modest claim. (For this observation, see also Weatherson MS.) Feldman (2006) also emphasizes the relevance of symmetry considerations, though see the general change in Feldman’s tone in his 2009.
3 For my reasons, see Enoch 2009.
4 Kelly 2005 also starts with the two clarifications I have just made.
The only way to insist that there is something illegitimate about restricting the discussion to just the epistemological question, it seems to me, is to argue that there cannot be cases of disagreement of the relevant kind where we are justifiably metaphysically confident in the status of the relevant subject matter. I do not see why we should believe that this is so.
5 Here I follow Elga 2007 (p. 499, n. 21).
6 In other words, the epistemically appropriate response to disagreement with someone you take to be your peer may not be identical to the epistemically appropriate response to disagreement with someone you do not take to be either your inferior or your superior. I am here only interested in the former. I thank an anonymous referee for pointing out the significance of this distinction.
7 There may be important differences between these examples. I return to this point in the concluding section.
8 A point Christensen (2007) repeatedly emphasizes. Here see also Roush 2009.
9 For arguments against epistemic permissiveness, see White 2005. In fact, I think the discussion of these matters in the literature is misleading in the following way. The Uniqueness Thesis is the claim that, with given evidence, there is a unique degree of belief that is maximally rational. So in order to deny this claim, one has to assert that there is no one degree of belief that is uniquely maximally rational. How about the view, though, according to which there is one degree of belief that is maximally rational, but some other degrees of belief — though less than maximally rational — are still rationally permissible? This is a possible position that as far as I know is not discussed in this context, but it is a rather natural position to hold, and anyway is the one suggested by the analogy with moral permissiveness. Furthermore, it seems like the fact that there is room in logical space for such a position may be relevant for a full treatment of peer disagreement. But for my purposes here I do not need to discuss it further, as here I will be assuming — for the reason given in the text — the Uniqueness Thesis.
10 For a related discussion, see Christensen 2007, pp. 190–2.
11 Kelly (forthcoming, pp. 6 ff.) argues that the Equal Weight View is inconsistent with epistemic permissiveness. I am convinced by his argument, but I find the conclusion almost uninterestingly narrow: for proponents of the Equal Weight View can restrict their view to just those cases where permissiveness does not apply, without losing any of the rationales that make the Equal Weight View initially plausible.
12 Christensen is right, I think, in claiming that if you believe that the right treatment is A, and your colleague — who has a slightly better reliability-track-record on these matters than you do — believes it is treatment B, you should not administer treatment A. But consider another practical case: suppose you are a juror, and you have come to be fairly confident that the accused should not be convicted. You then find out that the other jurors — all of whom you take to be your peers, and privy to the same evidence as you are — vote to convict. Should you revise your vote? I take it that the answer is ‘no’. There is, of course, no serious discrepancy between the two cases here. I take it that the different practical — not epistemological — considerations that apply to them account for the different judgements about what to do. But our question here is rather what to believe.
13 Analogies with measuring instruments — and critiques thereof — are common in the context of discussions of peer disagreement. See, for instance, Christensen 2007 (p. 196), Kelly forthcoming (pp. 3–4), and the references there.
14 The fact that mathematical truths are necessary makes talking about probabilities here somewhat awkward, but not, I take it, too awkward for the example to serve its role in the text.
15 A way of dramatizing this question even further is by imagining a case in which, first, you know of two people who disagree with each other, and later, you find out that one of them is you. See Conee 2009, p. 315. I return to this dramatization below.
16 There is, I take it, a difference between what your prior conditional probability is and what your belief about the relevant probability is. Prior conditional probabilities are not (or at least need not be) beliefs. Indeed, someone can probably have such prior conditional probabilities (perhaps dispositionally understood) without even mastering probabilistic concepts. But in our case we may safely assume, I think, that all concerned are reflective enough to have beliefs about the relevant probabilities, and indeed to have prior conditional probabilities that are in a perfectly natural sense in line with such probabilistic beliefs. With this restriction, no problems should be encountered in extending talk of epistemic justification from beliefs to such prior conditional probabilities. In what follows in the text I ignore this complication.
17 At one point Elga (2007, p. 483) seems to notice something like this point, when noting that even perfect advisors do not deserve full trust, because you should not be perfectly confident in your ability to recognize a perfect advisor. But he does not pursue the point further. Christensen (2007, pp. 196–8) does discuss the first-person perspective, and takes for granted the fact that we are locked in our own beliefs, but does not, I think, appreciate the full strength of the problem this poses for the Equal Weight View. For a related point, see also Weatherson MS.
18 The interaction between our taking a first-person and a third-person perspective towards ourselves is a central theme in Nagel’s thought. See, for instance, Nagel 1986.
19 See, for instance, Foley 2001 (p. 79), Kelly 2005 (p. 179, also quoting Foley).
There is in the vicinity a related worry about possible self-defeat. Assume, as seems likely, that some people reject the Equal Weight View. Further assume — as seems likely, though probably not as likely — that some of these people are people whom supporters of the Equal Weight View (justifiably) consider to be their epistemic peers about such things. Then the Equal Weight View may very well recommend suspending judgement about the Equal Weight View itself. Now this is not exactly self-defeat (the Equal Weight View entails not its own negation, but rather the claim that it cannot be believed justifiably), and furthermore the point depends on the assumptions just mentioned. But it is still, I think, a worrying result. For a development of this point in terms of degrees of belief, and the conclusion that the Equal Weight View may issue incoherent recommendations, see Weatherson MS. For the claim that something along these lines is a decisive objection to a global Equal Weight View, see Elga MS. Elga proceeds to defend the surprising claim that restricting the Equal Weight View so that it does not apply to itself can be shown not to be objectionably ad hoc. I differ, but for reasons I do not have to get into here. For some critique of the self-application of (something like) the Equal Weight View, see Frances forthcoming (Sect. 5, ninth argument).
20 Assuming, of course, that our trust in the two thermometers (my hands and yours) remains justifiably equal throughout this little experiment. This assumption may often be false.
21 It seems to me, for example, that we can rather easily distance ourselves from the full-blooded beliefs of our past selves. It is the beliefs of the present self from which we cannot completely distance ourselves in this way. It is an interesting question whether the future self is in this respect like the past self or like the present one.
Notice that the possibility of epistemic distancing is not a matter of degree of confidence. There may be seemings in which you are extremely confident (that there appears to be a text in front of you), and you may have beliefs in which you are not very confident at all. The point is different, though: it is the one put in the text in terms of whether the epistemic self is fully engaged.
22 Notice that the distinction is not drawn in terms of the believer’s success in actually responding to good reasons. Such an understanding would lead to the Right Reasons View, which fails for the reasons I give in Sect. 4 below. The believing, reasoning self may be fully engaged even in cases in which it fails in its reasoning — it can be rational (in the sense in which this is the opposite of a-rational) while being irrational.
23 Interestingly, sometimes proponents of the Equal Weight View talk as if our reflective response to evidence just is a kind of seeming. Feldman (2006, p. 227) describes perceptual cases, and later (2006, p. 231) says he sees no reason to think there is any case that is not relevantly like them. Christensen (2007, p. 194) speaks in terms of ‘reacting to data’, and offers the example of people with ‘Savant Syndrome’ who ‘just see’ theoremhood (2007, pp. 202–3). This way of talking may give the Equal Weight View a more attractive sound than it deserves, for the reason given in the text.
Wedgwood (forthcoming) develops a view of peer disagreement that is in many ways close to the one I develop in this paper. And yet when it comes to seemings we could not be further from each other: Wedgwood thinks — perhaps because of his commitment to a special kind of internalism — that seemings are an especially clear case where we do not have to (in my words) treat ourselves as truthometers. For the reasons given in the text, I differ. But let me note that in other relevant respects, what I say about peer disagreement may apply to seemings as well. For instance, the claim — emphasized in Sect. 6 below — that it may sometimes be epistemically justified to take the disagreement itself as a reason for demoting one’s peer may very well apply to seemings as well. (I thank Ofer Malcai for this observation.)
24 Something like the truthometer view is fairly clearly a motivation both for Feldman’s (2006) and for Christensen’s (2007) endorsements of something like the Equal Weight View. If it is also a motivation for Elga (2007), then he does a good job concealing it. Elga’s major explicit motivations are the way he thinks conditionalization works, and the bootstrapping objection to views other than the Equal Weight View. Bogardus (2009, p. 326) mentions another motivation for the Equal Weight View — that it gets some cases intuitively right. But I am unconvinced by some of the cases he mentions, and anyway this is not a good way of motivating the Equal Weight View as a universal view of the right way to respond to peer disagreement.
25 I take it this is what Christensen (2007, p. 198) does when granting the ineliminability of the first-person perspective here, and still defending what looks like the Equal Weight View.
26 Before criticizing Kelly’s views, let me emphasize that there are obvious and important similarities between his views and arguments and mine, and that I am much indebted to his discussion of peer disagreement.
27Kelly’s views have undergone a change, and so his 2005 and forthcoming present two different — though related — alternatives to the Equal Weight View.
28Christensen’s (2007, p. 207) example. Frances (forthcoming) also argues against Kelly on this point.
29 This point is made by Christensen (2007, pp. 207–8), and by Kelly himself (forthcoming, p. 28), from whom I take the talk of a ‘god’s eye view of the evidence’.
30 Christensen (2007, p. 209) makes a related point.
31 Kelly (forthcoming, p. 59) considers an objection to his view according to which he is committed to an implausible externalism about epistemic justification. He rejects this claim for reasons that need not concern us here. Even if he is right about the objection as he puts it, a close worry remains, and I think it is best captured by the point in the text here.
32 White (2009, p. 237) emphasizes this point.
33 Or at least, we agree that the reading of the other truthometer is relevant evidence. It is, after all, a very surprising idea that you can (and should) in general treat your own believing p as evidence that p, a point to which I return below. Here too, then, the self–other symmetry the Equal Weight View seems to rely on seems to fail. (An analogy: perhaps you should — when doing politics, say — take other people’s mistaken convictions into account, respecting them as they are. But it makes very little sense to think that you should treat yourself in this way. See Raz 1998, pp. 27–8.)
34 And as Christensen (2007, p. 198) concurs.
35 A central line of thought in Kelly (forthcoming).
36 For one thing, it nicely captures the distinction between seemings and beliefs. In seemings, we do not seem to have any further evidence — our total evidence just is the seeming, and so when there are two disagreeing seemings, we should split the difference between them. But with beliefs that are based on evidence there is always more evidence, and so the Equal Weight View does not follow. See Kelly’s (forthcoming, p. 14) discussion of perceptual cases.
37 Kelly (forthcoming, pp. 43–4) seems to argue that it does. On the other hand, he says (forthcoming, p. 6) he suspects a more symmetrical solution would be true — one according to which both parties to the disagreement are entitled to stand their ground. So it is not completely clear to me what is going on here.
38 Kelly (forthcoming, pp. 52–3) says his theory is not vulnerable to the bootstrapping problem Elga raises as an objection to the Extra Weight View. For reasons that I give below in discussing the bootstrapping objection, I think Kelly is wrong here. If so, this is another worry about the Total Evidence View. But I think my reply to the bootstrapping objection (below) can be utilized by the Total Evidence View as well.
39 And there are more options, like how we should revise our degrees of belief in the face of disagreement with someone we do not but should take to be our peer, and so on. The three options in the text suffice for my purposes. Frances (forthcoming) also notices these three options, but for his purposes he does not have to decide among them.
40 Can Elga take the orthodox Bayesian way out, claiming that prior probabilities (including conditional ones) are not to be evaluated one way or another, but rather to be taken as given? (I thank Pete Graham for this suggestion.) It is hard to see how this would help. First, and as is well known, this rather extreme Bayesian claim faces its own problems. Second, and relatedly, if the Equal Weight View can only be defended on this assumption, this is already an interesting result (and, of course, proponents of the Equal Weight View do not mention this assumption as an explicit premiss in their arguments). And third, it is highly implausible to treat such prior conditional degrees of belief as really, ultimately prior. Rather, your belief that Adam is your peer is much more likely to be itself the product of numerous revisions, in light of considerations of the kind to be mentioned in the text. If so, Elga cannot escape the point in the text even if the orthodox Bayesian assumption is in fact true.
41 Those familiar with the literature on wide-scopism, narrow-scopism, and detachment will see the similarities between the point in the text here and points often made in that context. See, for instance, Schroeder 2004.
42 Wedgwood (forthcoming) also emphasizes the significance of the difference between believing that someone is your peer and rationally believing that he is your peer.
43 In conversation, Elga agreed (if I understood him correctly) that this revision better captures the spirit of the Equal Weight View.
44 Let me just note here without argument that Elga's reply to the problem of spinelessness collapses if we revise the Equal Weight View in the way suggested in the text. So Elga must choose between the way suggested in the text to avoid the counterexample mentioned here and his reply to the spinelessness problem. He cannot have both. For a related point, see Kelly forthcoming (p. 56, n. 33).
45 I take this to be a point closely resembling Foley’s (2001) insistence on the ineliminability of (moderate) self-trust.
46 Weatherson (MS, p. 3) makes a similar point.
47 Frances (forthcoming) also seems to think that the disagreement can be a reason for demoting one’s peer.
48 These last two options are the strategies employed by Christensen (2007, pp. 199–201) and Elga (2007, p. 491), respectively, to deal with what Christensen calls the Extreme Restaurant Case, in which your friend and peer comes up with a totally crazy number as his part of the cheque (say, it is greater than the amount of the cheque as a whole). I do not know whether these strategies work, but as Kelly (forthcoming, p. 40) notes, they are rather complicated strategies, designed to yield a result that his theory (and any theory that allows the disagreement itself as evidence against the interlocutor’s reliability) yields much more naturally.
49 Which does not mean that if the disagreement itself is reason for demoting, no one will ever qualify as a peer. Perhaps, for instance, you are — before the disagreement — justified in treating someone as your superior, and after demoting her somewhat (because of the disagreement) you now believe she is your peer.
50 In the text here I talk as if one’s conditional probabilities are beliefs, or perhaps something close to beliefs. This may be imprecise, but not, I think, in ways that should worry us here. See n. 16, above.
51 See Talbott 2008, who also notices that some Bayesians restrict their principle of conditionalization to situations in which one does not (and presumably also should not) change one’s initial conditional probabilities. Given such a restricted principle of conditionalization, the argument in the text amounts to insisting that the case of peer disagreement often lies outside the scope of the appropriately restricted conditionalization principle.
52 Though perhaps in the context of a wider discussion of conditionalization and the problem of rigid conditional probabilities. See the previous note. White (2009, pp. 238–9) emphasizes a similar point, independently of the problem of rigid conditional probabilities (which he does not mention): for what we are presumably required to conditionalize on is the strongest proposition that we learn, and in the case of peer disagreement this includes the first-stage evidence as well. If this evidence is to be excluded from the conditionalization process, this has to be argued for on grounds independent from the conditionalization itself. For the observation that the disagreement-evidence may sometimes be trumped by the first-stage evidence, see Feldman 2009, p. 297.
53 Feldman (2006, p. 228) has a quick discussion of self-trust, but one that misses, I think, the point in the text here.
54 This may bring me close to the dogmatism paradox, but not close enough: while it is paradoxical to claim that knowing (or justifiably believing) that p entitles you to ignore all evidence against p, it is not paradoxical to say that knowing (or justifiably believing) that p often entitles you to decrease your trust (for this very reason) in evidence that suggests not-p.
55 The only answer I can think of here is in terms of avoiding bootstrapping. I return to bootstrapping later in the text.
56 As Christensen (2007, p. 191, n. 7) notes, in our context the mere fact that an argument or a piece of evidence will not convince an interlocutor does not show that relying on it would be begging the question in any objectionable way. Kelly (forthcoming, pp. 60–2) identifies confusion on this point as one source of the appeal of the Equal Weight View.
57 Wedgwood (forthcoming) also takes the disagreement itself as reason for demoting one’s interlocutor from his status as a peer, but he does not discuss the complication I discuss in the next section.
Let me remind you that by demoting him from the status of a peer you are not committing yourself to the claim that Adam is less smart or rational than you are, only to the claim that he is less likely to get things right on this topic, or to have gotten things right here. There may be many explanations of his lesser reliability that do not invoke his stupidity or irrationality. (Perhaps, for instance, he went to the wrong graduate school.)
58 See Christensen 2007, p. 196.
59 This is a paraphrase of Raz 1998 (p. 27). Raz discusses the practical, political significance of disagreement. The question is whether disagreement (of some sort, about some things) undermines legitimate state action. And Raz insists that it does not, partly because your reason for action here is not that you believe so-and-so (when others believe otherwise), but rather that so-and-so. This reason for action is not, of course, symmetrical between you and those you disagree with. Now, as I emphasized in the introduction, my question in this paper is epistemological, about what to believe, not practical, about what to do. So Raz’s point does not directly apply to the epistemological discussion of peer disagreement (and there may be other considerations distinguishing between the epistemic and the political significance of disagreement). But my point in the text here is nevertheless closely analogous to Raz’s point, and is naturally put in the terms borrowed from him.
For a point similar to the one in the text in an epistemological context, see Schroeder 2008. For its application in a metaethical context, see Enoch 2010.
60 This is the epistemic analogue of the kind of reason Schroeder (2007, p. 14) classifies as subjective normative reasons.
61 I conducted most of the discussion in this section in terms of reasons for beliefs, not in terms of evidence (so that your reason for believing that Adam is less reliable is that he is wrong about p). Perhaps, then, the shift now back to evidence is unwarranted. Perhaps, in other words, while I was right about your reason for demoting Adam and its asymmetrical nature, it is still true that the evidence regarding reliability is entirely symmetrical here. I am not sure what to say about this, because I am not sure what to make of the distinction between reasons for belief (in the sense in the text in this section) and evidence, or between your reason and your evidence. If you think that evidence really is very different, feel free to rephrase talk of evidence here in terms of reasons. I do not think anything of significance for my argument is lost when you do.
62 If you disagree with Adam, and Tom is aware of this disagreement, he should, of course, treat both of you merely as truthometers. But you know as much. You know that he should give your and Adam's views equal weight. So are you not being inconsistent in believing that he should give you equal weight, and you should not? No, you are not. For you know that you are privy to the first-stage evidence, and Tom is not. Tom should thus just rely on the second-stage evidence — namely, the readings of the two truthometers. You, on the other hand, should not abstract from all of the first-stage evidence.
63 A point missed by Feldman (2006, p. 224).
64 For a somewhat similar point, though not in terms of the intending–foreseeing distinction, see Kelly forthcoming (pp. 43–4). Christensen’s (2007, p. 196) wording — asking whether from within a first-person perspective I should ‘give my beliefs … a kind of privileged position’ — is crucially ambiguous between the intending and foreseeing versions.
65 Let me return, then, to a dramatization already mentioned above: what if you first find out about two disagreeing peers, and then find out that one of them is you? Well, the fact that one of them is you is, of course, not relevant evidence at all. It is just that you know — you foresee — that if one of the two disagreeing peers is you, you will behave epistemically differently in this case from cases in which you are the impartial spectator. This may be so, for instance, simply because in the former case you are also privy to the first-stage evidence, and in the latter case you may not be. Indeed, if you are privy to that evidence too, then in the spectator case too you should not in general end up with a symmetrical view of the disagreement.
66 Let me briefly comment here on an analogy with the issue of agent-relativity in the practical — for instance, moral — domain. When I choose an action from available alternatives, should I ground my choice only in agent-neutral values and reasons, like perhaps that of promoting the net balance of pleasure over pain? Or should I instead acknowledge the weight of some agent-relative values and reasons, like perhaps the reasons I — as someone writing a philosophy paper — have to be as rigorous as I can, or the value to me of my child’s well-being (over and above its place in the general utilitarian calculus)? Should it matter to me in deliberation whether a certain consequence will be the consequence of my action, or the consequence of someone else’s? (And so on.) I am not sure, but it is tempting to think about the discussion in this paper as an analogue of a view in the practical domain that insists that agent-neutral reasons and values do not exhaust the practically normative. The truthometer view, of course, is the analogue of the view according to which all practical normativity is agent-neutral. Furthermore, agent-relativity may very well require the normative significance of the intending–foreseeing distinction, or a distinction rather close to it. Here too, then, the analogy between the epistemological issues discussed in this paper and the agent-relativity issue in ethics is rather striking. Now, I do not know what to think about agent-relativity in the practical domain, but it seems to me that if it can be defended, it has to be defended along lines similar to the ones in the text here: that a consequence will be the consequence of my doing, for instance, does not seem to me to carry moral significance. (Analogously: that a view is mine does not justify me giving it more weight in a case of disagreement.) But perhaps my unique role in my own actions can after all justify agent-relativity of some sort. (Analogously: perhaps my unique role in revising my own beliefs can after all justify some asymmetrical self-trust.)
I do not know whether this analogy is merely anecdotal, or whether there is more to it. I hope to give the matter some more thought in the future.
67 I thank an anonymous referee for this objection.
68 For my own doubts, see Enoch 2007.
69 It is extremely plausible that epistemic justification depends, at least partly, on epistemic history, as explained in the text above. But it is not at all as plausible that the moral status of an action — whether, say, it is morally permissible — depends on the agent’s practical-reasoning-history. Indeed, whether this is so is a matter of considerable scholarly controversy, mostly in the context of attacks on — and defences of — the intending–foreseeing distinction. This is why I think the intending–foreseeing distinction is much more plausible in the context of epistemic justification (in which, surprisingly, it has not received attention): arguably, the epistemic analogue of moral permissibility (to which it is not clear that history is relevant in this way) is not justification, but truth.
70 For a different strategy of avoiding the Extra Weight View, see Wedgwood forthcoming. If it succeeds, then even without the analogue of the intending–foreseeing distinction I can avoid the Extra Weight View. But in fact I think Wedgwood’s strategy fails, for reasons that I cannot discuss here.
71 Kelly (forthcoming) raises a few bootstrapping-related objections to the Equal Weight View. I do not discuss them here.
72 This is also why Kelly (forthcoming, pp. 52 ff.) is wrong to say that the bootstrapping objection does not apply to the Total Evidence View: by revising beliefs in cases of peer disagreement according to the Total Evidence View, you know that your view will in effect be given extra weight compared to that of Adam, and this is enough to get the bootstrapping objection off the ground.
73 Feldman (2006, p. 217), for instance, describes his view as 'contingent real-world scepticism'.
74 For more on — and against — this requirement, see Enoch and Schechter 2008.
76 Vogel (2008) suggests that we avoid the problem of easy knowledge — when we want to — by introducing an explicit epistemic prohibition on rule-circularity. This move seems to me to be objectionably ad hoc. But the point I want to make here, of course, is that if some such move is legitimate, it may be legitimate as a way of avoiding bootstrapping here too.
77 For a compelling argument supporting this conclusion (or at least that this conclusion is the only alternative to scepticism), see Van Cleve 2003.
78 The theory Schechter and I develop in Enoch and Schechter 2008 may have interesting implications here: for it may be seen as offering a rationale for a distinction between cases where 'blind' reasoning (reasoning without an independently justified belief in the reliability of the relevant kind of reasoning) may be justified, and cases where it is not. The former are cases of forming beliefs using basic belief-forming methods that are indispensable to a rationally obligatory project. Arguably, the moderate self-trust called for by the rejection of the truthometer view satisfies this condition.
79 For some discussion of this asymmetry, see Roush (2009, pp. 258–9).
Sometimes the fact that you have this belief could be relevant evidence for you. One kind of example would be if p is a very special proposition (for instance, the proposition that you have a brain, or that you have beliefs, or that there are some propositions about the truth of which you disagree with Adam). Another kind of example would be if you have forgotten the initial evidence, but you find in yourself the belief that p; in such a case you may take this as evidence that p (assuming you are somewhat reliable about these matters, and know as much). But this is a case in which you look at your past self from a third-person perspective.
80 Here the Common Sense View is very similar to Kelly’s Total Evidence View. See Kelly forthcoming, p. 32. And for similar suggestions that there is no general, informative principle regarding the appropriate response to peer disagreement, see Feldman (2009, Sect. 1), and Roush (2009, pp. 261–2), though the routes by which both of these authors get to their closely related conclusions are very different (and both very different from my own).
81 Kelly (forthcoming, pp. 34 ff.) thinks the number of peers disagreeing with you can make all the difference — to the point of meriting full deference. I am not sure that full deference is consistent with the ineliminability of the first-person perspective I (but not Kelly) have been emphasizing. But I can certainly agree that numbers do make a difference here, in something like the way Kelly envisages.
82 A point I owe to Michael Antony.
83 For very helpful comments and discussions, I thank Michael Antony, Hagit Benbaji, Andy Egan, Adam Elga, Pete Graham, Ofer Malcai, Ariel Porat, Joseph Raz, Josh Schechter, Mark Schroeder, Nishi Shah, Uzi Segal, as well as two anonymous referees for Mind, and Mind’s editor. Earlier versions were presented at NYU, Rice, and Northwestern, and I thank the participants for the helpful discussions. I am especially grateful to Matt Kotzen, Andy Egan, and Adam Elga for saving me from an embarrassing mistake.

References

Antony, Louise (ed.) 2007: Philosophers without Gods: Meditations on Atheism and the Secular Life. Oxford: Oxford University Press.
Bogardus, Tomas 2009: 'A Vindication of the Equal-Weight View'. Episteme, 6, pp. 324–35.
Christensen, David 2007: 'Epistemology and Disagreement: The Good News'. Philosophical Review, 116, pp. 187–217.
Cohen, Stewart 2002: 'Basic Knowledge and the Problem of Easy Knowledge'. Philosophy and Phenomenological Research, 65, pp. 309–28.
Conee, Earl 2009: 'Peerage'. Episteme, 6, pp. 313–23.
Elga, Adam 2007: 'Reflection and Disagreement'. Noûs, 41, pp. 478–502.
Elga, Adam MS: 'How to Disagree about How to Disagree'.
Enoch, David 2007: 'Intending, Foreseeing, and the State'. Legal Theory, 13, pp. 69–99.
Enoch, David 2009: 'How Is Moral Disagreement a Problem for Realism?'. Journal of Ethics, 13, pp. 15–50.
Enoch, David 2010: 'How Objectivity Matters'. Oxford Studies in Metaethics, 5, pp. 111–52.
Enoch, David and Joshua Schechter 2008: 'How Are Basic Belief-forming Methods Justified?'. Philosophy and Phenomenological Research, 76, pp. 547–79.
Feldman, Richard 2006: 'Epistemological Puzzles about Disagreement'. In Hetherington 2006, pp. 216–36.
Feldman, Richard 2007: 'Reasonable Religious Disagreement'. In Antony 2007, pp. 194–214.
Feldman, Richard and Ted Warfield (eds) forthcoming: Disagreement. Oxford: Oxford University Press.
Foley, Richard 2001: Intellectual Trust in Oneself and Others. Cambridge: Cambridge University Press.
Frances, Bryan forthcoming: 'The Reflective Epistemic Renegade'. Philosophy and Phenomenological Research.
Hawthorne, John and Tamar Gendler-Szabo (eds) 2005: Oxford Studies in Epistemology, Vol. 1. Oxford: Oxford University Press.
Hetherington, Stephen (ed.) 2006: Epistemology Futures. Oxford: Oxford University Press.
Kelly, Thomas 2005: 'The Epistemic Significance of Disagreement'. In Hawthorne and Gendler-Szabo 2005, pp. 167–96.
Kelly, Thomas 2009: 'Evidentialism, Higher-Order Evidence, and Disagreement'. Episteme, 6, pp. 294–312.
Kelly, Thomas forthcoming: 'Peer Disagreement and Higher Order Evidence'. In Feldman and Warfield forthcoming. Page references are to the version at: <http://www.princeton.edu/∼tkelly/papers/Peer%20Disagreement%20and%20Higher%20Order%20Evidenc1.pdf>
Luper, Steven (ed.) 2003: The Sceptics: Contemporary Essays. Burlington: Ashgate Publishing.
Nagel, Thomas 1986: The View From Nowhere. Oxford: Oxford University Press.
Raz, Joseph 1998: 'Disagreement in Politics'. American Journal of Jurisprudence, 43, pp. 25–52.
Roush, Sherrilyn 2009: 'Second Guessing: A Self-Help Manual'. Episteme, 6, pp. 251–68.
Schroeder, Mark 2004: 'The Scope of Instrumental Reason'. Philosophical Perspectives, 18, pp. 337–64.
Schroeder, Mark 2007: Slaves of the Passions. Oxford: Oxford University Press.
Schroeder, Mark 2008: 'Having Reasons'. Philosophical Studies, 139, pp. 57–71.
Talbott, William 2008: 'Bayesian Epistemology'. Stanford Encyclopedia of Philosophy.
Van Cleve, James 2003: 'Is Knowledge Easy — or Impossible? Externalism as the Only Alternative to Scepticism'. In Luper 2003, pp. 45–60.
Vogel, Jonathan 2008: 'Epistemic Bootstrapping'. Journal of Philosophy, 105, pp. 518–39.
Weatherson, Brian MS: 'Disagreeing about Disagreement'.
Wedgwood, Ralph forthcoming: 'The Moral Evil Demons'. In Feldman and Warfield forthcoming.
White, Roger 2005: 'Epistemic Permissiveness'. Philosophical Perspectives, 19, pp. 445–59.
White, Roger 2009: 'On Treating Oneself and Others as Thermometers'. Episteme, 6, pp. 233–50.