Decision Theory Without Luminosity (with Benjamin Levinstein), Mind
Our decision-theoretic states are not luminous. We are imperfectly reliable at identifying our own credences, utilities, and available acts, and thus can never be more than imperfectly reliable at identifying the prescriptions of decision theory. The lack of luminosity affords decision theory a remarkable opportunity—to issue guidance on the basis of epistemically inaccessible facts. We show how a decision theory can guarantee action in accordance with contingent truths about which an agent is arbitrarily uncertain. It may seem that such advantages would require dubiously adverting to externalist facts that go beyond the internalism of traditional decision theory, but this is not so. Using only the standard repertoire of decision-theoretic tools, we show how to modify existing decision theories to take advantage of this opportunity.
These improved decision theories require agents to maximize conditional expected utility—expected utility conditional upon an agent’s actual decision-situation. We call such modified decision theories “self-confident”. These self-confident decision theories have a distinct advantage over standard decision theories—their prescriptions are better.
Absolutism and its Limits (with John Hawthorne and Clayton Littlejohn), Journal of Moral Philosophy
Many philosophers think that given the choice between saving the life of an innocent person and averting many minor ailments or inconveniences, it would be better to save the life. Similarly, they think that given the choice between securing many minor or trifling goods whilst violating someone’s rights and respecting this person’s rights while missing out on these many minor goods, it would be better to forego the minor goods. These intuitions concern cases where stakes are certain. How can they be extended to cases where stakes are uncertain?
Lazar and Lee-Stronach (2019) contend that the value produced by trifling things is bounded. More is better, but only up to a particular limit. While we allow that their approach enjoys some advantages, we maintain that it suffers from some significant problems. We present a series of objections, and explore the prospects for revising this sort of model in light of them.
Evidential Decision Theory and the Ostrich (with Benjamin Levinstein), Philosophers' Imprint
Evidential Decision Theory is flawed, but its flaws are not fully understood. David Lewis (1981) famously charged that EDT recommends an irrational policy of managing the news and “commends the ostrich as rational”. Lewis was right, but the case he appealed to — NEWCOMB — does not demonstrate his conclusion. Indeed, decision theories other than EDT, such as Cohesive Decision Theory and Functional Decision Theory, agree with EDT's verdicts in NEWCOMB, but their flaws, whatever they may be, do not stem from any ostrich-like recommendations.
We offer a new case which shows that EDT mismanages the news, thus vindicating Lewis’s original charge. We argue that this case reveals a flaw in the “Why ain’cha rich?” defense of EDT. We argue further that this case is an advance on extant putative counterexamples to EDT.
Infelicitous Conditionals and KK (with John Hawthorne), Mind
Kevin Dorst (2019) uses the ‘manifest unassertability’ of conditionals of the form ‘If I don’t know p, then p’ as a new motivation for the KK thesis. In this paper we argue that his argumentation is misguided. Plausible heuristics offer a compelling and nuanced explanation of the relevant infelicity data. Meanwhile, Dorst relies on tools that, quite independently of KK, turn out to be rather poor predictors of the infelicity of indicative conditionals.
Counting Your Chickens (with Adam Lerner and Jeffrey Sanford Russell), Australasian Journal of Philosophy
Suppose that, for reasons of animal welfare, it would be better if everyone stopped eating chicken. Does it follow that you should stop eating chicken? Proponents of the “inefficacy objection” argue that, due to the scale and complexity of markets, the expected effects of your chicken purchases are negligible. So the expected effects of eating chicken do not make it wrong.
We argue that this objection does not succeed, in two steps. First, empirical data about chicken production tells us that the expected effects of consuming *many* chickens are not negligible. Second, this implies that the expected effect of consuming one chicken is ordinarily not negligible. *Parity* between your purchase and other counterfactual purchases and *uncertainty* about others’ consumption behavior each tend to pull the expected effect of a single purchase toward the average large scale effect. While some purchases do have negligible expected effects, many do not.
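The parity-and-uncertainty mechanism lends itself to a toy Monte Carlo sketch (the batch size and demand model here are hypothetical illustrations, not figures from the paper). Suppose producers adjust supply only in whole batches, and you are uncertain where others' total demand falls relative to the next batch threshold:

```python
import random

def expected_effect(batch_size=25, trials=100_000, seed=0):
    """Toy model: producers supply chickens in batches of `batch_size`;
    others' demand is equally likely to sit anywhere within a batch.
    Your one extra purchase triggers an extra batch exactly when demand
    sits just below a threshold."""
    rng = random.Random(seed)
    total_extra = 0
    for _ in range(trials):
        position = rng.randrange(batch_size)  # demand's position within a batch
        if position == batch_size - 1:        # your purchase crosses the threshold
            total_extra += batch_size         # a whole extra batch is produced
    return total_extra / trials

print(expected_effect())  # close to 1.0: about one chicken per purchase
```

A threshold is crossed with probability 1/batch_size, and each crossing adds batch_size chickens, so the expected effect of a single purchase is roughly one chicken: low-probability, large-scale effects do not wash out to a negligible expectation.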
Updating Without Evidence (with Jeffrey Sanford Russell), Noûs
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of *evidence*—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected-accuracy-based justification for planning to *conditionalize* on this evidence. This planning-oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non-trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
Multiple Universes and Self-Locating Evidence (with John Hawthorne and Jeffrey Sanford Russell), Philosophical Review
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Hacking (1987) and White (2000) influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse (at least in a suitably idealized setting). This convergence is no accident: we present two theorems showing that in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
The Rationality of Epistemic Akrasia (with John Hawthorne and Maria Lasonen-Aarnio), Philosophical Perspectives
The epistemic akratic either possesses a belief which she believes it is rationally forbidden to possess, or lacks a belief which she believes it is rationally forbidden to lack. The thesis that all epistemically akratic states are irrational is subject to counterexamples. We shall, in part one below, present various families of counterexamples. Some proponents of anti-akrasia principles concede the existence of isolated counterexamples that they hope to circumscribe, thereby preserving the irrationality of epistemic akrasia in all but a certain special class of cases. Against this background, the point we want to make is that counterexamples are pervasive, and have various distinct sources. In part two, we look at two positive lines of argument for anti-akrasia principles. In part three, we look at a strategy for keeping the anti-akratic sensibility alive in the light of the examples presented in part one, a strategy that appeals to idealization. All told, the case against anti-akratic principles is surprisingly strong and the case for anti-akratic principles is surprisingly weak.
Non-Measurability, Imprecise Credences, and Imprecise Chances (with Alan Hájek and John Hawthorne), Mind
Orthodox Bayesianism models an agent's epistemic state with a probability function. Each proposition to which the agent has a doxastic attitude is assigned a real number in the unit interval. But a heterodox probabilistic epistemology has also been developed in which each proposition to which the agent has a doxastic attitude is assigned a range of real numbers. Orthodox Bayesianism requires agents to have precise credences, while heterodox probabilistic epistemology allows agents to have imprecise credences.
This paper offers a new motivation for imprecise credences, one based on the mathematical phenomenon of non-measurable sets. Given natural constraints, not all propositions can receive precise credences. So if precise credences are the only credences allowed, some propositions will be left out. Given analogous constraints, not all propositions can receive precise chances. So if precise chances are the only chances allowed, some propositions will be left out. And leaving propositions out in this way poses myriad difficulties. But if imprecise credences and imprecise chances are allowed, then no propositions need be left out. The framework of imprecise credences and imprecise chances thus has a major, heretofore unappreciated advantage.
Statistical Evidence and Incentives in the Law (with John Hawthorne and Vishnu Sridharan), Noûs Supplement: Philosophical Issues
Is it bad practice to use statistical evidence as the basis for a finding of guilt or liability? And if it is bad practice, why is it bad practice? This paper focuses on the incentivizing aspects of candidate legal systems in civil cases. Incentives clearly matter––a legal system which encouraged bad behavior could easily be bad, even if it reliably penalized bad behavior and did not penalize good behavior. And the use of statistical evidence does have implications for incentives.
In Section 1, we explain in some detail what incentives there are in various classic cases from the statistical evidence literature. In Section 2, we examine the relationship between incentives and epistemological sensitivity. In Section 3, we discuss an important class of cases – toxic torts – and explain why they provide a strong incentive-based case for findings of liability based on statistical evidence.
A Probabilistic Analysis of Title IX Reforms (with Jason Iuliano), Journal of Political Philosophy
In 2011, the Office for Civil Rights made substantial changes to the regulations governing campus sexual assault investigations. These changes were the subject of significant controversy, and in 2017 the Department of Education issued further guidance, contravening some—but not all—of the 2011 reforms. In light of this action, regulations governing campus sexual assault investigations continue to be the focus of intense debate, and their future is far from certain. Despite this sharp disagreement between supporters and opponents of the reforms, a general consensus has emerged on one key aspect: The 2011 reforms unequivocally benefited sexual assault victims and unequivocally harmed sexual assault perpetrators. In this article, we challenge that consensus.
Drawing upon insights from Bayesian epistemology, we argue that the true effects of the 2011 reforms were far from uniform. In certain situations, accusers benefited, but in other situations, those accused of sexual assault benefited. Although this result may seem evenly balanced, the precise distribution of benefits and harms is concerning. Specifically, our analysis reveals that the benefits were most likely to accrue to guilty defendants and lying accusers and that the harms were most likely to fall upon innocent defendants and truth-telling accusers. This outcome runs counter to the goals of any just or reasonable adjudicatory system and calls into question the efficacy of the campus sexual assault reforms. In addition, these same findings indicate that certain aspects of the 2011 reforms left in place—and in some cases accentuated—by the 2017 guidance may make it even more difficult for victims to obtain justice.
Solving a Paradox of Evidential Equivalence (with Cian Dorr and John Hawthorne), Mind
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
Infinite Prospects (with Jeffrey Sanford Russell), Philosophy and Phenomenological Research
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic—but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage’s Sure Thing Principle, which we call the Countable Sure Thing Principle, as well as an infinitary generalization of von Neumann and Morgenstern’s Independence axiom, which we call Countable Independence. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
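The valuation point admits a minimal numerical sketch (an illustration, not the paper's formalism): the St. Petersburg gamble pays 2^k utility with probability 2^-k, so each outcome contributes exactly 1 to the expectation, and the partial expected values grow without bound:

```python
def st_petersburg_partial_ev(n):
    """Expected payoff from the first n outcomes of the St. Petersburg
    gamble: payoff 2**k with probability 2**-k, so each term equals 1."""
    return sum((2 ** k) * (2 ** -k) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_ev(n))  # each partial sum equals n exactly
```

Since the partial sums exceed every finite bound, no value that any outcome could possibly have can be assigned to the gamble, which is the heart of the problem for such preferences.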
The Fallacy of Calibrationism, Philosophy and Phenomenological Research
How should an agent respond to information about the reliability of her judgments? Various philosophers have argued for various versions of calibrationism, a view according to which an agent's credences should correspond to the (suitably defined) expected reliabilities of her judgments. Calibrationism gives intuitively reasonable verdicts, and it applies straightforwardly even when an agent is worried that her judgments may be flawed. Because of these advantages, even philosophers who don't want to endorse calibrationism in full generality are often inclined to endorse its verdicts in a wide array of cases.

But calibrationism is misguided. Calibrationism relies on the base-rate fallacy, a classic mistake in probabilistic epistemology. Thus while calibrationism is intuitive, it cannot be correct.
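The base-rate point can be made concrete with a standard Bayes'-theorem computation (the numbers are illustrative, not drawn from the paper). A judgment method that is right 90% of the time, applied in a domain where only 1% of candidate propositions are true, warrants far less than 0.9 credence:

```python
def posterior(prior, sensitivity, specificity):
    """P(H | the method judges H), by Bayes' theorem."""
    true_positive = prior * sensitivity
    false_positive = (1 - prior) * (1 - specificity)
    return true_positive / (true_positive + false_positive)

# 90%-reliable judgments about propositions with a 1% base rate of truth:
print(posterior(prior=0.01, sensitivity=0.9, specificity=0.9))  # about 0.083
```

Setting one's credence to the method's reliability (0.9) rather than to the posterior (about 0.083) ignores the prior: that is the base-rate fallacy.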
Permissivism, Margin-for-Error, and Dominance (with John Hawthorne), Philosophical Studies
Ginger Schultheis offers a novel and interesting argument against epistemic permissivism. While we think that her argument is ultimately uncompelling, we think its faults are instructive. Her thought-provoking discussion points to a range of interesting issues that are eminently worthy of attention. The aim of this paper is not simply to point out issues with her paper; it is to explore the territory opened up by her reasoning.
The Problems of Transformative Experience, Philosophical Studies
Laurie Paul has recently argued that transformative experiences pose a problem for decision theory. According to Paul, agents facing transformative experiences do not possess the states required for decision theory to formulate its prescriptions. Agents facing transformative experiences are impoverished relative to their decision problems, and decision theory doesn't know what to do with impoverished agents.
Richard Pettigrew takes Paul's challenge seriously. He grants that decision theory (in its traditional state) cannot handle decision problems involving transformative experiences. To deal with the problems posed by transformative experiences, Pettigrew proposes two alterations to decision theory. The first alteration is meant to handle the problem posed by epistemically transformative experiences, and the second alteration is meant to handle the problem posed by personally transformative experiences.
I argue that Pettigrew's proposed alterations are untenable. Pettigrew's novel decision theory faces both formal and philosophical problems. It is doubtful that Pettigrew can formulate the sort of decision theory he wants, and further doubtful that he should want such a decision theory in the first place. Moreover, the issues with Pettigrew's proposed alterations help reveal issues with Paul's initial challenge to decision theory. I suggest that transformative experiences should not be taken to pose a problem for decision theory, but should instead be taken to pose a topic for ethics.
A Patchwork Epistemology of Disagreement?, Philosophical Studies
The epistemology of disagreement standardly divides conciliationist views from steadfast views. But both sorts of views are subject to counterexample; indeed, both sorts of views are subject to the same counterexample. After presenting this counterexample, I explore how the epistemology of disagreement should be reconceptualized in light of it.
Accounting for Intrinsic Values in the Federal Student Loan System (with Jason Iuliano), in The Handbook of Philosophy and Public Policy
If student loans were given so that lenders could make a profit, then a borrower's expected ability to repay a loan would be the sole determinant of what kind of loan the borrower would be offered. But this is not so. Instead, the US government issues student loans with the intention of benefiting society—and in particular, of benefiting the loan recipients themselves. While some of this benefit is expressed in higher earning potential, some of it is not. We argue that there are sorts of work that are humbly paid not because they are of little value, but instead because they are of so much value. We contend that student loans should be orchestrated so as to facilitate such noble endeavors.
Fine-Tuning Fine-Tuning (with John Hawthorne), in Knowledge, Belief, and God
The laws of physics are unexpectedly hospitable to life. This fine-tuning of the fundamental constants is substantially more likely given the existence of God than it is given the non-existence of God, and is thus strong evidence that there is a God. The basic idea of the fine-tuning argument is thus simple and legitimate; its status is more controversial than it ought to be.
We formulate the fine-tuning argument using the machinery of Bayesian probability theory. After some scene setting, we sketch what we take to be a promising way of developing the fine-tuning argument, which we dub the “core argument”. Additional detail and explanation are supplied as we engage with a series of potential concerns about the argument so sketched. Along the way, we rebut a recent critique of the fine-tuning argument from Jonathan Weisberg and also rebut a range of critiques that are common in the popular and scientific literature. We finally turn to atheistic replies that concede the lessons of the core argument, but which attempt to find a rational home for atheism within its scope. We believe this to be the most promising approach for the atheist.
Misapprehensions About the Fine-Tuning Argument (with John Hawthorne), Royal Institute of Philosophy Supplement
The fine-tuning argument purports to show that particular aspects of fundamental physics provide evidence for the existence of God. The argument is legitimate, yet doubts about its legitimacy abound, and many of the misgivings rest on misunderstandings. In this paper we go over several major misapprehensions (from both popular and philosophical sources), and explain why they do not undermine the basic cogency of the fine-tuning argument.
Probabilities Cannot be Rationally Neglected, Mind
I argue that probabilities cannot be rationally neglected. I show that Nicholas Smith’s proposal for ignoring low-probability outcomes must, on pain of violating dominance reasoning, license taking enormous risks for arbitrarily little reward.
Evil and Evidence (with Matthew Benton and John Hawthorne), Oxford Studies in Philosophy of Religion
The problem of evil is the most prominent argument against the existence of God. Skeptical theists contend that it is not a good argument. Their reasons for this contention vary widely, involving such notions as CORNEA, epistemic appearances, 'gratuitous' evils, 'levering' evidence, and the representativeness of goods. We aim to dispel some confusions about these notions, in particular by clarifying their roles within a probabilistic epistemology. In addition, we develop new responses to the problem of evil from both the phenomenal conception of evidence and the knowledge-first view of evidence.
Duty and Knowledge, Philosophical Perspectives
Deontological ethics needs a decision theory. But traditional decision theory is unsuitable for specifying when an agent may perform an action which may or may not violate a deontological scruple. I show that a novel decision theory, one based on knowledge-first epistemology, suits deontological ethics much better.
A New Prospect for Epistemic Aggregation (with Daniel Berntson), Episteme
How should the opinion of a group be related to the opinions of the group members? In this article, we defend a package of four norms governing the aggregation of credal pairs: pairs of prior probabilities and evidence. We show that there is a method of aggregating credal pairs that satisfies all four norms.