Statistical Evidence and Incentives in the Law (with John Hawthorne and Vishnu Sridharan), Noûs Supplement: Philosophical Issues
Is it bad practice to use statistical evidence as the basis for a finding of guilt or liability? And if it is bad practice, why is it bad practice? This paper focuses on the incentivizing aspects of candidate legal systems in civil cases. Incentives clearly matter––a legal system which encouraged bad behavior could easily be bad, even if it reliably penalized bad behavior and did not penalize good behavior. And the use of statistical evidence does have implications for incentives.
In Section 1, we explain in some detail what incentives there are in various classic cases from the statistical evidence literature. In Section 2, we examine the relationship between incentives and epistemological sensitivity. In Section 3, we discuss an important class of cases – toxic torts – and explain why they provide a strong incentive-based case for findings of liability based on statistical evidence.
A Probabilistic Analysis of Title IX Reforms (with Jason Iuliano), Journal of Political Philosophy
In 2011, the Office for Civil Rights made substantial changes to the regulations governing campus sexual assault investigations. These changes were the subject of significant controversy, and in 2017 the Department of Education issued further guidance, contravening some—but not all—of the 2011 reforms. In light of this action, regulations governing campus sexual assault investigations continue to be the focus of intense debate, and their future is far from certain. Despite this sharp disagreement between supporters and opponents of the reforms, a general consensus has emerged on one key aspect: The 2011 reforms unequivocally benefited sexual assault victims and unequivocally harmed sexual assault perpetrators. In this Article, we challenge that consensus.
Drawing upon insights from Bayesian epistemology, we argue that the true effects of the 2011 reforms were far from uniform. In certain situations, accusers benefited, but in other situations, those accused of sexual assault benefited. Although this result may seem evenly balanced, the precise distribution of benefits and harms is concerning. Specifically, our analysis reveals that the benefits were most likely to accrue to guilty defendants and lying accusers and that the harms were most likely to fall upon innocent defendants and truth-telling accusers. This outcome runs counter to the goals of any just or reasonable adjudicatory system and calls into question the efficacy of the campus sexual assault reforms. In addition, these same findings indicate that certain aspects of the 2011 reforms left in place—and in some cases accentuated—by the 2017 guidance may make it even more difficult for victims to obtain justice.
Solving a Paradox of Evidential Equivalence (with Cian Dorr and John Hawthorne), Mind
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
Infinite Prospects (with Jeff Russell), Philosophy and Phenomenological Research
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic—but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage’s Sure Thing Principle, which we call the Countable Sure Thing Principle, as well as an infinitary generalization of von Neumann and Morgenstern’s Independence axiom, which we call Countable Independence. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
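The point that the St. Petersburg gamble has no value that any outcome could have can be seen in the divergence of its expectation. A minimal sketch (illustrative only, not the paper's argument): the gamble pays 2^n if the first heads occurs on flip n, which happens with probability 2^-n, so every term of the expected-value sum contributes exactly 1 and the partial sums grow without bound.

```python
# St. Petersburg gamble: a fair coin is flipped until it lands heads;
# if the first heads is on flip n, the payout is 2**n, which happens
# with probability 2**-n. Each term contributes 2**n * 2**-n = 1, so
# the partial sums of the expectation grow without bound.

def partial_expected_value(n_terms):
    """Sum of the first n_terms terms of the St. Petersburg expectation."""
    return sum((2 ** n) * (2 ** -n) for n in range(1, n_terms + 1))

for k in (10, 100, 1000):
    print(k, partial_expected_value(k))  # each partial sum equals k
```

Since every finite outcome has some finite value, no outcome's value can match a prospect whose expectation exceeds every finite bound.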
The Fallacy of Calibrationism, Philosophy and Phenomenological Research
How should an agent respond to information about the reliability of her judgments? Various philosophers have argued for various versions of calibrationism, a view according to which an agent's credences should correspond to the (suitably defined) expected reliabilities of her judgments. Calibrationism gives intuitively reasonable verdicts, and it applies straightforwardly even when an agent is worried that her judgments may be flawed. Because of these advantages, even philosophers who don't want to endorse calibrationism in full generality are often inclined to endorse its verdicts in a wide array of cases. But calibrationism is misguided. Calibrationism relies on the base-rate fallacy, a classic mistake in probabilistic epistemology. Thus while calibrationism is intuitive, it cannot be correct.
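The base-rate fallacy can be illustrated with a quick Bayesian calculation (hypothetical numbers, not the paper's own example): a judgment that is "90% reliable" in the sense of its hit and false-positive rates can still warrant a far lower credence once the base rate is taken into account.

```python
# Illustrative base-rate calculation (hypothetical numbers).
# Suppose P(judge positive | true) = 0.9 and
# P(judge positive | false) = 0.1, but only 1% of cases are true.

def posterior(prior, true_pos_rate, false_pos_rate):
    """P(hypothesis | positive judgment) via Bayes' theorem."""
    p_positive = true_pos_rate * prior + false_pos_rate * (1 - prior)
    return true_pos_rate * prior / p_positive

p = posterior(prior=0.01, true_pos_rate=0.9, false_pos_rate=0.1)
print(round(p, 3))  # roughly 0.083 -- far below the 0.9 "reliability"
```

Matching one's credence to the 0.9 reliability figure, rather than to the Bayesian posterior, is precisely the base-rate fallacy.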
Permissivism, Margin-for-Error, and Dominance (with John Hawthorne), Philosophical Studies
Ginger Schultheis offers a novel and interesting argument against epistemic permissivism. While we think that her argument is ultimately uncompelling, we think its faults are instructive. Her thought-provoking discussion points to a range of interesting issues that are eminently worthy of attention. The aim of this paper is not simply to point out issues with her paper; it is to explore the territory opened up by her reasoning.
The Problems of Transformative Experience, Philosophical Studies
Laurie Paul has recently argued that transformative experiences pose a problem for decision theory. According to Paul, agents facing transformative experiences do not possess the states required for decision theory to formulate its prescriptions. Agents facing transformative experiences are impoverished relative to their decision problems, and decision theory doesn't know what to do with impoverished agents.
Richard Pettigrew takes Paul's challenge seriously. He grants that decision theory (in its traditional state) cannot handle decision problems involving transformative experiences. To deal with the problems posed by transformative experiences, Pettigrew proposes two alterations to decision theory. The first alteration is meant to handle the problem posed by epistemically transformative experiences, and the second alteration is meant to handle the problem posed by personally transformative experiences.
I argue that Pettigrew's proposed alterations are untenable. Pettigrew's novel decision theory faces both formal and philosophical problems. It is doubtful that Pettigrew can formulate the sort of decision theory he wants, and further doubtful that he should want such a decision theory in the first place. Moreover, the issues with Pettigrew's proposed alterations help reveal issues with Paul's initial challenge to decision theory. I suggest that transformative experiences should not be taken to pose a problem for decision theory, but should instead be taken to pose a topic for ethics.
A Patchwork Epistemology of Disagreement?, Philosophical Studies
The epistemology of disagreement standardly divides conciliationist views from steadfast views. But both sorts of views are subject to counterexample--indeed, to the same counterexample. After presenting this counterexample, I explore how the epistemology of disagreement should be reconceptualized in light of it.
Accounting for Intrinsic Values in the Federal Student Loan System (with Jason Iuliano), in The Handbook of Philosophy and Public Policy
If student loans were given so that lenders could make a profit, then a borrower's expected ability to repay a loan would be the sole determinant of what kind of loan the borrower would be offered. But this is not so. Instead, the US government issues student loans with the intention of benefiting society––and in particular, of benefiting the loan recipients themselves. While some of this benefit is expressed in higher earning potential, some of it is not. We argue that there are sorts of work that are humbly paid not because they are of little value, but instead because they are of so much value. We contend that student loans should be orchestrated so as to facilitate such noble endeavors.
Fine-Tuning Fine-Tuning (with John Hawthorne), in Knowledge, Belief, and God
The laws of physics are unexpectedly hospitable to life. This fine-tuning of the fundamental constants is substantially more likely given the existence of God than it is given the non-existence of God, and is thus strong evidence that there is a God. Thus the basic idea of the fine-tuning argument is simple and legitimate; its status is more controversial than it ought to be.
We formulate the fine-tuning argument using the machinery of Bayesian probability theory. After some scene setting, we will sketch what we take to be a promising way of developing the fine-tuning argument, which we dub the “core argument”. Additional detail and explanation are supplied as we engage with a series of potential concerns about the argument so sketched. Along the way, we rebut a recent critique of the fine-tuning argument from Jonathan Weisberg and also rebut a range of critiques that are common in the popular and scientific literature. We finally turn to atheistic replies that concede the lessons of the core argument, but which attempt to find a rational home for atheism within its scope. We believe this to be the most promising approach for the atheist.
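The Bayesian skeleton of such an argument can be sketched in a few lines (all numbers hypothetical, chosen only to show the mechanics): evidence F ("the constants are life-permitting") confirms hypothesis G ("God exists") exactly when P(F | G) > P(F | ~G), and the posterior odds are the prior odds multiplied by that likelihood ratio.

```python
# Bayes-factor skeleton of a fine-tuning-style argument.
# All probability values below are hypothetical placeholders.

def posterior_odds(prior_odds, p_f_given_g, p_f_given_not_g):
    """Posterior odds on G after learning F: prior odds times the Bayes factor."""
    return prior_odds * (p_f_given_g / p_f_given_not_g)

# E.g. prior odds of 1:9 against G, P(F|G) = 0.5, P(F|~G) = 0.001:
odds = posterior_odds(prior_odds=1 / 9, p_f_given_g=0.5, p_f_given_not_g=0.001)
probability = odds / (1 + odds)
print(round(probability, 3))  # roughly 0.982
```

The structure makes clear where the substantive disputes live: in the prior odds and in the two conditional probabilities, not in the update rule itself.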
Misapprehensions About the Fine-Tuning Argument (with John Hawthorne), Philosophy
The fine-tuning argument purports to show that particular aspects of fundamental physics provide evidence for the existence of God. We contend that this argument is legitimate, yet doubts about its legitimacy abound, and many misgivings about it rest on misunderstandings. In this paper we go over several major misapprehensions (from both popular and philosophical sources) and explain why they do not undermine the basic cogency of the fine-tuning argument.
Probabilities Cannot be Rationally Neglected, Mind
I argue that probabilities cannot be rationally neglected. I show that Nicholas Smith’s proposal for ignoring low-probability outcomes must, on pain of violating dominance reasoning, license taking enormous risks for arbitrarily little reward.
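The structural worry behind neglecting low-probability outcomes can be sketched numerically (hypothetical threshold and stakes, not Smith's own formulation or the paper's example): if outcomes below a probability threshold may be ignored, a gamble that loses a fortune with probability just under that threshold "looks like" a sure small gain even when its true expected value is enormously negative.

```python
# Sketch of the worry about neglecting low-probability outcomes
# (threshold and payoffs are hypothetical).

def naive_value(outcomes, threshold):
    """Expected value after discarding outcomes below the threshold."""
    return sum(p * v for p, v in outcomes if p >= threshold)

def true_value(outcomes):
    """Ordinary expected value, with nothing neglected."""
    return sum(p * v for p, v in outcomes)

t = 1e-6
gamble = [(1 - 0.9e-6, 1.0),        # tiny reward, almost always
          (0.9e-6, -10_000_000.0)]  # enormous loss, just below t

print(naive_value(gamble, t))  # looks like roughly +1
print(true_value(gamble))      # actually negative
```

An agent who neglects the sub-threshold outcome thus accepts an enormous risk for an arbitrarily small reward.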
Evil and Evidence (with Matthew Benton and John Hawthorne), Oxford Studies in Philosophy of Religion
The problem of evil is the most prominent argument against the existence of God. Skeptical theists contend that it is not a good argument. Their reasons for this contention vary widely, involving such notions as CORNEA, epistemic appearances, 'gratuitous' evils, 'levering' evidence, and the representativeness of goods. We aim to dispel some confusions about these notions, in particular by clarifying their roles within a probabilistic epistemology. In addition, we develop new responses to the problem of evil from both the phenomenal conception of evidence and the knowledge-first view of evidence.
Duty and Knowledge, Philosophical Perspectives
Deontological ethics needs a decision theory. But traditional decision theory is unsuitable for specifying when an agent may perform an action which may or may not violate a deontological scruple. I show that a novel decision theory--one that is based on knowledge-first epistemology––suits deontological ethics much better.
A New Prospect for Epistemic Aggregation (with Daniel Berntson), Episteme
How should the opinion of a group be related to the opinions of the group members? In this article, we defend a package of four norms governing the aggregation of credal pairs: pairs of prior probabilities and evidence. We show that there is a method of aggregating credal pairs that satisfies all four norms.
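One standard motivation for aggregating prior-plus-evidence pairs rather than bare credences can be shown in a few lines (an illustrative, well-known fact, not necessarily the paper's own method): linear pooling of credences does not commute with Bayesian updating, so pooling the members' current opinions and pooling their priors before updating yield different group opinions.

```python
# Linear pooling does not commute with Bayesian conditionalization.

def pool(c1, c2):
    """Equal-weight linear pool of two credence distributions."""
    return [(a + b) / 2 for a, b in zip(c1, c2)]

def update(credences, likelihoods):
    """Bayesian conditionalization on shared likelihoods."""
    joint = [c * l for c, l in zip(credences, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

a, b = [0.8, 0.2], [0.2, 0.8]  # two members' priors on H1, H2
like = [0.5, 0.1]              # shared likelihoods of evidence E

pool_then_update = update(pool(a, b), like)
update_then_pool = pool(update(a, like), update(b, like))
print(pool_then_update)  # about [0.833, 0.167]
print(update_then_pool)  # about [0.754, 0.246] -- a different opinion
```

Aggregating the pairs (prior, evidence) and updating afterwards sidesteps this order-dependence, which is one reason to take credal pairs as the objects of aggregation.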