Assistant Professor of Philosophy, Northeastern University

Published Research

The Problem of Ignorance

Philosophical Quarterly (2021) 71 (2): 334-358.

Non-Consequentialist moral theories posit the existence of moral constraints: prohibitions on performing particular kinds of wrongful acts, regardless of the good those acts could produce. Many believe that such theories cannot give satisfactory verdicts about what we morally ought to do when there is some probability that we will violate a moral constraint. In this article, I defend Non-Consequentialist theories from this critique. Using a general choice-theoretic framework, I identify various types of Non-Consequentialism that have otherwise been conflated in the debate. I then prove a number of formal possibility and impossibility results establishing which types of Non-Consequentialism can -- and which cannot -- give us adequate guidance through a risky world.

Proceed with Caution

Canadian Journal of Philosophy (Special Issue: Political Philosophy and Artificial Intelligence)

Decision-makers in private and public institutions are increasingly predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.

Published as: Zimmermann, A. & Lee-Stronach, C. ‘Proceed with Caution’, Canadian Journal of Philosophy (2021) [early view, open access].

Morality, Uncertainty

Ethics (2020) 130 (2): 211-227.

Holly Smith (2014) contends that subjective deontological theories – those that hold that our moral duties are sensitive to our beliefs about our situation – cannot correctly determine whether one ought to gather more information before acting. Against this contention, I argue that deontological theories can use a decision-theoretic approach to evaluating the moral importance of information. I then argue that this approach compares favourably with an alternative approach proposed by Philip Swenson (2016).


Axiological Absolutism and Risk

Noûs (2019) 53 (1): 97-113. (co-author: Seth Lazar)

Consider the following claim: given the choice between saving a life and preventing any number of people from temporarily experiencing a mild headache, you should always save the life. Many moral theorists accept this claim. In doing so, they commit themselves to some form of ‘moral absolutism’: the view that there are some moral considerations that cannot be outweighed by any number of lesser moral considerations. In contexts of certainty, it is clear what moral absolutism requires of you. However, what does it require of you when deciding under risk? What ought you to do when there is a chance that, say, you will not succeed in saving the life? In recent years, various critics have argued that moral absolutism cannot satisfactorily deal with risk and should, therefore, be abandoned. In this paper, we show that moral absolutism can answer its critics by drawing on—of all things—orthodox expected utility theory.


Moral Priorities Under Risk

Canadian Journal of Philosophy (2018) 48 (6): 793-811.

Many moral theories are committed to the idea that some kinds of moral considerations should be respected, whatever the cost to ‘lesser’ types of considerations. A person’s life, for instance, should not be sacrificed for the trivial pleasures of others, no matter how many would benefit. However, according to the decision-theoretic critique of lexical priority theories, accepting lexical priorities inevitably leads us to make unacceptable decisions in risky situations. It seems that to operate in a risky world, we must reject lexical priorities altogether. This paper argues that lexical priority theories can, in fact, offer satisfactory guidance in risky situations. It does so by equipping lexical priority theories with overlooked resources from decision theory.