Assistant Professor of Philosophy, Northeastern University

Current Research


A paper about statistical evidence

This essay defends the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical, rather than individualised, evidence. My argument combines Thomson's (1986) causal analysis of evidence with recent work in formal theories of causal inference. I show that legal standards of proof can be reduced to probabilities, but deriving these probabilities involves more than just statistics.
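As a rough illustration of the threshold picture (my gloss, not an excerpt from the paper): the civil 'preponderance of the evidence' standard is standardly modelled as a probability cutoff,

\[
\text{find the defendant liable} \iff \Pr(L \mid E) > 0.5,
\]

where \(E\) is the total admitted evidence and \(L\) the hypothesis of liability. The paper's contention is that \(\Pr(L \mid E)\) must be derived within a causal framework, not read off bare statistical frequencies.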


A paper about the right to privacy

Why does privacy matter? This essay defends the controversial thesis, put forward by Thomson (1975), that privacy as such does not matter. I explain how this thesis has distinctive normative implications for the design and regulation of information technologies we routinely rely on.


A paper about obedience to authority

How should we respond to the orders of legitimate practical authorities? Standard theories of legitimate authority hold that obedience should be an all-or-nothing affair. Upon being issued an order by an authority who we are certain is legitimate, we are rationally required to screen off our judgement and do as we are told. In this talk, I explore whether such theories can adequately account for situations in which we are uncertain about an authority’s legitimacy. I bring decision-theoretic concepts to bear on the problem, arguing that traditional conceptions of obedience to legitimate authority are untenable in cases of uncertainty. I suggest an alternative, decision-theoretic account of rational obedience, one that vindicates the so-called philosophical anarchist position of William Godwin (1793), who held that there is but one power to which we can yield obedience: the decision of our own understanding, the dictate of our own conscience.
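To give a flavour of the decision-theoretic framing (an illustrative sketch of mine, not the talk’s own formalism): let \(p\) be one’s credence that the authority is legitimate. The expected utility of obeying an order \(A\) is then

\[
EU(\text{obey}) = p \cdot U(A \mid \text{legitimate}) + (1 - p) \cdot U(A \mid \text{illegitimate}),
\]

so whenever \(p < 1\), whether to obey depends on how good or bad \(A\) would be under each hypothesis, rather than on the all-or-nothing screening off of judgement that standard theories require.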


A paper about game theory and social structures (co-author: Rory Smead)

Avoiding existential threats, like climate change, requires solving massive collective action problems involving many interdependent social systems. Standard theories of collective action ignore this interdependence and focus on identifying stable solutions to the collective action problem. In this paper, we depart from this approach in both respects. First, we present a novel game-theoretic model of interdependent collective action scenarios. Second, we not only identify stable solutions to these scenarios but also use evolutionary models to estimate the probability that such solutions can actually be reached. We show that taking these interdependencies into account reveals that the path to stable, collective improvements can be even narrower and more fraught than traditional game-theoretic analyses suggest. Our game-theoretic analysis is intended as a partial analysis of the challenges underlying real-world collective action, such as the climate crisis. While we offer no solutions, we hope to contribute to a clearer understanding of the problem.
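To give a concrete sense of the kind of evolutionary analysis involved, here is a minimal sketch in Python. It is not the paper’s actual model: the two-population structure, the payoff numbers, and the replicator-dynamics update are all illustrative assumptions of mine.

import numpy as np

# A minimal sketch of interdependent collective action: two populations
# play a stag hunt, and the payoff for cooperating in one population
# rises with the share of cooperators in the other.
# Payoff numbers are illustrative, not taken from the paper.

def payoffs(x_own, x_other):
    """Expected payoffs (cooperate, defect) given the cooperator share
    in one's own population (x_own) and in the other (x_other)."""
    coop = 4 * x_own * x_other   # stag: pays only with coordination across systems
    defect = 2                   # hare: safe, independent payoff
    return coop, defect

def replicator_step(x_own, x_other, dt=0.1):
    """One discrete-time replicator-dynamics update for the cooperator share."""
    coop, defect = payoffs(x_own, x_other)
    avg = x_own * coop + (1 - x_own) * defect
    return np.clip(x_own + dt * x_own * (coop - avg), 0.0, 1.0)

# Estimate how often random starting points reach mutual cooperation.
rng = np.random.default_rng(0)
basin_hits, trials = 0, 1000
for _ in range(trials):
    xA, xB = rng.uniform(size=2)
    for _ in range(500):
        xA, xB = replicator_step(xA, xB), replicator_step(xB, xA)
    basin_hits += (xA > 0.99 and xB > 0.99)

print(f"Reached full cooperation in {basin_hits / trials:.1%} of runs")

In a sketch like this, the basin of attraction for mutual cooperation shrinks because cooperating must pay off in both systems at once, which is the narrowing effect the paper examines in more general settings.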

Presented at ‘Small Acts, Big Harms Workshop on Individual Responsibility in Collectively Caused Harms’, June 2021, University of Helsinki (peer-reviewed).


A paper about moral aggregation

A major concern about utilitarian ethics is that it ignores wrongdoing that does not undermine aggregate social utility. This essay explores a way to incorporate deontic considerations into social utility aggregation, via Harsanyi’s aggregation theorem (1955).
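For context, a standard statement of the theorem (not the essay’s own formulation): if individual and social preferences all satisfy the expected-utility axioms and the social ordering respects Pareto indifference, then social utility is, up to an additive constant, a weighted sum of individual utilities,

\[
W(x) = \sum_{i=1}^{n} a_i \, U_i(x),
\]

and the essay asks how deontic considerations might be incorporated into an aggregation of this form.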

Presented at the first ‘Varieties of Risk’ Workshop, September 2021, hosted by the University of Edinburgh and University of Stirling (invited, online).


Proceed with Caution (co-author: Dr Annette Zimmermann)

Increasingly, decision-making in private and public institutions is carried out by predictive algorithmic systems rather than by humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.

Published as: Zimmermann, A. & Lee-Stronach, C. ‘Proceed with Caution’, Canadian Journal of Philosophy (2021) [early view, open access].