Published

  1. “A Computable von Neumann-Morgenstern Representation Theorem” (2025). Forthcoming in Synthese.

Abstract: Real agents have computational limitations. Disciplines ranging from bounded rationality to cognitive science represent these limitations mathematically via some computational model, such as a Turing machine. But our commonly accepted foundations for decision theory, such as representation theorems, do not say under what conditions an agent can be represented as having utility and probability functions that are computable by a Turing machine. This paper gives such conditions by supplementing von Neumann and Morgenstern’s axioms; I also prove that there is an algorithm which, when given an agent’s preference relation as input, computes a utility function that represents the agent.
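
A minimal sketch of the classical construction behind such a theorem (background, not the paper's algorithm): assuming the von Neumann-Morgenstern axioms, a best and a worst outcome normalized to utilities 1 and 0, and a hypothetical oracle strictly_prefers_mix(x, p) deciding whether the agent strictly prefers outcome x to the lottery yielding the best outcome with probability p, the utility u(x) is the unique indifference weight and can be approximated by bisection. What happens when strict preference is only semi-decidable is the kind of subtlety the paper itself must address.

```python
from fractions import Fraction

def utility(strictly_prefers_mix, x, n_bits=20):
    """Approximate u(x) in [0, 1] to within 2**-n_bits by bisection.

    Assumes the vNM axioms, so u(x) is the unique p with
    x ~ p*best + (1-p)*worst.  `strictly_prefers_mix` is a
    hypothetical oracle for the agent's strict preference.
    """
    lo, hi = Fraction(0), Fraction(1)
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if strictly_prefers_mix(x, mid):
            lo = mid  # x beats the mixture, so u(x) > mid
        else:
            hi = mid  # the mixture is at least as good, so u(x) <= mid
    return (lo + hi) / 2

# Toy agent whose true utility for x is 3/10:
oracle = lambda x, p: p < Fraction(3, 10)
assert abs(utility(oracle, "x") - Fraction(3, 10)) < Fraction(1, 2**20)
```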

  2. “Cartesian Frames”, with Scott Garrabrant and Daniel Herrmann (2021). arXiv.

Under Review

  1. “Computable Qualitative Probability”

Abstract: Theories of qualitative probability provide a justification for the use of numerical probabilities to represent an agent’s degrees of belief. If a qualitative probability relation satisfies a set of well-known axioms, then there is a probability measure that is compatible with that relation. In the particular case of subjective probability, this means that we have sufficient conditions for representing an agent as having probabilistic beliefs. But the classical results are not constructive; there is in general no method for calculating the compatible measure from the qualitative relation. To address this problem, this paper introduces the theory of computable qualitative probability. I show that there is an algorithm that computes a probability measure from a qualitative relation in highly general circumstances. Moreover, I show that, given a natural computability requirement on the qualitative relation, the resulting probability measure is also computable. Since computable probability is of growing interest in Bayesian epistemology, this result provides a valuable interpretation of that notion.
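
As a toy illustration of compatibility (finite case only; the paper's algorithm covers far more general settings and is not reproduced here), finding a measure that agrees with a qualitative relation on a finite algebra can be phrased as a linear program over the atom probabilities. The oracle at_most_as_likely and the margin for strict comparisons are assumptions of this sketch.

```python
import itertools
from scipy.optimize import linprog  # assumes SciPy is available

def compatible_measure(atoms, at_most_as_likely, margin=1e-6):
    """Find atom probabilities compatible with a qualitative relation
    on a finite algebra.  `at_most_as_likely(A, B)` is a hypothetical
    oracle: True iff event A is judged at most as likely as event B.
    Events are frozensets of atoms.  Returns None if incompatible.
    """
    atoms = list(atoms)
    events = [frozenset(s) for r in range(len(atoms) + 1)
              for s in itertools.combinations(atoms, r)]

    def row(A, B):  # coefficients of P(A) - P(B) over atom weights
        return [(a in A) - (a in B) for a in atoms]

    A_ub, b_ub = [], []
    for A, B in itertools.permutations(events, 2):
        if not at_most_as_likely(A, B):
            continue
        strict = not at_most_as_likely(B, A)
        A_ub.append(row(A, B))             # P(A) - P(B) <= 0, or < 0
        b_ub.append(-margin if strict else 0.0)

    res = linprog(c=[0.0] * len(atoms),
                  A_ub=A_ub or None, b_ub=b_ub or None,
                  A_eq=[[1.0] * len(atoms)], b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * len(atoms), method="highs")
    return dict(zip(atoms, res.x)) if res.success else None
```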

  2. “Computable Bayesian Epistemology”

Abstract: Bayesian epistemology is broadly concerned with providing norms for rational belief and learning using the mathematics of probability theory. But many authors have worried that the theory is too idealized to accurately describe real agents. In this paper I argue that Bayesian epistemology can describe more realistic agents while retaining sufficient generality by introducing ideas from a branch of mathematics called computable analysis. I call this program computable Bayesian epistemology. I situate this program by contrasting it with an ongoing debate about ideal versus bounded rationality. I then present foundational ideas from computable analysis and demonstrate their usefulness by proving the main result: on countably generated spaces there are no computable probability measures that are merely finitely additive. On this basis I argue that bounded agents cannot have merely finitely additive credences, and so countable additivity is the appropriate norm of rationality. I conclude by discussing prospects for this research program.
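
For orientation, a computable probability measure in the computable-analysis sense must, in particular, assign computable reals to basic events. A standard textbook example (not from the paper) is a Bernoulli product measure on infinite binary sequences, which is computable and countably additive:

```python
from fractions import Fraction

def mu_cylinder(sigma, p=Fraction(1, 3)):
    """Measure of the cylinder [sigma], the set of infinite binary
    sequences extending the finite string sigma, under the
    Bernoulli(p) product measure.  The measure of each basic event
    is output exactly here, rather than merely approximated.
    """
    out = Fraction(1)
    for bit in sigma:
        out *= p if bit == "1" else 1 - p
    return out

assert mu_cylinder("10") == Fraction(2, 9)  # (1/3) * (2/3)
```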

In Preparation – Drafts available on request

  1. “Randomness without Points or Measures”

Abstract: Algorithmic randomness studies the properties of “typical” points in a probability space, where typicality is defined with respect to the probability measure. Randomness allows for a “pointwise” approach to probability theory by identifying a measure-one set of random points on which a given theorem (the Law of Large Numbers, for example) holds. On the other hand, one can prove these theorems in structures that do not have points, such as measure algebras. Rute has shown that one can nevertheless define Schnorr random “points” in measure algebras. In this paper I extend this work in two directions. First, I provide definitions of randomness notions other than Schnorr randomness that apply to measure algebras. I prove an effective Loomis-Sikorski representation theorem which shows that these definitions correspond exactly to the random points in the representing measure space. Second, I show that these definitions can be formulated purely qualitatively, without reference to a numerical probability measure.
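
For reference, these are the standard pointwise test definitions that the measure-algebra notions generalize (textbook background; the paper's own definitions are not reproduced here):

```latex
% A Martin-Löf test is a uniformly effectively open sequence (U_n)
% with \mu(U_n) \le 2^{-n}; it is a Schnorr test when each \mu(U_n)
% is moreover computable, uniformly in n.
x \text{ is Martin-L\"of (resp.\ Schnorr) random} \iff
x \notin \bigcap_{n \in \mathbb{N}} U_n
\text{ for every Martin-L\"of (resp.\ Schnorr) test } (U_n)_{n \in \mathbb{N}}.
```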

  2. “A Survey of Generalized Algorithmic Fractal Dimensions”

Abstract: Algorithmic fractal dimensions are defined as the limiting per-bit Kolmogorov complexity of a sequence. Remarkably, these notions coincide with classical fractal dimensions, such as Hausdorff and packing dimension, in general settings. For this reason, recent work has used algorithmic fractal dimensions to prove classical results in fractal geometry which do not mention computability. I extend this line of work by defining families of generalized algorithmic fractal dimensions which correspond to the so-called Rényi and Hentschel-Procaccia dimensions. These dimension concepts are commonly used in dynamical systems, complex systems, and network science. I show that in a general setting the algorithmic dimensions coincide with the classical dimensions.
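
For reference, the standard formulas behind the abstract (background only): the effective Hausdorff and packing dimensions of a sequence x are the liminf and limsup of the per-bit prefix-free Kolmogorov complexity K of its first n bits, while the Rényi dimension of order q is defined from the cell masses p_i(ε) of an ε-mesh partition:

```latex
\dim(x) = \liminf_{n \to \infty} \frac{K(x \upharpoonright n)}{n},
\qquad
\operatorname{Dim}(x) = \limsup_{n \to \infty} \frac{K(x \upharpoonright n)}{n},
\qquad
D_q = \lim_{\varepsilon \to 0}
      \frac{1}{q - 1}\,
      \frac{\log \sum_i p_i(\varepsilon)^q}{\log \varepsilon}.
```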

Contact

Department of Logic and Philosophy of Science
University of California, Irvine
3151 Social Science Plaza A
Irvine, CA 92697-5100

My Google Scholar profile
My PhilPeople profile
My ResearchGate profile