Davide Leonessi, MSc MFoCS, Oxford, September 2021

Mr. Davide Leonessi successfully defended his dissertation for the Master of Science degree in Mathematics and Foundations of Computer Science, entitled “Transfinite game values in infinite games,” on 15 September 2021. Davide earned a distinction for his thesis, an outstanding result.

Davide Leonessi | Google Scholar | Dissertation | arXiv

Abstract. The objects of this study are countably infinite games with perfect information that allow players to choose among arbitrarily many moves in a turn; in particular, we focus on the generalisations of the finite board games of Hex and Draughts.

In Chapter 1 we develop the theory of transfinite ordinal game values for open infinite games following [Evans-Hamkins 2014], and we focus on the properties of the omega one, that is, the supremum of the possible game values, of classes of open games; moreover, we design the class of climbing-through-$T$ games as a tool to study the omega one of given game classes.

The original contributions of this research are presented in the following two chapters.

In Chapter 2 we prove classical results about finite Hex and present Infinite Hex, a well-defined infinite generalisation of Hex.

We then introduce the class of stone-placing games, which captures the key features of Infinite Hex and further generalises the class of positional games already studied in the literature within the finite setting of Combinatorial Game Theory.

The main result of this research is the characterisation of open stone-placing games in terms of the property of essential locality, which leads to the conclusion that the omega one of any class of open stone-placing games is at most $\omega$. In particular, we obtain that the class of open games of Infinite Hex has the smallest infinite omega one, that is $\omega_1^{\rm Hex}=\omega$.

In Chapter 3 we show a dual result; we define the class of games of Infinite Draughts and explicitly construct open games of arbitrarily high game value with the tools of Chapter 1, concluding that the omega one of the class of open games of Infinite Draughts is as high as possible, that is $\omega_1^{\rm Draughts}=\omega_1$.

The full dissertation is available:

A deflationary account of Fregean abstraction in Zermelo-Fraenkel ZF set theory, Oxford, November 2021

This will be a talk for the Oxford Seminar in the Philosophy of Mathematics, 1 November, 4:30-6:30 GMT. The talk will be held on Zoom (contact the seminar organizers for the Zoom link).

Abstract. The standard treatment of sets and classes in Zermelo-Fraenkel set theory instantiates in many respects the Fregean foundational distinction between objects and concepts, for in set theory we commonly take the sets as objects to be considered under the guise of diverse concepts, the definable classes, each serving as a predicate on that domain of individuals. Although it is often asserted that there can be no association of classes with objects in a way that fulfills Frege’s Basic Law V, nevertheless, in the ZF framework I have described, it turns out that Basic Law V does hold, and provably so, along with various other Fregean abstraction principles. These principles are consequences of Zermelo-Fraenkel ZF set theory in the context of all its definable classes. Namely, there is an injective mapping from classes to objects, definable in senses I shall explain, associating every first-order parametrically definable class $F$ with a set object $\varepsilon F$, in such a way that Basic Law V is fulfilled: $$\varepsilon F =\varepsilon G\iff\forall x\ (Fx\leftrightarrow Gx).$$ Russell’s elementary refutation of the general comprehension axiom, therefore, is improperly described as a refutation of Basic Law V itself, but rather refutes Basic Law V only when augmented with powerful class comprehension principles going strictly beyond ZF. The main result leads also to a proof of Tarski’s theorem on the nondefinability of truth as a corollary to Russell’s argument.

My favorite theorem

What a pleasure it was to be interviewed by Evelyn Lamb and Kevin Knudson for their wonderful podcast series, My Favorite Theorem, available on Apple, Spotify, and any number of other aggregators.

I had a chance to talk about one of my most favorite theorems, the fundamental theorem of finite games.

Theorem. (Zermelo 1913) In any two-player finite game of perfect information, one of the players has a winning strategy, or both players have drawing strategies.
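To convey the idea concretely, here is a minimal Python sketch of the backward-induction argument behind the theorem on a toy game tree. This is my own illustration, not from the podcast; the representation of positions as nested lists and the function name `value` are hypothetical choices made just for the example.

```python
# A toy illustration of the backward-induction idea behind Zermelo's theorem.
# A position is either an outcome (+1 first-player win, -1 second-player win, 0 draw)
# or a list of child positions; the player to move picks the best child.

def value(position, player=+1):
    if isinstance(position, int):
        return position  # a leaf carries its outcome
    children = [value(child, -player) for child in position]
    return max(children) if player == +1 else min(children)

# A tiny game: the first player can steer into a drawn subgame or into one the opponent wins.
game = [[0, +1], [-1, -1]]
assert value(game) == 0  # with best play, the game is a draw
```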

Listen to the podcast here: My Favorite Theorem. A transcript is also available.

Lectures on the Philosophy of Mathematics, Michaelmas term 2021, Oxford

Philosophy of Mathematics, Exam Paper 122, Oxford University

Wednesdays 12-1 during term, Radcliffe Humanities Lecture Room

Joel David Hamkins, Professor of Logic

Image: Charles Lucy, Personifications of Mathematics and Geometry

This series of self-contained lectures on the philosophy of mathematics is intended for students preparing for Oxford Philosophy exam paper 122. All interested parties from the Oxford University community, however, are welcome to attend, whether or not they intend to sit the exam. The lectures will be organized loosely around mathematical themes, in such a way that brings various philosophical issues naturally to light. Lectures will loosely follow the instructor’s book Lectures on the Philosophy of Mathematics (MIT Press 2021), with supplemental suggested readings each week.

Previously recorded lectures from last year are available on the lecturer’s YouTube channel, below.

In light of the earlier lectures being available online, the plan this year is to feel somewhat freer to focus occasionally on narrower topics, and also at times to adopt a discussion format. Therefore kindly bring questions and well-thought-out opinions to the lecture.

The lectures this term will be held in person. The lecturer requests that students be vaccinated, wear masks, and observe social distancing as practicable. If this proves impossible or unsustainable, we shall regrettably revert to online lectures on short notice.

Lecture 1. Numbers

Numbers are perhaps the essential mathematical idea, but what are numbers? There are many kinds of numbers—natural numbers, integers, rational numbers, real numbers, complex numbers, hyperreal numbers, surreal numbers, ordinal numbers, and more—and these number systems provide a fruitful background for classical arguments on incommensurability and transcendentality, while setting the stage for discussions of platonism, logicism, the nature of abstraction, the significance of categoricity, and structuralism.

Lecture 2. Rigour

Let us consider the problem of mathematical rigor in the development of the calculus. Informal continuity concepts and the use of infinitesimals ultimately gave way to the epsilon-delta limit concept, which secured a more rigorous foundation while also enlarging our conceptual vocabulary, enabling us to express more refined notions, such as uniform continuity, equicontinuity, and uniform convergence. Nonstandard analysis resurrected the infinitesimals on a more secure foundation, providing a parallel development of the subject. Meanwhile, increasing abstraction emerged in the function concept, which we shall illustrate with the Devil’s staircase, space-filling curves, and the Conway base 13 function. Finally, does the indispensability of mathematics for science ground mathematical truth? Fictionalism puts this in question.

Lecture 3. Infinity

We shall follow the allegory of Hilbert’s hotel and the paradox of Galileo to the equinumerosity relation and the notion of countability. Cantor’s diagonal arguments, meanwhile, reveal uncountability and a vast hierarchy of different orders of infinity; some arguments give rise to the distinction between constructive and nonconstructive proof. Zeno’s paradox highlights classical ideas on potential versus actual infinity. Furthermore, we shall count into the transfinite ordinals.

Lecture 4. Geometry

Classical Euclidean geometry is the archetype of a mathematical deductive process. Yet the impossibility of certain constructions by straightedge and compass, such as doubling the cube, trisecting the angle, or squaring the circle, hints at geometric realms beyond Euclid. The rise of non-Euclidean geometry, especially in light of scientific theories and observations suggesting that physical reality is not Euclidean, challenges previous accounts of what geometry is about. New formalizations, such as those of David Hilbert and Alfred Tarski, replace the old axiomatizations, augmenting and correcting Euclid with axioms on completeness and betweenness. Ultimately, Tarski’s decision procedure points to a tantalizing possibility of automation in geometrical reasoning.

Lecture 5. Proof

What is proof? What is the relation between proof and truth? Is every mathematical truth true for a reason? After clarifying the distinction between syntax and semantics and discussing various views on the nature of proof, including proof-as-dialogue, we shall consider the nature of formal proof. We shall highlight the importance of soundness, completeness, and verifiability in any formal proof system, outlining the central ideas used in proving the completeness theorem. The compactness property distills the finiteness of proofs into an independent, purely semantic consequence. Computer-verified proof promises increasing significance; its role is well illustrated by the history of the four-color theorem. Nonclassical logics, such as intuitionistic logic, arise naturally from formal systems by weakening the logical rules.

Lecture 6. Computability

What is computability? Kurt Gödel defined a robust class of computable functions, the primitive recursive functions, and yet he gave reasons to despair of a fully satisfactory answer. Nevertheless, Alan Turing’s machine concept of computability, growing out of a careful philosophical analysis of the nature of human computability, proved robust and laid a foundation for the contemporary computer era; the widely accepted Church-Turing thesis asserts that Turing had the right notion. The distinction between computable decidability and computable enumerability, highlighted by the undecidability of the halting problem, shows that not all mathematical problems can be solved by machine, and a vast hierarchy looms in the Turing degrees, an infinitary information theory. Complexity theory refocuses the subject on the realm of feasible computation, with the still-unsolved P versus NP problem standing in the background of nearly every serious issue in theoretical computer science.

Lecture 7. Incompleteness

David Hilbert sought to secure the consistency of higher mathematics by finitary reasoning about the formalism underlying it, but his program was dashed by Gödel’s incompleteness theorems, which show that no consistent formal system can prove even its own consistency, let alone the consistency of a higher system. We shall describe several proofs of the first incompleteness theorem, via the halting problem, self-reference, and definability, showing senses in which we cannot complete mathematics. After this, we shall discuss the second incompleteness theorem, the Rosser variation, and Tarski’s theorem on the nondefinability of truth. Ultimately, one is led to the inherent hierarchy of consistency strength rising above every foundational mathematical theory.

Lecture 8. Set Theory

We shall discuss the emergence of set theory as a foundation of mathematics. Cantor founded the subject with key set-theoretic insights, but Frege’s formal theory was naive, refuted by the Russell paradox. Zermelo’s set theory, in contrast, grew ultimately into the successful contemporary theory, founded upon a cumulative conception of the set-theoretic universe. Set theory was simultaneously a new mathematical subject, with its own motivating questions and tools, but it also was a new foundational theory with a capacity to represent essentially arbitrary abstract mathematical structure. Sophisticated technical developments, including in particular, the forcing method and discoveries in the large cardinal hierarchy, led to a necessary engagement with deep philosophical concerns, such as the criteria by which one adopts new mathematical axioms and set-theoretic pluralism.

Book review, Catarina Dutilh Novaes, The dialogical roots of deduction

In this insightful and remarkable work, Professor Novaes defends and explores at length the philosophical thesis that mathematical proof, and deduction generally, has a fundamentally dialogical nature, proceeding in a back-and-forth dialogue between two semi-adversarial but collaborative actors, the Prover and the Skeptic, who together aim to find mathematical insight. This view of proof-as-dialogue, she argues, carries explanatory power for the philosophy of mathematical practice, explaining diverse aspects of proof-writing, refereeing, and more, including the multifaceted roles of proof: proof as verification, proof as certification, proof for communication and persuasion, proof as explanation, and proof as a driver of mathematical innovation.

In extensive, refined scholarly work, Novaes explores the historical and intellectual roots of the dialogical perspective on deduction, tracing the idea from ancient times through medieval philosophy and into the present day, including case studies of current mathematical developments, such as Mochizuki’s claimed proof of the abc conjecture, as well as recent psychological experiments on the role of group reasoning in resolving certain well-known disappointing failures of rationality, such as in the Wason card experiment. Truly fascinating.

On the basis of her work, I have nominated Novaes for the Lakatos Award (given annually “for an outstanding contribution to the philosophy of science, widely interpreted, in the form of a book published in English during the current year or the previous five years”). Lakatos himself, of course, was a friend of dialogical mathematics—his famous Proofs and Refutations proceeds after all in a dialogue between mathematicians of different philosophical outlooks. Novaes engages with Lakatos’s work explicitly, pointing out the obvious parallels, but also highlighting important differences between her Prover/Skeptic dialogical account and the kind of proof dialogues appearing in Proofs and Refutations. In light of this connection with Lakatos, I would find it especially fitting for Novaes to win the Lakatos Award.

My review on GoodReads

Counting to Infinity and Beyond

Anyone can learn to count in the ordinals, even a child, and so let us learn how to count to $\omega^2$, the first compound limit ordinal.

The large-format poster is available:

Some close-up views:

I would like to thank the many people who made helpful suggestions concerning my poster, including Andrej Bauer and especially Saul Schleimer, who offered many detailed suggestions.

Infinite sets and Foundations—Interviewed on the Daniel Rubin Show

I was interviewed on 26 August 2021 by mathematician Daniel Rubin on his show, and we had a lively, wide-ranging discussion spanning mathematics, infinity, and the philosophy of mathematics. Please enjoy!

Contents

0:00 Intro

2:11 Joel’s background. Interaction between math and philosophy

9:04 Joel’s work; infinite chess.

14:45 Infinite ordinals

22:27 The Cantor-Bendixson process

29:41 Uncountable ordinals

32:10 First order vs. second order theories

41:16 Non-standard analysis

46:57 The ZFC axioms and well-ordering of the reals

58:11 Showing independence of statements. Models and forcing.

1:04:38 Sets, classes, and categories

1:19:22 Is there one true set theory? Are projective sets Lebesgue measurable?

1:30:20 What does set theory look like if certain axioms are rejected?

1:36:06 How to judge philosophical positions about math

1:42:01 Concrete math where set theory becomes relevant. Tarski-Seidenberg on positive polynomials.

1:48:48 Goodstein sequences and the use of infinite ordinals

1:58:43 The state of set theory today

2:01:41 Joel’s recent books

Go check out the other episodes on Daniel’s channel!

An algebra of orders

Did you know that you can add and multiply orders? For any two order structures $A$ and $B$, we can form the ordered sum $A+B$ and ordered product $A\otimes B$, and other natural operations, such as the disjoint sum $A\sqcup B$, which make altogether an arithmetic of orders. We combine orders with these operations to make new orders, often with interesting properties. Let us explore the resulting algebra of orders!

$\newcommand\Z{\mathbb{Z}}\newcommand\N{\mathbb{N}}\newcommand\Q{\mathbb{Q}}\newcommand\P{\mathbb{P}}\newcommand\iso{\cong}\newcommand\of{\subseteq}\newcommand\set[1]{\{\,{#1}\,\}}$
One of the most basic operations that we can use to combine two orders is the disjoint sum operation $A\sqcup B$. This is the order resulting from placing a copy of $A$ adjacent to a copy of $B$, side-by-side, forming a combined order with no instances of the order relation between the two parts. If $A$ is the orange $\vee$-shaped order here and $B$ is the yellow linear order, for example, then $A\sqcup B$ is the combined order with all five nodes.

Another kind of addition is the ordered sum of two orders $A+B$, which is obtained by placing a copy of $B$ above a copy of $A$, as indicated here by adding the orange copy of $A$ and the yellow copy of $B$. Also shown is the sum $B+A$, with the summands reversed, so that we take $B$ below and $A$ on top. It is easy to check that the ordered sum of two orders is an order. One notices immediately, of course, that the resulting ordered sums $A+B$ and $B+A$ are not the same! The order $A+B$ has a greatest element, whereas $B+A$ has two maximal elements. So the ordered sum operation on orders is not commutative. Nevertheless, we shall still call it addition. The operation, which has many useful and interesting features, goes back at least to the 19th century with Cantor, who defined the addition of well orders this way.
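For readers who like to experiment, here is a minimal Python sketch of the two addition operations on finite orders. This is my own illustration, not from the post: the representation of an order as a pair (elements, set of strict pairs), and the tags 0 and 1 standing in for the orange and yellow colours, are assumptions made only for this sketch.

```python
# Toy representation: an order is (elements, rel), where rel is a set of strict pairs (x, y)
# meaning x is strictly below y.  Elements are tagged 0 or 1 to disjointify the two domains.

def disjoint_sum(A, B):
    """Side-by-side sum A ⊔ B: no order relations between the two copies."""
    (elems_A, rel_A), (elems_B, rel_B) = A, B
    elems = [(a, 0) for a in elems_A] + [(b, 1) for b in elems_B]
    rel = {((x, 0), (y, 0)) for (x, y) in rel_A} | {((x, 1), (y, 1)) for (x, y) in rel_B}
    return elems, rel

def ordered_sum(A, B):
    """Ordered sum A + B: a copy of B placed entirely above a copy of A."""
    elems, rel = disjoint_sum(A, B)
    rel = rel | {((a, 0), (b, 1)) for a in A[0] for b in B[0]}
    return elems, rel

# Example: 1 + (1 ⊔ 1) has a least element below two incomparable points.
one = (["*"], set())
V = ordered_sum(one, disjoint_sum(one, one))
```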

In order to illustrate further examples, I have assembled here an addition table for several simple finite orders. The choices for $A$ appear down the left side and those for $B$ at the top, with the corresponding sum $A+B$ displayed in each cell accordingly.

We can combine the two order addition operations, forming a variety of other orders this way.

The reader is encouraged to explore further how to add various finite orders using these two forms of addition. What is the smallest order that you cannot generate from $1$ using $+$ and $\sqcup$? Please answer in the comments.

We can also add infinite orders. Displayed here, for example, is the order $\N+(1\sqcup 1)$, the natural numbers wearing two yellow caps. The two yellow nodes at the top form a copy of $1\sqcup 1$, while the natural numbers are the orange nodes below. Every natural number (yes, all infinitely many of them) is below each of the two nodes at the top, which are incomparable to each other. Notice that even though we have Hasse diagrams for each summand order here, there can be no minimal Hasse diagram for the sum, because any particular line from a natural number to the top would be implied via transitivity from higher such lines, and we would need such lines, since they are not implied by the lower lines. So there is no minimal Hasse diagram.

This order happens to illustrate what is called an exact pair, which occurs in an order when a pair of incomparable nodes bounds a chain below, with the property that any node below both members of the pair is below something in the chain. This phenomenon occurs in sometimes unexpected contexts—any countable chain in the hierarchy of Turing degrees in computability theory, for example, admits an exact pair.

Let us turn now to multiplication. The ordered product $A\otimes B$ is the order resulting from having $B$ many copies of $A$. That is, we replace each node of $B$ with an entire copy of the $A$ order. Within each of these copies of $A$, the order relation is just as in $A$, but for the order relation between nodes in different copies of $A$, we follow the $B$ relation. It is not difficult to check that indeed this is an order relation. We can illustrate here with the same two orders we had earlier.

In forming the ordered product $A\otimes B$, in the center here, we take the two yellow nodes of $B$, shown greatly enlarged in the background, and replace them with copies of $A$. So we have ultimately two copies of $A$, one atop the other, just as $B$ has two nodes, one atop the other. We have added the order relations between the lower copy of $A$ and the upper copy, because in $B$ the lower node is related to the upper node. The order $A\otimes B$ consists only of the six orange nodes—the large highlighted yellow nodes of $B$ here serve merely as a helpful indication of how the product is formed and are not in any way part of the product order $A\otimes B$.

Similarly, with $B\otimes A$, at the right, we have the three enlarged orange nodes of $B$ in the background, which have each been replaced with copies of $A$. The nodes of each of the lower copies of $A$ are related to the nodes in the top copy, because in $B$ the two lower nodes are related to the upper node.

I have assembled a small multiplication table here for some simple finite orders.

So far we have given an informal account of how to add and multiply ordered structures. So let us briefly be a little more precise and formal about these matters.

In fact, when it comes to addition, there is a slightly irritating matter in defining what the sums $A\sqcup B$ and $A+B$ are exactly. Specifically, what are the domains? We would like to conceive of the domains of $A\sqcup B$ and $A+B$ simply as the union of the domains of $A$ and $B$—we’d like just to throw the two domains together and form the sum orders using that combined domain, placing $A$ on the $A$ part and $B$ on the $B$ part (and adding relations from the $A$ part to the $B$ part for $A+B$). Indeed, this works fine when the domains of $A$ and $B$ are disjoint, that is, if they have no points in common. But what if the domains of $A$ and $B$ overlap? In this case, we can’t seem to use the union in this straightforward manner. In general, we must disjointify the domains—we take copies of $A$ and $B$, if necessary, on domains that are disjoint, so that we can form the sums $A\sqcup B$ and $A+B$ on the union of those nonoverlapping domains.

What do we mean precisely by “taking a copy” of an ordered structure $A$? This way of talking in mathematics partakes in the philosophy of structuralism. We only care about our mathematical structures up to isomorphism, after all, and so it doesn’t matter which isomorphic copies of $A$ and $B$ we use; the resulting order structures $A\sqcup B$ will be isomorphic, and similarly for $A+B$. In this sense, we are defining the sum orders only up to isomorphism.

Nevertheless, we can be definite about it, if only to verify that indeed there are copies of $A$ and $B$ available with disjoint domains. So let us construct a set-theoretically specific copy of $A$, replacing each individual $a$ in the domain of $A$ with $(a,\text{orange})$, for example, and replacing the elements $b$ in the domain of $B$ with $(b,\text{yellow})$. If “orange” is a specific object distinct from “yellow,” then these new domains will have no points in common, and we can form the disjoint sum $A\sqcup B$ by using the union of these new domains, placing the $A$ order on the orange objects and the $B$ order on the yellow objects.

Although one can use this specific disjointifying construction to define what $A\sqcup B$ and $A+B$ mean as specific structures, I would find it to be a misunderstanding of the construction to take it as a suggestion that set theory is anti-structuralist. Set theorists are generally as structuralist as they come in mathematics, and in light of Dedekind’s categorical account of the natural numbers, one might even find the origin of the philosophy of structuralism in set theory. Rather, the disjointifying construction is part of the general proof that set theory abounds with isomorphic copies of whatever mathematical structure we might have, and this is part of the reason why it serves well as a foundation of mathematics for the structuralist. To be a structuralist means not to care which particular copy one has, to treat one’s mathematical structures as invariant under isomorphism.

But let me mention a certain regrettable consequence of defining the operations by means of a specific such disjointifying construction in the algebra of orders. Namely, it will turn out that neither the disjoint sum operation nor the ordered sum operation, as operations on order structures, are associative. For example, if we use $1$ to represent the one-point order, then $1\sqcup 1$ means the two-point side-by-side order, one orange and one yellow, but really what we mean is that the points of the domain are $\set{(a,\text{orange}),(a,\text{yellow})}$, where the original order is on domain $\set{a}$. The order $(1\sqcup 1)\sqcup 1$ then means that we take an orange copy of that order plus a single yellow point. This will have domain
$$\set{\bigl((a,\text{orange}),\text{orange}\bigr),\bigl((a,\text{yellow}),\text{orange}\bigr),(a,\text{yellow})}$$
The order $1\sqcup(1\sqcup 1)$, in contrast, means that we take a single orange point plus a yellow copy of $1\sqcup 1$, leading to the domain
$$\set{(a,\text{orange}),\bigl((a,\text{orange}),\text{yellow}\bigr),\bigl((a,\text{yellow}),\text{yellow}\bigr)}$$
These domains are not the same! So as order structures, the order $(1\sqcup 1)\sqcup 1$ is not identical with $1\sqcup(1\sqcup 1)$, and therefore the disjoint sum operation is not associative. A similar problem arises with $1+(1+1)$ and $(1+1)+1$.

But not to worry—we are structuralists and care about our orders here only up to isomorphism. Indeed, the two resulting orders are isomorphic as orders, and more generally, $(A\sqcup B)\sqcup C$ is isomorphic to $A\sqcup(B\sqcup C)$ for any orders $A$, $B$, and $C$, and similarly with $A+(B+C)\cong(A+B)+C$, as discussed with the theorem below. Furthermore, the order isomorphism relation is a congruence with respect to the arithmetic we have defined, which means that $A\sqcup B$ is isomorphic to $A'\sqcup B'$ whenever $A$ and $B$ are respectively isomorphic to $A'$ and $B'$, and similarly with $A+B$ and $A\otimes B$. Consequently, we can view these operations as associative, if we simply view them not as operations on the order structures themselves, but on their order-types, that is, on their isomorphism classes. This simple abstract switch in perspective restores the desired associativity. In light of this, we are free to omit the parentheses and write $A\sqcup B\sqcup C$ and $A+B+C$, if we care about our orders only up to isomorphism. Let us therefore adopt this structuralist perspective for the rest of our treatment of the algebra of orders.

Let us give a more precise formal definition of $A\otimes B$, which requires no disjointification. Specifically, the domain is the set of pairs $\set{(a,b)\mid a\in A, b\in B}$, and the order is defined by $(a,b)\leq_{A\otimes B}(a',b')$ if and only if $b\leq_B b'$, or $b=b'$ and $a\leq_A a'$. This order is known as the reverse lexical order, since we are ordering the nodes in the dictionary manner, except starting from the right letter first rather than the left as in an ordinary dictionary. One could of course have defined the product using the lexical order instead of the reverse lexical order, and this would give $A\otimes B$ the meaning of “$A$ copies of $B$.” This would be a fine alternative and in my experience mathematicians who rediscover the ordered product on their own often tend to use the lexical order, which is natural in some respects. Nevertheless, there is a huge literature with more than a century of established usage with the reverse lexical order, from the time of Cantor, who defined ordinal multiplication $\alpha\beta$ as $\beta$ copies of $\alpha$. For this reason, it seems best to stick with the reverse lexical order and the accompanying idea that $A\otimes B$ means “$B$ copies of $A$.” Note also that with the reverse lexical order, we shall be able to prove left distributivity $A\otimes(B+C)=A\otimes B+A\otimes C$, whereas with the lexical order, one will instead have right distributivity $(B+C)\otimes^* A=B\otimes^* A+C\otimes^* A$.
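Continuing the toy representation from the earlier sketch, here is how the reverse lexical product might be computed for finite orders. Again, this is my own illustrative code under the assumed representation, not part of the original post.

```python
# Continuing the toy representation above: the product A ⊗ B is "B copies of A",
# ordered reverse-lexically on pairs (a, b).

def ordered_product(A, B):
    (elems_A, rel_A), (elems_B, rel_B) = A, B
    elems = [(a, b) for b in elems_B for a in elems_A]
    rel = set()
    for (a, b) in elems:
        for (a2, b2) in elems:
            # (a, b) < (a2, b2) iff b < b2 in B, or b = b2 and a < a2 in A
            if (b, b2) in rel_B or (b == b2 and (a, a2) in rel_A):
                rel.add(((a, b), (a2, b2)))
    return elems, rel
```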

Let us begin to prove some basic facts about the algebra of orders.

Theorem. The following identities hold for orders $A$, $B$, and $C$.

  1. Associativity of disjoint sum, ordered sum, and ordered product.\begin{eqnarray*}A\sqcup(B\sqcup C) &\iso& (A\sqcup B)\sqcup C\\ A+(B+C) &\iso& (A+B)+C\\ A\otimes(B\otimes C) &\iso& (A\otimes B)\otimes C \end{eqnarray*}
  2. Left distributivity of product over disjoint sum and ordered sum.\begin{eqnarray*} A\otimes(B\sqcup C) &\iso& (A\otimes B)\sqcup(A\otimes C)\\ A\otimes(B+C) &\iso& (A\otimes B)+(A\otimes C) \end{eqnarray*}

In each case, these identities are clear from the informal intended meaning of the orders. For example, $A+(B+C)$ is the order resulting from having a copy of $A$, and above it a copy of $B+C$, which is a copy of $B$ and a copy of $C$ above it. So one has altogether a copy of $A$, with a copy of $B$ above that and a copy of $C$ on top. And this is the same as $(A+B)+C$, so they are isomorphic.

One can also aspire to give a detailed formal proof verifying that our color-coded disjointifying process works as desired, and the reader is encouraged to do so as an exercise. To my way of thinking, however, such a proof offers little in the way of mathematical insight into the algebra of orders. Rather, it is about checking the fine print of our disjointifying process and making sure that things work as we expect. Several of the arguments can be described as parenthesis-rearranging arguments—one extracts the desired information from the structure of the domain order and puts that exact same information into the correct form for the target order.

For example, if we have used the color-scheme disjointifying process described above, then the elements of $A\sqcup(B\sqcup C)$ each have one of the following forms, where $a\in A$, $b\in B$, and $c\in C$:
$$(a,\text{orange})$$
$$\bigl((b,\text{orange}),\text{yellow}\bigr)$$
$$\bigl((c,\text{yellow}),\text{yellow}\bigr)$$
We can define the color-and-parenthesis-rearranging function $\pi$ to put them into the right form for $(A\sqcup B)\sqcup C$ as follows:
\begin{align*}
\pi:(a,\text{orange})\quad&\mapsto\quad \bigl((a,\text{orange}),\text{orange}\bigr) \\
\pi:\bigl((b,\text{orange}),\text{yellow}\bigr)\quad&\mapsto\quad \bigl((b,\text{yellow}),\text{orange}\bigr) \\
\pi:\bigl((c,\text{yellow}),\text{yellow}\bigr)\quad&\mapsto\quad (c,\text{yellow})
\end{align*}
In each case, we will preserve the order, and since the orders are side-by-side, the cases never interact in the order, and so this is an isomorphism.

Similarly, for distributivity, the elements of $A\otimes(B\sqcup C)$ have the two forms:
$$\bigl(a,(b,\text{orange})\bigr)$$
$$\bigl(a,(c,\text{yellow})\bigr)$$
where $a\in A$, $b\in B$, and $c\in C$. Again we can define the desired isomorphism $\tau$ by putting these into the right form for $(A\otimes B)\sqcup(A\otimes C)$ as follows:
\begin{align*}
\tau:\bigl(a,(b,\text{orange})\bigr)\quad&\mapsto\quad \bigl((a,b),\text{orange}\bigr) \\
\tau:\bigl(a,(c,\text{yellow})\bigr)\quad&\mapsto\quad\bigl((a,c),\text{yellow}\bigr)
\end{align*}
And again, this is an isomorphism, as desired.

Since order multiplication is not commutative, it is natural to inquire about the right-sided distributivity laws:
\begin{eqnarray*}
(B+C)\otimes A&\overset{?}{\cong}&(B\otimes A)+(C\otimes A)\\
(B\sqcup C)\otimes A&\overset{?}{\cong}&(B\otimes A)\sqcup(C\otimes A)
\end{eqnarray*}
Unfortunately, however, these do not hold in general, and the following instances are counterexamples. Can you see what to take as $A$, $B$, and $C$? Please answer in the comments.

Theorem.

  1. If $A$ and $B$ are linear orders, then so are $A+B$ and $A\otimes B$.
  2. If $A$ and $B$ are nontrivial linear orders and both are endless, then $A+B$ is endless; if at least one of them is endless, then $A\otimes B$ is endless.
  3. If $A$ is an endless dense linear order and $B$ is linear, then $A\otimes B$ is an endless dense linear order.
  4. If $A$ is an endless discrete linear order and $B$ is linear, then $A\otimes B$ is an endless discrete linear order.

Proof. If both $A$ and $B$ are linear orders, then it is clear that $A+B$ is linear. Any two points within the $A$ copy are comparable, and any two points within the $B$ copy, and every point in the $A$ copy is below any point in the $B$ copy. So any two points are comparable and thus we have a linear order. With the product $A\otimes B$, we have $B$ many copies of $A$, and this is linear since any two points within one copy of $A$ are comparable, and otherwise they come from different copies, which are then comparable since $B$ is linear. So $A\otimes B$ is linear.

For statement (2), we know that $A+B$ and $A\otimes B$ are nontrivial linear orders. If both $A$ and $B$ are endless, then clearly $A+B$ is endless, since every node in $A$ has something below it and every node in $B$ has something above it. For the product $A\otimes B$, if $A$ is endless, then every node in any copy of $A$ has nodes above and below it, and so this will be true in $A\otimes B$; and if $B$ is endless, then there will always be higher and lower copies of $A$ to consider, so again $A\otimes B$ is endless, as desired.

For statement (3), assume that $A$ is an endless dense linear order and that $B$ is linear. We know from (1) that $A\otimes B$ is a linear order. Suppose that $x<y$ in this order. If $x$ and $y$ live in the same copy of $A$, then there is a node $z$ between them, because $A$ is dense. If $x$ occurs in one copy of $A$ and $y$ in another, then because $A$ is endless, there will be a node $z$ above $x$ in its same copy, leading to $x<z<y$ as desired. (Note: we don’t need $B$ to be dense.)

For statement (4), assume instead that $A$ is an endless discrete linear order and $B$ is linear. We know that $A\otimes B$ is a linear order. Every node of $A\otimes B$ lives in a copy of $A$, where it has an immediate successor and an immediate predecessor, and these are also immediate successor and predecessor in $A\otimes B$. From this, it follows also that $A\otimes B$ is endless, and so it is an endless discrete linear order. $\Box$
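As a quick sanity check of statement (1), one can reuse the illustrative `ordered_sum` and `ordered_product` sketches from above on two finite chains. This is only a toy verification under the assumed representation, of course, not a proof.

```python
# A toy check that the sum and product of two finite chains are again linear,
# using the illustrative functions ordered_sum and ordered_product defined earlier.

def is_linear(order):
    elems, rel = order
    return all(x == y or (x, y) in rel or (y, x) in rel for x in elems for y in elems)

chain2 = ([0, 1], {(0, 1)})
chain3 = ([0, 1, 2], {(0, 1), (0, 2), (1, 2)})
assert is_linear(ordered_sum(chain2, chain3))
assert is_linear(ordered_product(chain2, chain3))
```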

The reader is encouraged to consider as an exercise whether one can drop the “endless” hypotheses in the theorem. Please answer in the comments.

Theorem. The endless discrete linear orders are exactly those of the form $\Z\otimes L$ for some linear order $L$.

Proof. If $L$ is a linear order, then $\Z\otimes L$ is an endless discrete linear order by the theorem above, statement (4). So any order of this form has the desired feature. Conversely, suppose that $\P$ is an endless discrete linear order. Define an equivalence relation for points in this order by which $p\sim q$ if and only if $p$ and $q$ are at finite distance, in the sense that there are only finitely many points between them. This relation is easily seen to be reflexive, transitive and symmetric, and so it is an equivalence relation. Since $\P$ is an endless discrete linear order, every object in the order has an immediate successor and immediate predecessor, which remain $\sim$-equivalent, and from this it follows that the equivalence classes are each ordered like the integers $\Z$, as indicated by the figure here.

The equivalence classes amount to a partition of $\P$ into disjoint segments of order type $\Z$, as in the various colored sections of the figure. Let $L$ be the induced order on the equivalence classes. That is, the domain of $L$ consists of the equivalence classes $\P/\sim$, which are each a $\Z$ chain in the original order, and we say $[a]<_L[b]$ just in case $a<_{\P}b$. This is a linear order on the equivalence classes. And since $\P$ is $L$ copies of its equivalence classes, each of which is ordered like $\Z$, it follows that $\P$ is isomorphic to $\Z\otimes L$, as desired. $\Box$

(Interested readers are advised that the argument above uses the axiom of choice, since in order to assemble the isomorphism of $\P$ with $\Z\otimes L$, we need in effect to choose a center point for each equivalence class.)

If we consider the integers inside the rational order $\Z\of\Q$, it is clear that we can have a discrete suborder of a dense linear order. How about a dense suborder of a discrete linear order?

Question. Is there a discrete linear order with a suborder that is a dense linear order?

What? How could that happen? In my experience, mathematicians first coming to this topic often respond instinctively that this should be impossible. I have seen sophisticated mathematicians make such a pronouncement when I asked the audience about it in a public lecture. The fundamental nature of a discrete order, after all, is completely at odds with density, since in a discrete order, there is a next point up and down, and a next next point, and so on, and this is incompatible with density.

Yet, surprisingly, the answer is Yes! It is possible—there is a discrete order with a suborder that is densely ordered. Consider the extremely interesting order $\Z\otimes\Q$, which consists of $\Q$ many copies of $\Z$, laid out here increasing from left to right. Each tiny blue dot is a rational number, which has been replaced with an entire copy of the integers, as you can see in the magnified images at $a$, $b$, and $c$.

The order is quite subtle, and so let me also provide an alternative presentation of it. We have many copies of $\Z$, and those copies are densely ordered like $\Q$, so that between any two copies of $\Z$ is another one, like this:

Perhaps it helps to imagine that the copies of $\Z$ are getting smaller and smaller as you squeeze them in between the larger copies. But you can indeed always fit another copy of $\Z$ between, while leaving room for the further even tinier copies of $\Z$ to come.

The order $\Z\otimes\Q$ is discrete, in light of the theorem characterizing discrete linear orders. But also, this is clear, since every point of $\Z\otimes\Q$ lives in its local copy of $\Z$, and so has an immediate successor and predecessor there. Meanwhile, if we select exactly one point from each copy of $\Z$, the $0$ of each copy, say, then these points are ordered like $\Q$, which is dense. Thus, we have proved:

Theorem. The order $\Z\otimes\Q$ is a discrete linear order having a dense linear order as a suborder.

One might be curious now about the order $\Q\otimes\Z$, which is $\Z$ many copies of $\Q$. This order, however, is a countable endless dense linear order, and therefore is isomorphic to $\Q$ itself.
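To make the order $\Z\otimes\Q$ concrete, here is a small sketch of my own representing its elements as pairs $(k,q)$, an integer $k$ in the copy of $\Z$ attached to the rational $q$, with the reverse lexical comparison; the function names are hypothetical choices for the illustration.

```python
# An element of Z ⊗ Q is a pair (k, q): the integer k in the copy of Z attached to the
# rational q.  The order is reverse lexical: compare the rational coordinate first.
from fractions import Fraction

def less(x, y):
    (k1, q1), (k2, q2) = x, y
    return q1 < q2 or (q1 == q2 and k1 < k2)

def successor(x):
    # discreteness: every point has an immediate successor inside its own copy of Z
    k, q = x
    return (k + 1, q)

# the points (0, q) form a suborder isomorphic to Q, hence dense
a, b = (0, Fraction(1, 3)), (0, Fraction(1, 2))
midpoint = (0, (Fraction(1, 3) + Fraction(1, 2)) / 2)
assert less(a, midpoint) and less(midpoint, b)
```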


This material is adapted from my book-in-progress, Topics in Logic, drawn from Chapter 3 on Relational Logic, which includes an extensive section on order theory, of which this is an important summative part.

Linear gradings of partial orders

Order relations are often fruitfully conceived as being stratified into “levels” in a natural way. The level structure is meant to be compatible with the order, in the sense that as one moves strictly up in the order, one also ascends successively from lower levels to strictly higher levels. With the simple order relation pictured here, for example, we might naturally imagine the least element $0$ on a bottom level, the three middle nodes $a$, $b$, and $c$ on an intermediate level, and node $1$ on a level at the top. With this level structure, as we move up strictly in the order, then we also move up strictly in the hierarchy of levels.

What exactly is a level? Let us be a little more precise. We can depict the intended level structure of our example order more definitely as in the figure here. At the left appears the original order relation, while the yellow highlighted bands organize the nodes into three levels, with node $0$ on the bottom level, nodes $a$, $b$, and $c$ on the middle level, and node $1$ on the top level. This level structure in effect describes a linear preorder relation $\leq$ for which $0\leq a,b,c\leq 1$, with the three intermediate nodes all equivalent with respect to this preorder—they are on the same level.

$\def\<#1>{\left\langle#1\right\rangle}\newcommand{\of}{\subseteq}\newcommand{\set}[1]{\{\,{#1}\,\}}\renewcommand\emptyset{\varnothing}$Thus, we realize that a level structure of an order relation is simply a linear preorder relation that respects strict instances of the original order, and the levels are simply the equivalence classes of the preorder. More precisely, we define that a linear grading of an order relation $\<A,\preccurlyeq>$ is a linear preorder relation $\leq$ on $A$ for which every strict instance of the original order is also graded strictly by the linear preorder; that is, we require that
$$a\prec b\quad\text{ implies }\quad a<b$$ Thus, any strictly lower point in the original order is on a lower level, and we define that objects are on the same level if they are equivalent with respect to the preorder. A linearly graded order is a relational structure $\<A,\preccurlyeq,\leq>$ with two orders on the same domain, the first $\preccurlyeq$ being an order relation on $A$ and the second $\leq$ being a linear preorder relation that grades the first order.
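As a small concrete check of the definition, here is a sketch of my own (with hypothetical names) that tests whether an assignment of numeric levels is a linear grading of the example order; note that two different level assignments of the same order both pass, anticipating the point made next.

```python
# A grading is modelled here by a map assigning each node a numeric level; the check is
# that every strict instance of the order moves to a strictly higher level.

def is_linear_grading(elements, strictly_below, level):
    return all(level[a] < level[b]
               for a in elements for b in elements if strictly_below(a, b))

# The example order: 0 below a, b, c, which are all below 1.
elements = "0abc1"
rel = {("0", "a"), ("0", "b"), ("0", "c"), ("0", "1"), ("a", "1"), ("b", "1"), ("c", "1")}
below = lambda x, y: (x, y) in rel

assert is_linear_grading(elements, below, {"0": 0, "a": 1, "b": 1, "c": 1, "1": 2})
assert is_linear_grading(elements, below, {"0": 0, "a": 1, "b": 2, "c": 1, "1": 3})  # another grading
```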

It turns out that there are often far more ways to stratify a given order by levels than one might have expected. For the simple order above, for example, there are thirteen distinct linear grading orders, as shown here.

The conclusion is inescapable that the level concept is not inherent in the order relation itself, for a given order relation may admit a huge variety of different level hierarchies, each of them compatible with the given order.

One should therefore not make the mistake of thinking that if one has an order relation, such as a philosophical reduction notion of some kind or a supervenience relation, then one is automatically entitled to speak of “levels” of the order. One might want to speak of “low-level” phenomena or “high-level” concepts in the order, but one must recognize that the order relation itself does not determine a specific hierarchy of levels, although it does place limitations on the possible stratifications. My point is that there is often surprising flexibility in the nature of the level structure, as the example above shows even in a very simple case, and so what counts as low or high in terms of levels may be much less determined than one expects. In some of the linear gradings above, for example, the node $a$ could be described as high-level, and in others, it is low-level. Therefore if one wants to speak of levels for one’s order, then one must provide further elucidation of the stratification one has in mind.

Meanwhile, we often can provide a natural level structure. In the power set $P(X)$ of a finite set $X$, ordered by the subset relation $A\of B$, for example, we can naturally stratify the relation by the sizes of the sets, that is, in terms of the number of elements. Thus, we would place $\emptyset$ on the bottom level, and the singleton sets $\{a\}$ on the next level, and then the doubletons $\{a,b\}$, and so on. This stratification by cardinality doesn’t quite work when $X$ is infinite, however, since there can be instances of strict inclusion $A\subsetneq B$ where $A$ and $B$ are both infinite and nevertheless equinumerous. Is there a level stratification of the infinite power set order?

Indeed there is, for every order relation admits a grading into levels.

Theorem. Every order relation $\<A,\preccurlyeq>$ can be linearly graded. Indeed, every order relation can be extended to a linear order (not merely a preorder), and so it can be graded into levels with exactly one node on each level.

Proof. Let us begin with the finite case, which we prove by induction. Assume $\preccurlyeq$ is an order relation on a finite set $A$. We seek to find a linear order $\leq$ on $A$ such that $x\preccurlyeq y\implies x\leq y$. If $A$ has at most one element, then we are done immediately, since $\preccurlyeq$ would itself already be linear.

Let us proceed by induction. Assume that every order of size $n$ has a linear grading, and that we have a partial order $\preccurlyeq$ on a set $A$ of size $n+1$. Every finite order has at least one maximal element, so let $a\in A$ be a $\preccurlyeq$-maximal element. If we consider the relation $\preccurlyeq$ on the remaining elements $A\setminus\{a\}$, it is a partial order of size $n$, and thus admits a linear grading order $\leq$. We can now simply place $a$ atop that order, and this will be a linear grading of $\<A,\preccurlyeq>$, because $a$ was maximal, and so making it also greatest in the grading order will cohere with the grading condition.

So by induction, every finite partial order relation can be extended to a linear order.

Now, we consider the general case. Suppose that $\preccurlyeq$ is a partial order relation on a (possibly infinite) set $A$. We construct a theory $T$ in a language with a new relation symbol $\leq$ and constant symbols $\dot a$ for every element $a\in A$. The theory $T$ should assert that $\leq$ is a linear order, that $\dot a\leq \dot b$, whenever it is actually true that $a\preccurlyeq b$ in the original order, and that $\dot a\neq\dot b$ whenever $a\neq b$. So the theory $T$ describes the situation that we want, namely, a linear order that conforms with the original partial order.

The key observation is that every finite subset of the theory will be satisfiable, since such a finite subtheory in effect reduces to the case of finite orders, which we handled above. That is, if we take only finitely many of the axioms of $T$, then it involves a finite partial order on the nodes that are mentioned, and by the finite case of the theorem, this order is refined by a linear order, which provides a model of the subtheory. So every finite subtheory of $T$ is satisfiable, and so by the compactness theorem, $T$ itself also is satisfiable.

Any model of $T$ provides a linear order $\leq$ on the constant symbols $\dot a$, which will be a linear order extending the original order relation, as desired. $\Box$
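The finite case of the argument can be carried out as a simple greedy procedure, repeatedly removing a maximal element, exactly as in the induction. Here is a minimal Python sketch of that step, my own illustration with a hypothetical function name.

```python
# The finite case as a greedy procedure: repeatedly peel off a maximal element and place it
# on top, exactly as in the induction above.  The order is given as a set of strict pairs.

def linear_extension(elements, rel):
    remaining = list(elements)
    top_down = []
    while remaining:
        # a maximal element: nothing in the remaining set lies strictly above it
        a = next(x for x in remaining
                 if not any((x, y) in rel for y in remaining if y != x))
        top_down.append(a)
        remaining.remove(a)
    return list(reversed(top_down))  # listed from bottom to top, refining the partial order

# Example: the V-shaped order 0 < a, b, c < 1
rel = {("0", x) for x in "abc"} | {(x, "1") for x in "abc"} | {("0", "1")}
print(linear_extension("0abc1", rel))  # one possible output: ['0', 'c', 'b', 'a', '1']
```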

This material is adapted from my book-in-progress, Topics in Logic, which includes a chapter on relational logic, including an extended section on orders.

The lattice of sets of natural numbers is rich

$\newcommand\of{\subseteq}\newcommand\N{\mathbb{N}}\newcommand\R{\mathbb{R}}\def\<#1>{\left\langle#1\right\rangle}\newcommand\Z{\mathbb{Z}}\newcommand\Q{\mathbb{Q}}\newcommand\ltomega{{{<}\omega}}\newcommand\unaryminus{-}\newcommand\intersect{\cap}\newcommand\union{\cup}\renewcommand\emptyset{\varnothing}$Let us explore the vast and densely populated expanses of the lattice of all sets of natural numbers—the power set lattice $\<P(\N),\of>$.

We find in this lattice every possible set of natural numbers $A\of\N$, and we consider them with respect to the subset relation $A\of B$. This is a lattice order because the union of two sets $A\union B$ is their least upper bound with respect to inclusion and the intersection $A\intersect B$ is their greatest lower bound. Indeed, it is a distributive lattice in light of $A\intersect(B\union C)=(A\intersect B)\union(A\intersect C)$, but furthermore, as a power set algebra it is a Boolean algebra and every Boolean algebra is a distributive lattice.

The empty set $\emptyset$ is the global least element at the bottom of the lattice, of course, and the whole set $\N$ is the greatest element, at the top. The singletons $\set{n}$ are atoms, one step up from $\emptyset$, while their complements $\N\setminus\set{n}$ are coatoms, one step down from $\N$. One generally conceives of the finite sets as clustering near the Earthly bottom of the lattice, finitely close to $\emptyset$, whereas the cofinite sets soar above, finitely close to the heavenly top. In the vast central regions between, in contrast, we find the infinite-coinfinite sets, including many deeply interesting sets such as the prime numbers, the squares, the square-free numbers, the composites, and uncountably many more.

Question. Which familiar orders can we find as suborders in the power set lattice $P(\N)$ of all sets of natural numbers?

We can easily find a copy of the natural-number order $\<\N,\leq>$ in the power set lattice $P(\N)$, simply by considering the chain of initial segments:
$$\emptyset\of\set{0}\of\set{0,1}\of\set{0,1,2}\of\cdots$$
This sequence climbs like a ladder up the left side of the main figure. Curious readers will notice that although every one of these sets is finite and hence conceived as clinging to Earth, as we had said, nevertheless there is no upper bound for this sequence in the lattice other than the global maximum $\N$—any particular natural number is eventually subsumed. Thus one climbs to the heavens on this ladder, which stretches to the very top of the lattice, while remaining close to Earth all the while on every rung.

The complements of those sets similarly form an infinite descending sequence:
$$\cdots\ \of\ \N\setminus\set{0,1,2}\ \of\ \N\setminus\set{0,1}\ \of\ \N\setminus\set{0}\ \of\ \N$$
This chain of cofinite sets dangles like a knotted rope from the heavens down the right side of the main figure. Every knot on this rope is a cofinite set, which we conceived as very near the top, and yet the rope dangles temptingly down to Earth, for there is no lower bound except $\emptyset$.

Can we find a copy of the discrete integer order $\<\Z,\leq>$ in the power set lattice?

Yes, we can. An example is already visible in the red chain at the right side of main lattice diagram, a sky crane hanging in space. Standing at the set of odd numbers, we can ascend in discrete steps by adding additional even numbers one at a time; or we can descend similarly by removing successive odd numbers. The result is a discrete chain isomorphic to the integer order $\Z$, and it is unbounded except by the extreme points $\emptyset$ and $\N$.
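Here is one way to realize this $\Z$-chain concretely, as a finite approximation; this is my own illustrative sketch, with a hypothetical function name.

```python
# A finite approximation to the Z-chain described above: start from the odd numbers,
# ascend by adding even numbers one at a time, descend by removing odd numbers.

def link(n, bound=50):
    """The n-th set in the chain, restricted to numbers below `bound`."""
    odds = {k for k in range(bound) if k % 2 == 1}
    evens = [k for k in range(bound) if k % 2 == 0]
    if n >= 0:
        return odds | set(evens[:n])
    return odds - set(sorted(odds)[:-n])

# strictly increasing under inclusion as n increases
assert link(-2) < link(0) < link(3)
```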

Can we find a dense order, say, a copy of the rational line $\<\Q,\leq>$?

What? You want to find a dense order in the lattice of sets of natural numbers? That might seem impossible, if one thinks of the lattice as having a fundamentally discrete nature. The nearest neighbors of any set, after all, are obtained by adding or removing exactly one element—doing so makes a discrete step up or down with no strictly intermediate set. Does the essentially discrete nature of the lattice suggest that we cannot embed the dense order $\<\Q,\leq>$?

No! The surprising fact is that indeed we can embed the rational line into the power set lattice $P(\N)$. Indeed, much more is true—it turns out that we can embed a copy of any countable order whatsoever into the lattice $P(\N)$. Every countable order that occurs anywhere at all occurs in the lattice of sets of natural numbers.

Theorem. The power set lattice on the natural numbers $P(\N)$ is universal for all countable orders. That is, every order relation on a countable domain is isomorphic to a suborder of $\<P(\N),\of>$.

Proof. Let us prove first that every order relation $\<L,\preccurlyeq>$ is isomorphic to the subset relation $\of$ on a collection of sets. Namely, for each $q\in L$ let $q{\downarrow}$ be the down set of $q$, the set of predecessors of $q$:
$$q{\downarrow}=\set{p\in L\mid p\preccurlyeq q}\of L$$
This is a subset of $L$, and what I claim is that
$$p\preccurlyeq q\quad\text{ if and only if }\quad p{\downarrow}\of q{\downarrow}.$$
This is almost immediate. For the forward implication, if $p\preccurlyeq q$, then any order element below $p$ is also below $q$, and so $p{\downarrow}\of q{\downarrow}$. Conversely, if $p{\downarrow}\of q{\downarrow}$, then since $p\in p{\downarrow}$, we must have $p\in q{\downarrow}$ and hence $p\preccurlyeq q$ as desired. So the map $p\mapsto p{\downarrow}$ is an isomorphism of the original order $\<L,\preccurlyeq>$ with the family of all downsets $\set{p{\downarrow}\mid p\in L}$ using the subset relation.

To complete the proof, we simply observe that the power set lattice $P(L)$ of a countably infinite set $L$ is isomorphic to $P(\N)$, and for finite $L$ it is isomorphic to the power set lattice below a finite set of natural numbers. This is because any bijection of $L$ with a set of natural numbers induces a corresponding isomorphism of the respective subsets. So we may find a copy of $L$ inside $P(\N)$, as desired. $\Box$

One may make the proof more concrete by combining the two steps, like this: if $\<L,\preccurlyeq>$ is any countable order relation, then enumerate the domain set $L=\set{p_0,p_1,p_2,\ldots}$, and for each $p\in L$ let $A_p=\set{n\in\N\mid p_n\preccurlyeq p}\of\N$. This is like the down set of $p$, except we take the indices of the elements instead of the elements themselves. It follows that $p\preccurlyeq q$ if and only if $A_p\of A_q$, and so the mapping $p\mapsto A_p$ is an order isomorphism of $\<L,\preccurlyeq>$ with the collection of $A_p$ sets under $\of$, as desired.
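Here is a small sketch of this combined construction on a finite fragment, my own illustration; the function name `embed` and the sample enumeration are hypothetical.

```python
# The combined construction on a finite fragment: enumerate the domain and send each p
# to the set of indices n with p_n ≼ p.  Subset inclusion then mirrors the original order.
from fractions import Fraction

def embed(elements, leq):
    return {p: frozenset(n for n, q in enumerate(elements) if leq(q, p))
            for p in elements}

# Example: a finite fragment of the rational line, embedded by inclusion
fragment = sorted({Fraction(n, d) for d in (1, 2, 3) for n in range(-3, 4)})
A = embed(fragment, lambda p, q: p <= q)
p, q = Fraction(1, 3), Fraction(1, 2)
assert (p <= q) == (A[p] <= A[q])  # for frozensets, <= means subset inclusion
```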

Since the set of rational numbers $\Q$ is countable, it follows as an instance of the universality theorem above that we may find a copy of the rational line $\<\Q,\leq>$ inside the power set lattice $\<P(\N),\of>$. Can you find an elegant explicit presentation of the rational line in the power set lattice? In light of the characterization of $\Q$ as a countable endless dense linear order, it will suffice to find any chain in $P(\N)$ that forms a countable endless dense linear order.

But wait, there is more! We can actually find a copy of the real line $\<\R,\leq>$ in the power set lattice on the naturals! Wait…really? How is that possible? The reals are uncountable, and so this would be an uncountable chain in the lattice of sets of natural numbers, but every subset $A\of\N$ is countable, and there are only countably many elements to add to $A$ as you go up. How could there be an uncountable chain in the lattice?

Let me explain.

Theorem. The power set lattice on the natural numbers $\<P(\N),\of>$ has an uncountable chain, and indeed, it has a copy of the real continuum $\<\R,\leq>$.

Proof. We have already observed that there must be a copy of the rational line $\<\Q,\leq>$ in the power set lattice $P(\N)$. In other words, there is a map $q\mapsto A_q\of\N$ such that
$$p<q\quad\text{ if and only if }\quad A_p\of A_q$$
for any rational numbers $p,q\in\Q$.

We may now find our desired copy of the real continuum simply by completing this embedding. Namely, for any real number $r\in\R$, let
$$A_r=\bigcup_{\substack{q\leq r\\ q\in\Q}} A_q.$$
Since the sets $A_q$ for rational numbers $q$ form a chain, it follows easily that $r\leq s$ implies $A_r\of A_s$. And if $r<s$, then $A_r$ will be strictly contained in some $A_q$ for a rational number $q$ strictly between $r<q<s$, and consequently strictly contained in $A_s$. From this, it follows that $r\leq s$ if and only if $A_r\of A_s$, and so we have found a copy of the real number order in the power set lattice, as desired. $\Box$

Let us now round out the surprising events of this section by proving another shocking fact—we can also find an uncountable antichain in the power set lattice $P(\N)$.

Theorem. The power set lattice on the natural numbers $\<P(\N),\of>$ has an uncountable antichain, a collection of continuum many pairwise incomparable sets.

Proof. Consider the tree of finite binary sequences $2^{\ltomega}$. This is the set of all finite binary sequences, ordered by the relation $s\leq t$ that holds when $t$ extends $s$ by possibly adding additional binary digits on the end. This is a countable order relation, and so by the universality theorem above we may find a copy of it in the power set lattice $P(\N)$. But let us be a little more specific. We shall label each node $s$ of the tree $2^{\ltomega}$ with a distinct natural number. For any infinite binary path $x\in 2^\omega$ through the tree, let $A_x$ be the set of numbers that appear along the path. Every $A_x$ is infinite, but if $x\neq y$, then $x$ and $y$ will eventually branch away from each other, and consequently $A_x$ and $A_y$ will have only finitely many numbers in common. In particular, neither $A_x$ nor $A_y$ will be a subset of the other, and so we have found an antichain in the power set lattice. The number of paths through the binary tree is the number of infinite binary sequences, which is the same as the cardinality of the continuum of real numbers, as desired. $\Box$
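Here is a sketch of the construction on finite approximations, my own illustration; the particular coding of binary strings by natural numbers is one hypothetical choice. Paths that branch apart share only the codes of their common prefixes.

```python
# Code each finite binary string by a distinct natural number, and send an infinite binary
# sequence x to the set A_x of codes of its finite initial segments (approximated here).

def code(s):
    return int("1" + s, 2)  # prepending "1" makes the coding injective

def A(x):
    return {code(x[:k]) for k in range(len(x) + 1)}

x = "01" * 10  # the first 20 digits of 010101...
y = "0" * 20   # the first 20 digits of 000000...
assert A(x) & A(y) == {code(""), code("0")}       # only the common prefixes are shared
assert not (A(x) <= A(y)) and not (A(y) <= A(x))  # so the two sets are incomparable
```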

So the lattice $P(\N)$ offers a rich complexity. But is there also homogeneity to be found here? Place yourself at one of the sets in the central region of the lattice, an infinite coinfinite set. There are various new elements for you to add to make a larger set—you can add just this element, or just that one, or any of infinitely many new atoms, and indeed any combination of them. Above any coinfinite point in the lattice, therefore, one finds a perfect copy of the whole lattice again. Looking upward from any coinfinite set, it always looks the same.

Theorem. The lattice of all sets of natural numbers is homogeneous in several respects.

  1. The lattice of sets above any given coinfinite set $A\of\N$ is isomorphic to the whole power set lattice $P(\N)$.
  2. The lattice of sets below any given infinite set $B\of\N$ is isomorphic to the whole power set lattice $P(\N)$.
  3. For any two infinite coinfinite sets $A,B\of \N$, there is an automorphism of the lattice $P(\N)$ carrying $A$ to $B$.

Proof. Let us prove statement (2) first. Consider any infinite set $B\of\N$ and the sublattice in $P(\N)$ below it, which is precisely the power set lattice $P(B)$ of all subsets of $B$. Since $B$ is an infinite set of natural numbers, there is a bijection $\pi:B\to\N$. We may therefore transform subsets of $B$ to subsets of $\N$ by applying the bijection. Namely, for any $X\of B$ define $\bar\pi(X)=\pi[X]=\set{\pi(b)\mid b\in X}$. Since $\pi$ is injective, we have $X\of Y$ if and only if $\bar\pi(X)\of\bar\pi(Y)$, and the map $\bar\pi$ maps onto $P(\N)$ precisely because $\pi$ is surjective, so every $Z\of\N$ is $Z=\bar\pi(X)$, where $X=\pi^{\unaryminus 1}Z=\set{b\in B\mid \pi(b)\in Z}$. So $\bar\pi$ is an isomorphism of the lattice $P(B)$ with $P(\N)$, as desired.
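As a concrete illustration of this isomorphism (with a particular choice of $B$ and $\pi$, made just for the sketch), take $B$ to be the set of even numbers and $\pi(b)=b/2$; the induced map $\bar\pi$ carries subsets of $B$ to subsets of $\N$, preserving the subset relation and hitting every subset of $\N$.

```python
# Illustration of statement (2): B is the set of even numbers and pi(b) = b // 2
# is a bijection of B with N.  The induced map pi_bar sends a subset of B to its
# pointwise image, a subset of N.

def pi(b):
    """A bijection from B (the even numbers) onto N."""
    return b // 2

def pi_bar(X):
    """The induced lattice map, sending a subset of B to a subset of N."""
    return {pi(b) for b in X}

X, Y = {0, 4, 8}, {0, 2, 4, 6, 8}          # two subsets of B, with X a proper subset of Y
assert X < Y and pi_bar(X) < pi_bar(Y)     # the subset relation is preserved

Z = {0, 1, 2, 3}                           # an arbitrary subset of N ...
assert pi_bar({2 * n for n in Z}) == Z     # ... is the image of a subset of B, so pi_bar is onto
```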

For statement (1), consider any coinfinite set $A\of\N$ and let $A{\uparrow}$ be the part of the power set lattice above $A$, meaning $A{\uparrow}=\set{X\of\N\mid A\of X}$. Since the complement $B=\N\setminus A$ is infinite, there is a bijection of it with the whole set of natural numbers $\pi:B\to\N$. I claim that the lattice $A{\uparrow}$ is isomorphic with the lattice $P(B)$, which by statement (2) we know is isomorphic to $P(\N)$. To see this, we simply cast $A$ out of any set $X$ containing $A$. What remains is a subset of $B$, and every subset of $B$ will be realized. More specifically, if $A\of X$ we define $\tau(X)=X\setminus A=X\intersect B$, and observe that $X\of Y$ if and only if $\tau(X)\of\tau(Y)$, for the sets already containing $A$. And if $Z\of B$, then $Z=\tau(A\union Z)$. So $\tau$ is an isomorphism of the lattice $A{\uparrow}$ with $P(B)$, which by statement (2) is isomorphic to the whole lattice $P(\N)$, as desired.

Finally, let us fix two infinite coinfinite sets $A,B\of \N$. In particular, $A$ and $B$ are in bijection, as are the complements $\N\setminus A$ and $\N\setminus B$. By combining these two bijections, we get a bijection $\pi:\N\to\N$ that carries $A$ exactly to $B$, that is, for which $\pi[A]=B$. (The reader will have an exercise in my book to give a careful account of this construction.) Because $\pi$ is a bijection of $\N$ with itself, it induces a corresponding automorphism of the lattice, defined by $\bar\pi(X)=\pi[X]$ for $X\of\N$. This is an isomorphism of $P(\N)$ with itself, and $\bar\pi(A)=\pi[A]=B$, as desired. $\Box$
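Here is a small Python sketch of the construction in statement (3), with $A$ taken to be the even numbers and $B$ the multiples of three (my own choice of example, just for concreteness): the bijection $\pi$ matches the $n$th element of $A$ with the $n$th element of $B$ and likewise for the complements, and the induced map $X\mapsto\pi[X]$ is then an automorphism of the lattice carrying $A$ to $B$.

```python
from itertools import count, islice

# Illustrative example sets: A = even numbers, B = multiples of 3, both infinite
# and coinfinite.  The bijection pi matches the n-th element of A with the n-th
# element of B, and the n-th non-element of A with the n-th non-element of B.

in_A = lambda n: n % 2 == 0
in_B = lambda n: n % 3 == 0

def nth(pred, n):
    """The n-th natural number satisfying pred (0-indexed)."""
    return next(islice((k for k in count() if pred(k)), n, None))

def rank(pred, m):
    """How many naturals below m satisfy pred."""
    return sum(1 for k in range(m) if pred(k))

def pi(n):
    if in_A(n):
        return nth(in_B, rank(in_A, n))
    return nth(lambda k: not in_B(k), rank(lambda k: not in_A(k), n))

# On an initial segment, pi carries elements of A to elements of B, and
# non-elements of A to non-elements of B, as the combined bijection should.
segment = range(40)
assert all(in_B(pi(n)) for n in segment if in_A(n))
assert all(not in_B(pi(n)) for n in segment if not in_A(n))

# Being induced by a bijection of N with itself, the map X |-> pi[X] preserves
# and reflects the subset relation, so it is an automorphism of the lattice.
X, Y = {0, 2, 4}, {0, 2, 4, 6}
assert {pi(n) for n in X} < {pi(n) for n in Y}    # X properly below Y is preserved
```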

Statement (3) of the theorem shows that any two infinite coinfinite sets, being automorphic images of one another, play the same structural role in the lattice $P(\N)$—they will have all the same lattice-theoretic properties. In light of this theorem, therefore, one should not look for structure theorems concerning particular sets in the lattice. Rather, the structure theory of $P(\N)$ is about the structure as a whole, as with the theorems I have proved above.

This is a selection from my book in progress Topics in Logic. It appears in chapter 3, which is on relational logic, and which includes a section on orders with a subsection specifically on the lattice of all sets of natural numbers, because I find it fascinating.

The Tennenbaum phenomenon for computable quotient presentations of models of arithmetic and set theory, Shanghai, August 2021

This will be a talk for the conference Fudan Model Theory and Philosophy of Mathematics, held at Fudan University in Shanghai and online, 21-24 August 2021. My talk will take place on Zoom on 23 August 20:00 Beijing time (1pm BST).

Abstract. Tennenbaum famously proved that there is no computable presentation of a nonstandard model of arithmetic or indeed of any model of set theory. In this talk, I shall discuss the Tennenbaum phenomenon as it arises for computable quotient presentations of models. Quotient presentations offer a philosophically attractive treatment of identity, a realm in which questions of identity are not necessarily computable. Objects in the presentation serve in effect as names for objects in the final quotient structure, names that may represent the same or different items in that structure, but one cannot necessarily tell which. Bakhadyr Khoussainov outlined a sweeping vision for quotient presentations in computable model theory and made several conjectures concerning the Tennenbaum phenomenon. In this talk, I shall discuss joint work with Michał Godziszewski that settles and addresses several of these conjectures.

Naturality in mathematics and the hierarchy of consistency strength, University of Konstanz, July 2021

This is a talk for the Logik Kolloquium at the University of Konstanz, spanning the departments of mathematics, philosophy, linguistics, and computer science. The talk will take place on Zoom, 19 July 2021 at 15:15 CEST (2:15 pm BST).

Abstract: An enduring mystery in the foundations of mathematics is the observed phenomenon that our best and strongest mathematical theories seem to be linearly ordered and indeed well-ordered by consistency strength. For any two of the familiar large cardinal hypotheses, one of them generally proves the consistency of the other. Why should this be? Why should it be linear? Some philosophers see the phenomenon as significant for the philosophy of mathematics—it points us toward an ultimate mathematical truth. Meanwhile, the linearity phenomenon is not strictly true as mathematical fact, for we can prove that the hierarchy of consistency strength is actually ill-founded, densely ordered, and nonlinear. The counterexample statements and theories, however, are often dismissed as unnatural. Linearity is thus a phenomenon only for the so-called “naturally occurring” theories. But what counts as natural? Is there a mathematically meaningful account of naturality? In this talk, I shall criticize this notion of naturality, and attempt to undermine the linearity phenomenon by presenting a variety of natural hypotheses that reveal ill-foundedness, density, and incomparability in the hierarchy of consistency strength.

The talk should be generally accessible to university logic students.

What is the game of recursive chess?

Consider this fascinating vision of recursive chess — the etching below was created by Django Pinter, a tutorial student of mine who has just completed his degree in the PPL course here at Oxford, and it was given to me as a parting gift at the conclusion of his studies. Django’s idea was to play chess, but in order for a capture to occur successfully on the board, as here with the black Queen attempting to capture the opposing white knight, the two pieces would themselves sit down for their own game of (recursive) chess. The capture would be successful only in the event that the subgame was won. Notice in the image that not only is there a smaller recursive game of chess, but also a still tinier subrecursive game below that (if you inspect closely), while at the same time larger pieces loom in the background, indicating that the main board itself is already several levels deep in the recursion.

The recursive chess idea may seem clear enough initially, and intriguing. But upon further reflection, we might wonder: how does it work, exactly? What precisely is the game of recursive chess? This is my question here.

My goal with this post is to open a discussion about what ultimately should be the precise rules and operations of recursive chess. I’m not completely sure what the best rule set will be, although I do offer several proposals here. I welcome further proposals, commentary, suggestions, and criticism about how to proceed. Once we settle upon a best or most natural version of the game, then I shall hope to prove things about it. Will you help me decide what is the game of recursive chess?

Let me describe several natural proposals:

Naïve recursion. Although this seems initially to be a simple, sound proposal, ultimately I find it problematic. The naïve idea is that when one piece wants to capture another in the game at hand, then the two pieces themselves play a game of chess, starting from the usual chess starting position. I would find it natural that the attacking piece should play as white in this game, going first, and if that player wins the subgame, then the capture in the current game is successful. If the subgame is a loss, then the capture is unsuccessful.

There seem, however, to be a variety of ways to handle the losing subgame outcome, and since these will apply with several of the other proposals, let me record them here:

  • Failed-capture. If the subgame is lost, then the capture in the current game simply does not occur. Both pieces remain on their original squares, and the turn of play passes to the opponent. Notice that this will have serious effects in certain chess situations involving zugzwang, a position in which a player has no good move — because if one of the available moves is a capture, then the player can aim to play badly in the subgame and thereby legally pass the turn of play to the opponent without having made any move. This situation will in effect cause the subgame players to attempt to lose, rather than win, which seems undesirable.
  • Failed-capture-with-penalty. If the subgame is lost, then the capture does not occur, but furthermore, the attacking piece is itself lost, removed from the board, and the turn of play passes to the opponent. In effect, under this rule, every attempted capture puts the life of the capturing piece at risk, which makes a certain amount of sense from a military point of view. I think perhaps this is a good rule.
  • Failed-capture-with-retry. If the subgame is lost, then the capture does not occur, but both pieces remain on their original squares, and the attacking player is allowed to proceed with another (different) move. Attempting the same capture from the same board position multiple times is subject to the three-fold repetition rule. This interpretation amounts in effect to the players searching a large part of the game tree, exploring the possible capturing moves, but with the first successful option fixed as official. It invites manipulation by the opponent, who might play badly against a misguided capture attempt, causing it to be fixed as the official move.
  • Drawn subgame. A further complication arises from the fact that the subgame can itself end in a draw, rather than a win or a loss. Is a drawn subgame sufficient to cause the penalty or the retry? Or does it count merely as a failed capture?

As I see it, however, the principal problem with the naïve recursion rule is that it seems to be ill-founded. After all, we can have a game with a capture, which leads to a subgame with a capture, which leads to a deeper subgame with a capture, and so on descending forever. How is the outcome determined in this infinitely descending situation? None of the subgames is ever resolved with a definite conclusion until all of them are, and there seems no coherent way to assign resolutions to them. All infinitely many subgames are simply left hanging mid-play, and indeed mid-move. For this reason, the naïve recursion idea seems ultimately incoherent to me as a game rule.

What we would seem to need instead is a well-founded recursion, one which would ultimately bottom out in a base case. With such a recursion, every outcome of the game would be well-defined. Such a well-founded recursion would be achieved, for example, if every subgame had strictly fewer pieces. Eventually, the subgames would reduce to king versus king, a drawn game, and then the drawn-subgame rule would be invoked, to whatever effect it has. But the recursion would definitely terminate. And perhaps most recursions would terminate because the stronger player was ultimately mating in all his attacks, without requiring any invocation of the drawn-subgame rule.

We can easily describe several natural subgame positions with one fewer piece. For example, when one piece attacks another, we may naturally consider the positions that would result if we performed the capture, or if we removed the attacking piece; and we might further consider swapping the roles of the players in these positions. Such cases would constitute a well-founded recursion, because the subgame position would have fewer pieces than the main position. In this way, we arrive at several natural recursion rules for recursive chess.

Proof-of-sufficiency recursion. The motivating idea of this recursion rule is that in order for an attack to be successful, the attacking player must demonstrate that the success of the attack would be sufficient for winning. So, when piece A attacks piece B in the game, a subgame is set up from the position that would result if A were successfully to capture B, and the players proceed from that position, the one in which the attack has occurred. This is the same as proceeding in the main game, under the assumption that the attack was successful. If the attacking player can win this subgame, then this shows in a sense the sufficiency of the attack as a means to win, and so on the proof-of-sufficiency idea, we allow this as a reason for the attack to have been successful.

One might object that this recursion seems at first to amount to just playing chess as usual, since the subgame is the same as the original game with the attack having succeeded. But there is a subtle difference. For a misguided attack, the attacked player can play suboptimally in the subgame, intentionally losing that game, and then play differently in the main game. There is, of course, no obligation that the players respond the same way in the higher-level games as in the lower games, and this is all part of their strategic calculation.

Proof-of-necessity recursion. The motivating idea of this recursion rule, in contrast, is that in order for an attack to be successful, the attacking player must prove that it is necessary that the attack take place. When piece A attacks piece B in the main game, a subgame is set up in which the attack has not succeeded; instead, the attacking piece is lost, and the color sides of the players are swapped. If a black Queen attacks a white knight, for example, then in the subgame position the queen is removed, and the players proceed from that position, but with the opposite colors. By winning this subgame, the attacking player is in effect demonstrating that if the attack were to fail, then it would be devastating for them. In other words, they are demonstrating the necessity of the success of the attack.
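To make these two proposals concrete, here is a minimal sketch, assuming the python-chess library (an assumption of mine; nothing above depends on it). It makes no attempt to solve any subgames; it merely constructs the two kinds of subgame positions just described and verifies the key point of the earlier discussion, namely that each subgame position has strictly fewer pieces than its parent, which is what makes the recursion well-founded.

```python
# A sketch only: constructing the proof-of-sufficiency and proof-of-necessity
# subgame positions for a given capture attempt, using the python-chess library.
import chess

def sufficiency_subgame(board: chess.Board, capture: chess.Move) -> chess.Board:
    """Proof-of-sufficiency: the position in which the capture has succeeded."""
    sub = board.copy()
    sub.push(capture)                          # play the capture; the defender moves next
    return sub

def necessity_subgame(board: chess.Board, capture: chess.Move) -> chess.Board:
    """Proof-of-necessity: the attacker is removed instead; the players then swap
    colours (a matter of who plays which side, not a change to the board itself)."""
    sub = board.copy()
    sub.remove_piece_at(capture.from_square)   # the attacking piece is lost
    sub.turn = not sub.turn                    # play passes to the other side
    return sub

board = chess.Board()
board.push_san("e4"); board.push_san("d5")     # after 1. e4 d5, exd5 is a capture attempt

for move in board.legal_moves:
    if board.is_capture(move):
        for sub in (sufficiency_subgame(board, move), necessity_subgame(board, move)):
            # Each subgame position has one piece fewer, so the recursion terminates.
            assert len(sub.piece_map()) == len(board.piece_map()) - 1
```

Any actual play of recursive chess would, of course, still need one of the failed-capture rules from the list above to interpret a lost or drawn subgame.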

For both the proof-of-sufficiency and the proof-of-necessity versions of the recursion, it seems to me that any of the three failed-capture rules would be sensible. And so we have quite a few different interpretations here for what is the game of recursive chess.

What is your proposal? Please let me know in the comments. I am interested to hear any comments or criticism.

Categorical set theories, Munich, June 2021

This is a talk for the group in logic and philosophy of language at the Munich Center for Mathematical Philosophy.

24 June 2021, 4:15 pm Munich time (3:15 pm BST), on Zoom at 925 6562 2309 (contact Ursula Danninger, office.leitgeb@lrz.uni-muenchen.de, for the password).

Abstract. Zermelo famously proved that second-order ZFC is quasi-categorical—the models of this theory are precisely the rank-initial segments of the set-theoretic universe cut off at an inaccessible cardinal. Which are the fully categorical extensions of this theory? This question gives rise to the notion of categorical large cardinals, and opens the door to several puzzling philosophical issues, such as the conflict between categoricity as a fundamental value in mathematics and reflection principles in set theory. (This is joint work with Robin Solberg, Oxford.)

Plenitudinous Primes!

A proof of the infinitude of primes, in meter.

Let’s prove the primes’ infinitude
they do exist in multitude

of quite astounding magnitude
we count the primes in plenitude!

‘Twas proved by Euclid way back when
in Elements he argued then

He proved it with exactitude
in his Book IX he did conclude

for any given finite list
there will be primes the list has missed

to prove this gem let number N
be when you multiply them then

multiply the list for fun,
multiply, and then add one

If into N we shall divide
the listed primes seen in that guide

remainder one each leaves behind
so none as factors could we find

but every number has some prime factor
with our N take such an actor

since it can’t be on the list
we thus deduce more primes exist

No finite list can be complete
and so the claim is in retreat

By iterating this construction
primes do flow in vast deduction

as many as we’d like to see
so infinite the primes must be

Thus proved the primes’ infinitude
in quite enormous multitude

of arb’trarly great magnitude
we count the primes in plenitude!

‘Twas known by Euclid way back then
he proved it in the Elements when

in modern times we know some more
math’s never done, there’s new folklore:

For Bertrand, Chebyshev had said
about the puzzle in his head

and Erdős said it once again:
there’s always a prime between n and 2n.

We proved the primes’ infinitude
they do exist in multitude

of inconceiv’ble magnitude
we count the primes in plenitude!

The primes are plenitudinous
they’re truly multitudinous!


Musical video production of Plenitudinous Primes by the supremely talented Hannah Hoffman!