The universal finite set, Rutgers Logic Seminar, April 2018

This will be a talk for the Rutgers Logic Seminar, April 2, 2018. Hill Center, Busch campus.

Abstract. I shall define a certain finite set in set theory $\{x\mid\varphi(x)\}$ and prove that it exhibits a universal extension property: it can be any desired particular finite set in the right set-theoretic universe and it can become successively any desired larger finite set in top-extensions of that universe. Specifically, ZFC proves the set is finite; the definition $\varphi$ has complexity $\Sigma_2$ and therefore any instance of it $\varphi(x)$ is locally verifiable inside any sufficient $V_\theta$; the set is empty in any transitive model and others; and if $\varphi$ defines the set $y$ in some countable model $M$ of ZFC and $y\subset z$ for some finite set $z$ in $M$, then there is a top-extension of $M$ to a model $N$ in which $\varphi$ defines the new set $z$.  The definition can be thought of as an idealized diamond sequence, and there are consequences for the philosophical theory of set-theoretic top-extensional potentialism.

This is joint work with W. Hugh Woodin.

Determinacy for open class games is preserved by forcing, CUNY Set Theory Seminar, April 2018

This will be a talk for the CUNY Set Theory Seminar, April 27, 2018, GC Room 6417, 10-11:45am (please note corrected date).

Abstract. Open class determinacy is the principle of second-order set theory asserting of every two-player game of perfect information, with plays coming from a (possibly proper) class $X$ and the winning condition determined by an open subclass of $X^\omega$, that one of the players has a winning strategy. This principle sits roughly midway up the hierarchy of second-order set theories between Gödel-Bernays set theory and Kelley-Morse, a bit stronger than the principle of elementary transfinite recursion ETR, which is equivalent to clopen determinacy, but weaker than GBC+$\Pi^1_1$-comprehension. In this talk, I’ll give an account of my recent joint work with W. Hugh Woodin, proving that open class determinacy is preserved by forcing. A central part of the proof is to show that in any forcing extension of a model of open class determinacy, every well-founded class relation in the extension is ranked by a ground-model well-order relation. This work therefore fits into the emerging focus in set theory on the interaction of fundamental principles of second-order set theory with fundamental set-theoretic tools, such as forcing. It remains open whether clopen determinacy, or equivalently ETR, is preserved by set forcing, even in the case of the forcing merely to add a Cohen real.

Nonamalgamation in the Cohen generic multiverse, CUNY Logic Workshop, March 2018

This will be a talk for the CUNY Logic Workshop on March 23, 2018, GC 6417 2-3:30pm.

Abstract. Consider a countable model of set theory $M$ in the context of all its successive forcing extensions and grounds. This generic multiverse has long been known to exhibit instances of nonamalgamation: one can have two extensions $M[c]$ and $M[d]$, each adding merely a generic Cohen real, which have no further extension in common. In this talk, I shall describe new joint work that illuminates the extent of nonamalgamation: every finite partial order (and more) embeds into the generic multiverse over any given model in a way that preserves amalgamability and nonamalgamability. The proof uses the set-theoretic blockchain argument, which has affinities with constructions in the Turing degrees in computability theory. Other arguments, which also resemble counterparts in computability theory, show that the generic multiverse exhibits the exact pair phenomenon for increasing chains. This is joint work with Miha Habič, Lukas Daniel Klausner, and Jonathan Verner. The paper will be available this spring.

Nonstandard models of arithmetic arise in the complex numbers

I’d like to explain that one may find numerous nonstandard models of arithmetic as substructures of the field of complex numbers.

The issue arose yesterday at Hans Schoutens’s talk for the CUNY Logic Workshop. The main focus of the talk was the question, for a given algebraically closed field $k$ of characteristic zero and a given model of arithmetic $\Gamma\models$PA, whether $\Gamma$ and $k$ were jointly realizable as the set of powers (as he defines it) and the set of units of a model $S$ of the generalized theory of polynomial rings over fields. Very interesting stuff.

During the talk, a side question arose, concerning exactly which models of PA arise as substructures of the field of complex numbers.

Question. Which models of PA arise as substructures of the field of complex numbers $\langle\mathbb{C},+,\cdot\rangle$?

Of course the standard model $\mathbb{N}$ arises this way, and some people thought at first it should be difficult to realize nonstandard models of PA as substructures of $\mathbb{C}$. After some back and forth, the question was ultimately answered by Alfred Dolich in the pub after the seminar, and I’d like to give his argument here (but see the Mlček reference below).  This is a case where a problem that was initially confusing becomes completely clear!

Theorem. Every model of PA of size at most continuum arises as a sub-semiring of the field of complex numbers $\langle\mathbb{C},+,\cdot\rangle$.

Proof. Suppose that $M$ is a model of PA of size at most continuum. Inside $M$, we may form $M$’s version of the algebraic numbers $A=\bar{\mathbb{Q}}^M$, the field that $M$ thinks is the algebraic closure of its version of the rationals. So $A$ is an algebraically closed field of characteristic zero, which has an elementary extension to such a field of size continuum. Since the theory of algebraically closed fields of characteristic zero is categorical in all uncountable powers, it follows that $A$ is isomorphic to a submodel of $\mathbb{C}$. Since $M$ itself is isomorphic to a substructure of its rationals $\mathbb{Q}^M$, which sit inside $A$, it follows that $M$ is isomorphic to a substructure of $\mathbb{C}$, as claimed. QED

In particular, every countable model of PA can be found as a substructure of the complex numbers.

Essentially the same argument shows the following.

Theorem. If $k$ is an uncountable algebraically closed field of characteristic zero, then every model of arithmetic $M\models$PA of size at most the cardinality of $k$ embeds into $k$.

I’ve realized that the same collection of ideas shows the following striking way to look upon the complex numbers:

Theorem. The complex numbers $\mathbb{C}$ can be viewed as a nonstandard version of the algebraic numbers $\bar{\mathbb{Q}}^M$ inside a nonstandard model $M$ of PA. Indeed, for every uncountable algebraically closed field $F$ of characteristic zero and every model of arithmetic $M\models$PA of the same cardinality, the field $F$ is isomorphic to the nonstandard algebraic numbers $\bar{\mathbb{Q}}^M$ as $M$ sees them.

Proof. Fix any such field $F$, such as the complex numbers themselves, and consider any nonstandard model of arithmetic $M$ of the same cardinality. The field $\bar{\mathbb{Q}}^M$, which is $M$’s nonstandard version of the algebraic numbers, is an algebraically closed field of characteristic zero of the same uncountable size as $F$. By categoricity, these fields are isomorphic. $\Box$

I had suspected that these results were folklore in the model-theoretic community, and it has come to my attention that proper credit for the main observation of this post seems to be due to Jozef Mlček, who proved it in 1973. Thanks to Jerome Tauber for the reference, which he provided in the comments.

Modal principles of potentialism, Oxford, January 2018

This was a talk I gave at University College Oxford to the philosophy faculty.

Abstract. One of my favorite situations occurs when philosophical ideas or issues inspire a bit of mathematical analysis, which in turn raises further philosophical questions and ideas, in a fruitful cycle. The topic of potentialism originates, after all, in the classical dispute between actual and potential infinity. Linnebo and Shapiro and others have emphasized the modal nature of potentialism, de-coupling it from infinity: the essence of potentialism is about approximating a larger universe or structure by means of partial structures or universe fragments. In several mathematical projects, my co-authors and I have found the exact modal validities of several natural potentialist concepts arising in the foundations of mathematics, including several kinds of set-theoretic and arithmetic potentialism. Ultimately, the variety of kinds of potentialism suggests a refocusing of potentialism on the issue of convergent inevitability in comparison with radical branching. I defended the theses, first, that convergent potentialism is implicitly actualist, and second, that we should understand ultrafinitism in modal terms as a form of potentialism, one with surprising parallels to the case of arithmetic potentialism.

Here are my lecture notes that I used as a basis for the talk:

For a fuller, more technical account of potentialism, see the three-lecture tutorial series I gave for the Logic Winter School 2018 in Hejnice: Set-theoretic potentialism, and follow the link to the slides.

The subseries number

  • J. Brendle, W. Brian, and J. D. Hamkins, “The subseries number,” ArXiv e-prints, 2018. (manuscript under review)  

Abstract. Every conditionally convergent series of real numbers has a divergent  subseries. How many subsets of the natural numbers are needed so that every conditionally convergent series diverges on the subseries corresponding to one of these sets? The answer to this question is defined to be the subseries number, a new cardinal characteristic of the continuum. This cardinal is bounded below by $\aleph_1$ and  above by the cardinality of the continuum, but it is not provably equal to either. We define three natural variants of the subseries number, and compare them with each other, with their corresponding rearrangement numbers, and with several well-studied cardinal characteristics of the continuum. Many consistency results are obtained from these comparisons, and we obtain another by computing the value of the subseries number in the Laver model.
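As a quick numeric illustration of the opening fact (my own, not from the paper): the alternating harmonic series $1-\frac12+\frac13-\frac14+\cdots$ converges conditionally to $\ln 2$, while the subseries of its positive terms $1+\frac13+\frac15+\cdots$ diverges. A minimal Python sketch:

```python
def alt_harmonic(k):
    """The k-th term of the alternating harmonic series, k = 1, 2, 3, ..."""
    return (-1) ** (k + 1) / k

# partial sum of the full series: converges to ln 2 ~ 0.6931
full = sum(alt_harmonic(k) for k in range(1, 100001))

# partial sum of the positive-term subseries (odd k): grows without bound
positive_subseries = sum(alt_harmonic(k) for k in range(1, 100001, 2))

print(full)                 # close to ln 2 ~ 0.6931
print(positive_subseries)   # already exceeds 6, and grows like (ln N)/2
```

The subseries number asks how many subsets of $\mathbb{N}$ suffice to find such a divergent subseries for every conditionally convergent series at once.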

This paper grew naturally out of our previous paper, The rearrangement number, which considered the minimal number of permutations of $\mathbb{N}$ which suffice to reveal the conditional convergence of all conditionally convergent series. I had defined the subseries number in my answer to a MathOverflow question, On Hamkins’s answer to a question of Michael Hardy’s, asked by M. Rahman in response to the earlier MO questions on the rearrangement number.

In the paper, we situate the subseries number ß (German sharp s) with respect to other cardinal characteristics, including the rearrangement numbers.

Corey Switzer, The Cichoń Diagram for Degrees of Relative Constructibility

My student Corey Switzer has just completed a paper:

Corey Switzer, The Cichoń Diagram for Degrees of Relative Constructibility, ArXiv e-print:1801.06497.


Abstract. Following a line of research initiated in Brendle/Brooke-Taylor/Ng/Nies, I describe a general framework for turning reduction concepts of relative computability into diagrams forming an analogy with the Cichoń diagram for cardinal characteristics of the continuum. I show that working from relatively modest assumptions about a notion of reduction, one can construct a robust version of such a diagram. As an application, I define and investigate the Cichoń Diagram for degrees of constructibility relative to a fixed inner model W. Many analogies hold with the classical theory as well as some surprising differences. Along the way I introduce a new axiom stating, roughly, that the constructibility diagram is as complex as possible.

This interesting paper concerns a generalization of the Cichoń diagram to arbitrary reducibility notions, focusing on the case of the constructibility degrees, or somewhat more generally, relative constructibility $\leq_W$ over a fixed inner model $W$.

The classes are defined by an abstract generalization of the ideas underlying the familiar cardinal characteristics and the classical Cichoń diagram. Namely, $B_{\leq}(R)$ is the set of reals that $\leq$-build a witness for the fact that the reals of $W$ are small with respect to the relation $R$, that is, an $R$-bound for the reals of $W$; and $D_{\leq}(R)$ is the set of reals that $\leq$-build a witness for the fact that the reals of $W$ are not big with respect to the relation $R$, that is, a real that is not $R$-dominated by any real in $W$.

These classes fit together in a way that forms a robust analogy with the classical Cichoń diagram.

In his paper, Corey proves that the diagram is complete with respect to the inclusions indicated, by analyzing the nature of the diagram in various forcing extensions of $W$, such as the following.


In the end, he shows that in a suitable (proper) forcing extension, one can achieve all the separations simultaneously.

Indeed, the assertion that all separations are attained can be taken as a set-theoretic principle or axiom of its own, the complete Cichoń diagram assertion CD. He proves, for example, that CD is a consequence of the maximality principle.

See Corey’s blog post about his paper.

The modal logic of arithmetic potentialism and the universal algorithm

  • J. D. Hamkins, “The modal logic of arithmetic potentialism and the universal algorithm,” ArXiv e-prints, pp. 1-35, 2018. (under review)  

Abstract. Natural potentialist systems arise from the models of arithmetic when they are considered under their various natural extension concepts, such as end-extensions, arbitrary extension, $\Sigma_n$-elementary extensions, conservative extensions and more. For these potentialist systems, I prove, a propositional modal assertion is valid in a model of arithmetic, with respect to assertions in the language of arithmetic with parameters, exactly when it is an assertion of S4. Meanwhile, with respect to sentences, the validities of a model are always between S4 and S5, and these bounds are sharp in that both endpoints are realized. The models validating exactly S5 are the models of the arithmetic maximality principle, which asserts that every possibly necessary statement is already true, and these models are equivalently characterized as those satisfying a maximal $\Sigma_1$ theory. The main proof makes fundamental use of the universal algorithm, of which this article provides a self-contained account.


In this article, I consider the models of arithmetic under various natural extension concepts, including end-extensions, arbitrary extensions, $\Sigma_n$-elementary extensions, conservative extensions and more. Each extension concept gives rise to an arithmetic potentialist system, a Kripke model of possible arithmetic worlds, and the main goal is to discover the modal validities of these systems.

For most of the extension concepts, a modal assertion is valid with respect to assertions in the language of arithmetic, allowing parameters, exactly when it is an assertion of the modal theory S4. For sentences, however, the modal validities form a theory between S4 and S5, with both endpoints being realized. A model of arithmetic validates S5 with respect to sentences just in case it is a model of the arithmetic maximality principle, and these models are equivalently characterized as those realizing a maximal $\Sigma_1$ theory.

The main argument relies fundamentally on the universal algorithm, the theorem due to Woodin that there is a Turing machine program that can enumerate any finite sequence in the right model of arithmetic, and furthermore this model can be end-extended so as to realize any further extension of that sequence available in the model. In the paper, I give a self-contained account of this theorem using my simplified proof.

The paper concludes with philosophical remarks on the nature of potentialism, including a discussion of how the linear inevitability form of potentialism is actually much closer to actualism than the more radical forms of potentialism, which exhibit branching possibility. I also propose to view the philosophy of ultrafinitism in modal terms as a form of potentialism, pushing the issue of branching possibility in ultrafinitism to the surface.

Discussion of McCallum’s paper on Reinhardt cardinals in ZF

Update: Rupert has withdrawn his claim. See the final bullet point below.

Rupert McCallum has posted a new paper to the mathematics arXiv

Rupert McCallum, The choiceless cardinals are inconsistent, mathematics arXiv 2017: 1712.09678.

He is claiming to establish the Kunen inconsistency in ZF, without the axiom of choice, which is a long-standing open question. In particular, this would refute the Reinhardt cardinals in ZF and all the stronger ZF large cardinals that have been studied.

If correct, this result will constitute a central advance in large cardinal set theory.

I am making this post to provide a place to discuss the proof and any questions that people might have about it. Please feel free to post comments with questions or answers to other questions that have been posted. I will plan to periodically summarize things in the main body of this post as the discussion proceeds.

  • My first question concerns lemma 0.4, where he claims that $j’\upharpoonright V_{\lambda+2}^N$ is a definable class in $N$. He needs this to get the embedding into $N$, but I don’t see why the embedding should be definable here.
  • I wrote to Rupert about this concern, and he replied that it may be an issue, and that he intends to post a new version of his paper, where he may retreat to the weaker claim refuting only the super-Reinhardt cardinals.
  • The updated draft is now available. Follow the link above. It will become also available on the arXiv later this week.
  • The second January 2 draft has a new section claiming again the original refutation of Reinhardt cardinals.
  • New draft January 3. Rupert has reportedly been in communication with Matteo Viale about his result.
  • Rupert has announced (Jan 3) that he is going to take a week or so to produce a careful rewrite.
  • He has made available his new draft, January 7. It will also be posted on the arXiv.
  • January 8:  In light of the issues identified on this blog, especially the issue mentioned by Gabe, Rupert has sent me an email stating (and asking me to post here) that he is planning to think it through over the next couple of weeks and will then make some kind of statement about whether he thinks he can save the argument.  For the moment, therefore, it seems that we should consider the proof to be on hold.
  • January 24: After consideration, Rupert has withdrawn the claim, sending me the following message:

    “Gabriel has very kindly given me extensive feedback on many different drafts. I attach the latest version which he commented on [January 24 draft above]. He has identified the flaw, namely that on page 3 I claim that $\exists n \forall Y \in W_n \psi(Y)$ if and only if $\forall Y \in U \psi(Y)$. This claim is not justified, and this means that there is no way that is apparent to me to rescue the proof of Lemma 1.2. Gabriel has directed me to a paper of Laver which does indeed show that my mapping e is an elementary embedding but which does not give the stronger claim that I want.


    …So, I withdraw my claim. It is possible that this method of proof can work somehow, but some new insight is needed to make it work.”


    -Rupert McCallum, January 24, 2018

On the strengths of the class forcing theorem and clopen class game determinacy, Prague set theory seminar, January 2018

This will be a talk for the Prague set theory seminar, January 24, 11:00 am to about 2pm (!).

Abstract. The class forcing theorem is the assertion that every class forcing notion admits corresponding forcing relations. This assertion is not provable in Zermelo-Fraenkel ZFC set theory or Gödel-Bernays GBC set theory, if these theories are consistent, but it is provable in stronger second-order set theories, such as Kelley-Morse KM set theory. In this talk, I shall discuss the exact strength of this theorem, which turns out to be equivalent to the principle of elementary transfinite recursion ETRord for class recursions on the ordinals. The principle of clopen determinacy for class games, in contrast, is strictly stronger, equivalent over GBC to the full principle of ETR for class recursions over arbitrary class well-founded relations. These results and others mark the beginnings of the emerging subject I call the reverse mathematics of second-order set theory.

The exact strength of the class forcing theorem | Open determinacy for class games

Self reference in computability theory and the universal algorithm, Ouroboros: Formal Criteria of Self-Reference in Mathematics and Philosophy, Bonn, February 2018

This will be a talk for the conference: Ouroboros: Formal Criteria of Self-Reference in Mathematics and Philosophy, held in Bonn, February 16-18, 2018.

Abstract. I shall give an elementary account of the universal algorithm, due to Woodin, showing how the capacity for self-reference in arithmetic gives rise to a Turing machine program $e$, which provably enumerates a finite set of numbers, but which can in principle enumerate any finite set of numbers, when it is run in a suitable model of arithmetic. Furthermore, the algorithm can successively enumerate any desired extension of the sequence, when run in a suitable top-extension of the universe. Thus, the algorithm sheds some light on the debate between free will and determinism, if one should imagine extending the universe into a nonstandard time scale. An analogous result holds in set theory, where Woodin and I have provided a universal locally definable finite set, which can in principle be any finite set, in the right universe, and which can furthermore be successively extended to become any desired finite superset of that set in a suitable top-extension of that universe.

Ouroboros Bonn 2018 Conference Poster | Slides

The universal algorithm and the universal finite set, Prague 2018

This will be a talk at the Prague Gathering of Logicians & Beauty of Logic 2018, January 25-27, 2018.

Abstract. The universal algorithm is a Turing machine program $e$ that can in principle enumerate any finite sequence of numbers, if run in the right model of PA, and furthermore, can always enumerate any desired extension of that sequence in a suitable end-extension of that model. The universal finite set is a $\Sigma_2$ definition that can in principle define any finite set, in the right model of set theory, and can always define any desired finite extension of that set in a suitable top-extension of that model. I shall give an account of both results and describe applications to the model theory of arithmetic and set theory.

Slides | Lecture notes

Set-theoretic potentialism, Winter School in Abstract Analysis 2018, Hejnice, Czech Republic

This will be a tutorial lecture series for the Winter School in Abstract Analysis 2018, held in Hejnice of the Czech Republic.

Abstract. I shall introduce and develop the theory of set-theoretic potentialism. A potentialist system is a collection of first-order structures, all in the same language $\mathcal{L}$, equipped with an accessibility relation refining the inclusion relation. Any such system, viewed as an inflationary-domain Kripke model, provides a natural interpretation for the modal extension of the underlying language $\mathcal{L}$ to include the modal operators. We seek to understand a given potentialist system by analyzing which modal assertions are valid in it.

Set theory exhibits an enormous variety of natural potentialist systems. For example, with forcing potentialism, one considers the models of set theory, each accessing its forcing extensions; with rank potentialism, one considers the collection of rank-initial segments $V_\alpha$ of a given set-theoretic universe; with Grothendieck-Zermelo potentialism, one has the collection of $V_\kappa$ for (a proper class of) inaccessible cardinals $\kappa$; with top-extensional potentialism, one considers the collection of countable models of ZFC under the top-extension relation; and so on with many other natural examples.

In this tutorial, we shall settle the precise potentialist validities of each of these potentialist systems and others, and we shall develop the general tools that enable one to determine the modal theory of a given potentialist system. Many of these arguments proceed by building connections between certain sweeping general features of the models in the potentialist system and certain finite combinatorial objects such as trees or lattices. A key step involves finding certain kinds of independent control statements — buttons, switches, ratchets and rail-switches — in the collection of models.



Here is an interesting game I heard a few days ago from one of my undergraduate students; I’m not sure of the provenance.

The game is played with stones on a grid, which extends indefinitely upward and to the right, like the lattice $\mathbb{N}\times\mathbb{N}$.  The game begins with three stones in the squares nearest the origin at the lower left.  The goal of the game is to vacate all stones from those three squares. At any stage of the game, you may remove a stone and replace it with two stones, one in the square above and one in the square to the right, provided that both of those squares are currently unoccupied.

For example, here is a sample play.

Question. Can you play so as completely to vacate the yellow corner region?

One needs only to move the other stones out of the way so that the corner stones have room to move out. Can you do it? It isn’t so easy, but I encourage you to try.

Here is an online version of the game that I coded up quickly in Scratch: Escape!

My student mentioned the problem to me and some other students in my office on the day of the final exam, and we puzzled over it, but then it was time for the final exam. So I had a chance to think about it while giving the exam and came upon a solution. I’ll post my answer later on, but I’d like to give everyone a chance to think about it first.

Solution. Here is the solution I hit upon, and it seems that many others also found this solution. The main idea is to assign an invariant to the game positions. Let us assign weights to the squares in the lattice according to the following pattern. We give the corner square weight $1/2$, the next diagonal of squares $1/4$ each, and then $1/8$, and so on throughout the whole playing board. Every square should get a corresponding weight according to the indicated pattern.

The weights are specifically arranged so that making a move in the game preserves the total weight of the occupied squares. That is, the total weight of the occupied squares is invariant as play proceeds, because moving a stone with weight $1/2^k$ will create two stones of weight $1/2^{k+1}$, which adds up to the same. Since the original three stones have total weight $\frac 12+\frac14+\frac14=1$, it follows that the total weight remains $1$ after every move in the game.

Meanwhile, let us consider the total weight of all the squares on the board. If you consider the bottom row only, the weights add to $\frac12+\frac14+\frac18+\cdots$, which is the geometric series with sum $1$. The next row has total weight $\frac14+\frac18+\frac1{16}+\cdots$, which adds to $1/2$. And the next adds to $1/4$ and so on. So the total weight of all the squares on the board is $1+\frac12+\frac14+\cdots$, which is $2$.  Since the $k$th diagonal consists of $k$ squares, each of weight $1/2^k$, another way to think about it is that we are essentially establishing the sum $\sum_k\frac k{2^k}=2$.

The subtle conclusion is that after any finite number of moves, only finitely many of those other squares are occupied, and so some of them remain empty. So after only finitely many moves, the total weight of the occupied squares off of the original L-shape is strictly less than $1$. Since the total weight of all the occupied squares is exactly $1$, this means that the L-shape has not been vacated.

So it is impossible to vacate the original L-shape in finitely many moves. $\Box$
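The invariant is easy to check by machine. Here is a minimal Python sketch (my own illustration of the argument, not part of the original puzzle) that plays random legal moves from the starting position and confirms that the total weight of the occupied squares remains exactly $1$ throughout:

```python
import random
from fractions import Fraction

def weight(square):
    """Weight 1/2 on the corner, 1/4 on the next diagonal, and so on."""
    x, y = square
    return Fraction(1, 2 ** (x + y + 1))

def legal_moves(stones):
    """A stone may move if both its upper and right neighbors are empty."""
    return [(x, y) for (x, y) in stones
            if (x + 1, y) not in stones and (x, y + 1) not in stones]

def play_random(steps, seed=0):
    rng = random.Random(seed)
    stones = {(0, 0), (1, 0), (0, 1)}   # the three starting stones, weight 1
    for _ in range(steps):
        x, y = rng.choice(legal_moves(stones))
        stones.remove((x, y))
        stones |= {(x + 1, y), (x, y + 1)}
        # the invariant: total weight of occupied squares is always exactly 1
        assert sum(weight(s) for s in stones) == 1
    return stones

final = play_random(200)
print(sum(weight(s) for s in final) == 1)   # True
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about the invariant.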

Suppose that we relax the one-stone-per-square requirement, allowing you to stack several stones on a single square during play, provided that they are eventually unstacked. Can you play this stacked version of the game so as to vacate the original three squares?

No, it is impossible! And the proof is the same invariant-weight argument as above. The invariance argument does not rely on the one-stone-per-square rule during play, since it is still an invariant if one multiplies the weight of a square by the number of stones resting upon it. So we cannot transform the original stones, with total weight $1$, to any finite number of stones on the rest of the board (with one stone per square in the final position), since those other squares do not have sufficient weight to add up to $1$, even if we allow them to be stacked during intermediate stages of play.

Meanwhile, let us consider playing the game on a finite $n\times n$ board, with the rule modified so that stones that would be created in row or column $n+1$ in the infinite game simply do not materialize in the $n\times n$ game. This breaks the proof, since the weight is no longer an invariant for moves on the outer edges. Furthermore, one can win this version of the game. It is easy to see that one can systematically vacate all stones on the upper and outer edges, simply by moving any of them that is available, pushing the remaining stones closer to the outer corner and into oblivion. Similarly, one can vacate the penultimate outer edges, by doing the same thing, which will push stones into the outer edges, which can then be vacated. By reverse induction from the outer edges in, one can vacate every single row and column. Thus, for play on this finite board with the modified rule on the outer edges, one can vacate the entire $n\times n$ board!

Indeed, in the finite $n\times n$ version of the game, there is no way to lose! If one simply continues making legal moves as long as this is possible, then the board will eventually be completely vacated. To see this, notice first that if there are stones on the board, then there is at least one legal move. Suppose toward contradiction that we could make an infinite sequence of legal moves on the $n\times n$ board. Since there are only finitely many squares, some square must be moved upon infinitely often. Consider such a square closest to the origin (or of minimal weight in the scheme of weights above). Since the lower squares are moved upon only finitely often, eventually the given square will be replenished for the last time, after which it can be moved upon at most once more. So it cannot have been moved upon infinitely often after all. (Alternatively, argue by induction on squares from the lower left that each is moved upon at most finitely often.)

Indeed, I claim that the number of steps to win, vacating the $n\times n$ board, does not depend on the order of play. One can see this by thinking about the path of a given stone and its clones through the board, ignoring the requirement that a given square carries only one stone. That is, let us make all the moves in parallel time. Since there is no interaction between the stones that would otherwise interfere, it is clear that the total number of stones appearing on a given square is independent of the order of play. A tenacious person could calculate the sum exactly: each square becomes occupied, in total, by a number of stones equal to the number of grid paths to it from one of the original three stones, and one could use this sum to calculate the total length of play on the $n\times n$ board.
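One can also check these claims experimentally. The following Python sketch (my own, under the modified rule that stones pushed past the edge simply do not materialize) plays the $n\times n$ game with randomly chosen move orders; every maximal play empties the board, and the number of moves comes out the same each time:

```python
import random

def play(n, seed):
    """Play the n-by-n variant with random legal moves until none remain.

    Stones that would be created beyond row or column n do not materialize.
    Returns the final set of stones and the number of moves made."""
    rng = random.Random(seed)
    stones = {(0, 0), (1, 0), (0, 1)}
    moves = 0
    while stones:
        legal = [(x, y) for (x, y) in stones
                 if (x + 1, y) not in stones and (x, y + 1) not in stones]
        x, y = rng.choice(legal)   # a legal move always exists while stones remain
        stones.remove((x, y))
        for cx, cy in ((x + 1, y), (x, y + 1)):
            if cx < n and cy < n:  # off-board stones vanish
                stones.add((cx, cy))
        moves += 1
    return stones, moves

# every random order vacates the board in the same number of moves
for seed in range(5):
    final, count = play(6, seed)
    print(len(final), count)
```

Note that a legal move always exists while stones remain: a stone maximizing $x+y$ (and, among those, $x$) has both children unoccupied, since any stone on a child square would have a larger coordinate sum.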

Still don’t know, an epistemic logic puzzle

Here is an epistemic logic puzzle that I wrote for my students in the undergraduate logic course I have been teaching this semester at the College of Staten Island at CUNY.  We had covered some similar puzzles in lecture, including Cheryl’s Birthday and the blue-eyed islanders.

Bonus Question. Suppose that Alice and Bob are each given a different fraction, of the form $\frac{1}{n}$, where $n$ is a positive integer, and it is commonly known to them that they each know only their own number and that it is different from the other one. The following conversation ensues.


JDH: I privately gave you each a different rational number of the form $\frac{1}{n}$. Who has the larger number?

Alice: I don’t know.

Bob: I don’t know either.

Alice: I still don’t know.

Bob: Suddenly, now I know who has the larger number.

Alice: In that case, I know both numbers.

What numbers were they given?

Give the problem a try! See the solution posted below.

Meanwhile, for a transfinite epistemic logic challenge — considerably more difficult — see my puzzle Cheryl’s rational gifts.












When Alice says she doesn’t know, in her first remark, the meaning is exactly that she doesn’t have $\frac 11$, since that is the only way she could have known who had the larger number.  When Bob replies after this that he doesn’t know, then it must be that he also doesn’t have $\frac 11$, but also that he doesn’t have $\frac 12$, since in either of these cases he would know that he had the largest number, but with any other number, he couldn’t be sure. Alice replies to this that she still doesn’t know, and the content of this remark is that Alice has neither $\frac 12$ nor $\frac 13$, since in either of these cases, and only in these cases, she would have known who has the larger number. Bob replies that suddenly, he now knows who has the larger number. The only way this could happen is if he had either $\frac 13$ or $\frac 14$, since in either of these cases he would have the larger number, but otherwise he wouldn’t know whether Alice’s number was larger than his or not. But we can’t be sure yet whether Bob has $\frac 13$ or $\frac 14$. When Alice says that now she knows both numbers, however, then it must be because the information that she has allows her to distinguish between the two possibilities for Bob. If she had $\frac 15$ or smaller, she wouldn’t be able to distinguish the two possibilities for Bob. Since we already ruled out $\frac 13$ for her, she must have $\frac 14$. So Alice has $\frac 14$ and Bob has $\frac 13$.
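This elimination reasoning can be mechanized. Here is a small Python sketch (my own) encoding each announcement as a filter on the set of possible pairs of denominators. Since the denominators are unbounded, a player can never learn that their own fraction is the smaller one; so "knowing who is larger" amounts to knowing that one's own fraction is larger (recall $\frac1a>\frac1b$ iff $a<b$), and a finite search bound is then a safe truncation:

```python
N = 20  # finite truncation of the unbounded denominators
# possible worlds: (a, b) with Alice holding 1/a and Bob holding 1/b, a != b
pairs = {(a, b) for a in range(1, N + 1) for b in range(1, N + 1) if a != b}

def alice_knows(pairs):
    """Worlds in which Alice knows her fraction 1/a is the larger one."""
    return {(a, b) for (a, b) in pairs
            if all(a < b2 for (a2, b2) in pairs if a2 == a)}

def bob_knows(pairs):
    """Worlds in which Bob knows his fraction 1/b is the larger one."""
    return {(a, b) for (a, b) in pairs
            if all(b < a2 for (a2, b2) in pairs if b2 == b)}

pairs -= alice_knows(pairs)   # Alice: "I don't know."        (removes a = 1)
pairs -= bob_knows(pairs)     # Bob: "I don't know either."   (removes b = 1, 2)
pairs -= alice_knows(pairs)   # Alice: "I still don't know."  (removes a = 2, 3)
pairs &= bob_knows(pairs)     # Bob: "Suddenly, now I know."  (keeps b = 3, 4)
# Alice: "In that case, I know both numbers" -- her value determines Bob's:
pairs = {(a, b) for (a, b) in pairs
         if len({b2 for (a2, b2) in pairs if a2 == a}) == 1}

print(pairs)   # {(4, 3)}: Alice has 1/4 and Bob has 1/3
```

The successive filters reproduce exactly the eliminations in the prose argument above.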

Many of the commentators came to the same conclusion. Congratulations to all who solved the problem! See also the answers posted on my math.stackexchange question and on Twitter:

Epistemic logic puzzle: Still Don’t Know.