Boolean ultrapowers, the Bukovský-Dehornoy phenomenon, and iterated ultrapowers

[bibtex key=”FuchsHamkins:TheBukovskyDehornoyPhenomenonForBooleanUltrapowers”]

Abstract. We show that while the length $\omega$ iterated ultrapower by a normal ultrafilter is a Boolean ultrapower by the Boolean algebra of Příkrý forcing, it is consistent that no iteration of length greater than $\omega$ (of the same ultrafilter and its images) is a Boolean ultrapower. For longer iterations, where different ultrafilters are used, this is possible, though, and we give Magidor forcing and a generalization of Příkrý forcing as examples. We refer to the discovery that the intersection of the finite iterates of the universe by a normal measure is the same as the generic extension of the direct limit model by the critical sequence as the Bukovský-Dehornoy phenomenon, and we develop a criterion (the existence of a simple skeleton) for when a version of this phenomenon holds in the context of Boolean ultrapowers.

Second-order transfinite recursion is equivalent to Kelley-Morse set theory over GBC

A few years ago, after hearing a talk by Benjamin Rin, I observed that the principle of first-order transfinite recursion for set well-orders is equivalent to the replacement axiom over Zermelo set theory, and thus we may take transfinite recursion as a fundamental set-theoretic principle, one which yields full ZFC when added to Zermelo’s weaker theory (plus foundation).

In later work, Victoria Gitman and I happened to prove that the principle of elementary transfinite recursion ETR, which allows for first-order class recursions along proper class well-orders (not necessarily set-like), is equivalent to the principle of determinacy for clopen class games [1]. Thus, once again, a strong recursion principle exhibited robustness as a fundamental set-theoretic principle.

The theme continued in recent joint work on the class forcing theorem, in which Victoria Gitman, Peter Holy, Philipp Schlicht, Kameryn Williams and I [2] proved that the principle $\text{ETR}_{\text{Ord}}$, which allows for first-order class recursions of length $\text{Ord}$, is equivalent to twelve natural set-theoretic principles, including the existence of forcing relations for class forcing notions, the existence of Boolean completions for class partial orders, the existence of various kinds of truth predicates for infinitary logics, the existence of $\text{Ord}$-iterated truth predicates, and the principle of determinacy for clopen class games of rank at most $\text{Ord}+1$.

A few days ago, a MathOverflow question of Alec Rhea’s — Is there a stronger form of recursion? — led me to notice that one naturally gains additional strength by pushing the recursion principles further into second-order set theory.

So let me introduce the principle of second-order transfinite recursion STR and make the comparatively simple observation that over Gödel-Bernays set theory GBC it is equivalent to Kelley-Morse set theory KM. Thus, we may take this kind of recursion as a fundamental set-theoretic principle.

Definition. In the context of second-order set theory, the principle of second-order transfinite recursion, denoted STR, asserts of any formula $\varphi$ in the second-order language of set theory, that if $\Gamma=\langle I,\leq_\Gamma\rangle$ is any class well-order and $Z$ is any class parameter, then there is a class $S\subset I\times V$ that is a solution of the recursion, in the sense that
$$S_i=\{\ x\ \mid\  \varphi(x,S\upharpoonright i,Z)\ \}$$
for every $i\in I$, where $S_i=\{\ x\ \mid\ (i,x)\in S\ \}$ is the section on coordinate $i$ and where $S\upharpoonright i=\{\ (j,x)\in S\ \mid\ j<_\Gamma i\ \}$ is the part of the solution at stages below $i$ with respect to $\Gamma$.
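As an illustration (this instance is merely first-order, but it fixes the pattern), take $\Gamma=\langle\text{Ord},\leq\rangle$ and let $\varphi(x,S\upharpoonright i,Z)$ assert that $x$ is a subset of the union of the earlier sections. The recursion then reads
$$S_\alpha=\Bigl\{\,x\ \Bigm|\ x\subseteq\bigcup_{\beta<\alpha}S_\beta\,\Bigr\},$$
whose unique solution is essentially the cumulative hierarchy, with $S_\alpha=V_{\alpha+1}$. The principle STR asserts that solutions exist even when $\varphi$ is a genuinely second-order property.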

Theorem. The principle of second-order transfinite recursion STR is equivalent over GBC to the second-order comprehension principle. In other words, GBC+STR is equivalent to KM.

Proof. Kelley-Morse set theory proves that every second-order recursion has a solution in the same way that ZFC proves that every set-length well-ordered recursion has a solution. Namely, we simply consider the classes which are partial solutions to the recursion, in that they obey the recursive requirement, but possibly only on an initial segment of the well-order $\Gamma$. We may easily show by induction that any two such partial solutions agree on their common domain (this uses second-order comprehension in order to find the least point of disagreement, if any), and we can show that any given partial solution, if not already a full solution, can be extended to a partial solution on a strictly longer initial segment. Finally, we show that the union of all the partial solutions is itself a solution of the recursion. This final step uses second-order comprehension in order to define that union class, since being a partial solution is a second-order property.
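The final step can be displayed explicitly: the solution is in effect the union of the partial solutions,
$$S=\bigl\{\,(i,x)\ \bigm|\ \exists P\ \bigl(P\text{ is a partial solution and }(i,x)\in P\bigr)\,\bigr\},$$
and the second-order quantifier $\exists P$ is exactly where second-order comprehension is used. Since any two partial solutions agree on their common domain, this class $S$ satisfies the recursion on all of $\Gamma$.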

Conversely, the principle of second-order transfinite recursion clearly implies the second-order comprehension axiom, by considering recursions of length one. For any second-order assertion $\varphi$ and class parameter $Z$, we may deduce that $\{x\mid \varphi(x,Z)\}$ is a class, and so the second-order class comprehension principle holds. $\Box$

It is natural to consider various fragments of STR, such as $\Sigma^1_n\text{-}\text{TR}_\Gamma$, which is the assertion that every $\Sigma^1_n$-formula $\varphi$ admits a solution for recursions of length $\Gamma$. Such principles are provable in proper fragments of KM, since for a given level of complexity, we only need a corresponding fragment of comprehension to undertake the proof that the recursion has a solution. The full principle STR is $\Sigma^1_\omega\text{-}\text{TR}$, allowing formulas of any complexity and recursions of any class well-ordered length. The theorem shows that STR is equivalent to the instance of it for recursions of length $1$, since once you get the second-order comprehension principle, then you get solutions for recursions of any length. Thus, with second-order transfinite recursion, a little goes a long way. Perhaps it is more natural to think of transfinite recursion in this context not as axiomatizing KM, since it clearly implies second-order comprehension straight away, but rather as an apparent strengthening of KM that is actually provable in KM. This contrasts with the first-order situation of ETR with respect to GBC, where GBC+ETR does make a proper strengthening of GBC.

  1. [bibtex key=”GitmanHamkins2016:OpenDeterminacyForClassGames”]
  2. [bibtex key=”GitmanHamkinsHolySchlichtWilliams:The-exact-strength-of-the-class-forcing-theorem”]
Photo by Petar Milošević (Own work) [CC BY-SA 4.0], via Wikimedia Commons

The exact strength of the class forcing theorem

[bibtex key=”GitmanHamkinsHolySchlichtWilliams2020:The-exact-strength-of-the-class-forcing-theorem”]

Abstract. The class forcing theorem, which asserts that every class forcing notion $\newcommand\P{\mathbb{P}}\P$ admits a forcing relation $\newcommand\forces{\Vdash}\forces_\P$, that is, a relation satisfying the forcing relation recursion — it follows that statements true in the corresponding forcing extensions are forced and forced statements are true — is equivalent over Gödel-Bernays set theory GBC to the principle of elementary transfinite recursion $\newcommand\Ord{\text{Ord}}\newcommand\ETR{\text{ETR}}\ETR_{\Ord}$ for class recursions of length $\Ord$. It is also equivalent to the existence of truth predicates for the infinitary languages $\mathcal{L}_{\Ord,\omega}(\in,A)$, allowing any class parameter $A$; to the existence of truth predicates for the language $\mathcal{L}_{\Ord,\Ord}(\in,A)$; to the existence of $\Ord$-iterated truth predicates for first-order set theory $\mathcal{L}_{\omega,\omega}(\in,A)$; to the assertion that every separative class partial order $\P$ has a set-complete class Boolean completion; to a class-join separation principle; and to the principle of determinacy for clopen class games of rank at most $\Ord+1$. Unlike set forcing, if every class forcing notion $\P$ has a forcing relation merely for atomic formulas, then every such $\P$ has a uniform forcing relation applicable to all formulas. Our results situate the class forcing theorem in the rich hierarchy of theories between GBC and Kelley-Morse set theory KM.

We shall characterize the exact strength of the class forcing theorem, which asserts that every class forcing notion $\P$ has a corresponding forcing relation $\forces_\P$, a relation satisfying the forcing relation recursion. When there is such a forcing relation, then statements true in any corresponding forcing extension are forced and forced statements are true in those extensions.

Unlike the case of set forcing, where one may prove in ZFC that every set forcing notion has corresponding forcing relations, for class forcing it is consistent with Gödel-Bernays set theory GBC that there is a proper class forcing notion lacking a corresponding forcing relation, even merely for the atomic formulas. For certain forcing notions, the existence of an atomic forcing relation implies Con(ZFC) and much more, and so the consistency strength of the class forcing theorem strictly exceeds GBC, if this theory is consistent. Nevertheless, the class forcing theorem is provable in stronger theories, such as Kelley-Morse set theory. What is the exact strength of the class forcing theorem?

Our project here is to identify the strength of the class forcing theorem by situating it in the rich hierarchy of theories between GBC and KM, displayed in part in the figure above, with the class forcing theorem highlighted in blue. It turns out that the class forcing theorem is equivalent over GBC to an attractive collection of several other natural set-theoretic assertions; it is a robust axiomatic principle.

Hierarchy between GBC and KM

The main theorem is naturally part of the emerging subject we call the reverse mathematics of second-order set theory, a higher analogue of the perhaps more familiar reverse mathematics of second-order arithmetic. In this new research area, we are concerned with the hierarchy of second-order set theories between GBC and KM and beyond, analyzing the strength of various assertions in second-order set theory, such as the principle ETR of elementary transfinite recursion, the principle of $\Pi^1_1$-comprehension or the principle of determinacy for clopen class games. We fit these set-theoretic principles into the hierarchy of theories over the base theory GBC. The main theorem of this article does exactly this with the class forcing theorem by finding its exact strength in relation to nearby theories in this hierarchy.

Main Theorem. The following are equivalent over Gödel-Bernays set theory.

  1. The atomic class forcing theorem: every class forcing notion admits forcing relations for atomic formulas $$p\forces\sigma=\tau\qquad\qquad p\forces\sigma\in\tau.$$
  2. The class forcing theorem scheme: for each first-order formula $\varphi$ in the forcing language, with finitely many class names $\dot \Gamma_i$, there is a forcing relation applicable to this formula and its subformulas
    $$p\forces\varphi(\vec \tau,\dot\Gamma_0,\ldots,\dot\Gamma_m).$$
  3. The uniform first-order class forcing theorem: every class forcing notion $\P$ admits a uniform forcing relation $$p\forces\varphi(\vec \tau),$$ applicable to all assertions $\varphi$ in the first-order forcing language with finitely many class names $\mathcal{L}_{\omega,\omega}(\in,V^\P,\dot\Gamma_0,\ldots,\dot\Gamma_m)$.
  4. The uniform infinitary class forcing theorem: every class forcing notion $\P$ admits a uniform forcing relation $$p\forces\varphi(\vec \tau),$$ applicable to all assertions $\varphi$ in the infinitary forcing language with finitely many class names $\mathcal{L}_{\Ord,\Ord}(\in,V^\P,\dot\Gamma_0,\ldots,\dot\Gamma_m)$.
  5. Names for truth predicates: every class forcing notion $\P$ has a class name $\newcommand\T{{\rm T}}\dot\T$ and a forcing relation for which $1\forces\dot\T$ is a truth-predicate for the first-order forcing language with finitely many class names $\mathcal{L}_{\omega,\omega}(\in,V^\P,\dot\Gamma_0,\ldots,\dot\Gamma_m)$.
  6. Every class forcing notion $\P$, that is, every separative class partial order, admits a Boolean completion $\mathbb{B}$, a set-complete class Boolean algebra into which $\P$ densely embeds.
  7. The class-join separation principle plus $\ETR_{\Ord}$-foundation.
  8. For every class $A$, there is a truth predicate for $\mathcal{L}_{\Ord,\omega}(\in,A)$.
  9. For every class $A$, there is a truth predicate for $\mathcal{L}_{\Ord,\Ord}(\in,A)$.
  10. For every class $A$, there is an $\Ord$-iterated truth predicate for $\mathcal{L}_{\omega,\omega}(\in,A)$.
  11. The principle of determinacy for clopen class games of rank at most $\Ord+1$.
  12. The principle $\ETR_{\Ord}$ of elementary transfinite recursion for $\Ord$-length recursions of first-order properties, using any class parameter.

The cycle of implications among statements (1)–(12)

We prove the theorem by establishing the complete cycle of indicated implications. The red arrows indicate the more difficult or substantive implications, while the blue arrows indicate the easier or nearly immediate implications. The green dashed implication from statement (12) to statement (1), while not needed for the completeness of the implication cycle, is nevertheless used in the proof that (12) implies (4). The proof that (12) implies (7) also uses (8), which follows since (12) implies (9), which in turn implies (8).

For more, download the paper from the arxiv: [bibtex key=”GitmanHamkinsHolySchlichtWilliams:The-exact-strength-of-the-class-forcing-theorem”]

See also Victoria’s post, Kameryn’s post.

When does every definable nonempty set have a definable element?

[bibtex key=”DoraisHamkins:When-does-every-definable-nonempty-set-have-a-definable-element”]

Abstract. The assertion that every definable set has a definable element is equivalent over ZF to the principle $V=\newcommand\HOD{\text{HOD}}\HOD$, and indeed, we prove, so is the assertion merely that every $\Pi_2$-definable set has an ordinal-definable element. Meanwhile, every model of ZFC has a forcing extension satisfying $V\neq\HOD$ in which every $\Sigma_2$-definable set has an ordinal-definable element. Similar results hold for $\HOD(\mathbb{R})$ and $\HOD(\text{Ord}^\omega)$ and other natural instances of $\HOD(X)$.

It is not difficult to see that the models of ZF set theory in which every definable nonempty set has a definable element are precisely the models of $V=\HOD$. Namely, if $V=\HOD$, then there is a definable well-ordering of the universe, and so the $\HOD$-least element of any definable nonempty set is definable; and conversely, if $V\neq\HOD$, then the set of minimal-rank non-OD sets is definable, but can have no definable element.

In this brief article, we shall identify the limit of this elementary observation in terms of the complexity of the definitions. Specifically, we shall prove that $V=\HOD$ is equivalent to the assertion that every $\Pi_2$-definable nonempty set contains an ordinal-definable element, but that one may not replace $\Pi_2$-definability here by $\Sigma_2$-definability.

Theorem. The following are equivalent in any model $M$ of ZF:

  1. $M$ is a model of $\text{ZFC}+\text{V}=\text{HOD}$.
  2. $M$ thinks there is a definable well-ordering of the universe.
  3. Every definable nonempty set in $M$ has a definable element.
  4. Every definable nonempty set in $M$ has an ordinal-definable element.
  5. Every ordinal-definable nonempty set in $M$ has an ordinal-definable element.
  6. Every $\Pi_2$-definable nonempty set in $M$ has an ordinal-definable element.

Theorem. Every model of ZFC has a forcing extension satisfying $V\neq\HOD$, in which every $\Sigma_2$-definable set has a definable element.

The proof of this latter theorem is reminiscent of several proofs of the maximality principle (see A simple maximality principle), where one undertakes a forcing iteration attempting at each stage to force and then preserve a given $\Sigma_2$ assertion.

This inquiry grew out of a series of questions and answers posted on MathOverflow and the exchange of the authors there.

A model of the generic Vopěnka principle in which the ordinals are not $\Delta_2$-Mahlo

[bibtex key=”GitmanHamkins2018:A-model-of-the-generic-Vopenka-principle-in-which-the-ordinals-are-not-Mahlo”]

Abstract. The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a $\Delta_2$-definable class containing no regular cardinals. In such a model, there can be no $\Sigma_2$-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler.


The Vopěnka principle is the assertion that for every proper class of first-order structures in a fixed language, one of the structures embeds elementarily into another. This principle can be formalized as a single second-order statement in Gödel-Bernays set theory GBC, and it has a variety of useful equivalent characterizations. For example, the Vopěnka principle holds precisely when for every class $A$, the universe has an $A$-extendible cardinal, and it is also equivalent to the assertion that for every class $A$, there is a stationary proper class of $A$-extendible cardinals (see theorem 6 in my paper The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme). In particular, the Vopěnka principle implies that ORD is Mahlo: every class club contains a regular cardinal and indeed, an extendible cardinal and more.

To define these terms, recall that a cardinal $\kappa$ is extendible, if for every $\lambda>\kappa$, there is an ordinal $\theta$ and an elementary embedding $j:V_\lambda\to V_\theta$ with critical point $\kappa$. It turns out that, in light of the Kunen inconsistency, this weak form of extendibility is equivalent to a stronger form, where one insists also that $\lambda<j(\kappa)$; but there is a subtle issue about this that comes up with the virtual forms of these axioms, where the virtual weak and virtual strong forms are no longer equivalent. Relativizing to a class parameter, a cardinal $\kappa$ is $A$-extendible for a class $A$, if for every $\lambda>\kappa$, there is an elementary embedding
$$j:\langle V_\lambda, \in, A\cap V_\lambda\rangle\to \langle V_\theta,\in,A\cap V_\theta\rangle$$
with critical point $\kappa$, and again one may equivalently insist also that $\lambda<j(\kappa)$. Every such $A$-extendible cardinal is therefore extendible and hence inaccessible, measurable, supercompact and more. These are amongst the largest large cardinals.

In the first-order ZFC context, set theorists commonly consider a first-order version of the Vopěnka principle, which we call the Vopěnka scheme, the scheme making the Vopěnka assertion of each definable class separately, allowing parameters. That is, the Vopěnka scheme asserts, of every formula $\varphi$, that for any parameter $p$, if $\{\,x\mid \varphi(x,p)\,\}$ is a proper class of first-order structures in a common language, then one of those structures elementarily embeds into another.

The Vopěnka scheme is naturally stratified by the assertions $\text{VP}(\Sigma_n)$, for the particular natural numbers $n$ in the meta-theory, where $\text{VP}(\Sigma_n)$ makes the Vopěnka assertion for all $\Sigma_n$-definable classes. Using the definable $\Sigma_n$-truth predicate, each assertion $\text{VP}(\Sigma_n)$ can be expressed as a single first-order statement in the language of set theory.

In my previous paper, The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme, I proved that the Vopěnka principle is not provably equivalent to the Vopěnka scheme, if consistent, although they are equiconsistent over GBC and furthermore, the Vopěnka principle is conservative over the Vopěnka scheme for first-order assertions. That is, over GBC the two versions of the Vopěnka principle have exactly the same consequences in the first-order language of set theory.

In this article, Gitman and I are concerned with the virtual forms of the Vopěnka principles. The main idea of virtualization, due to Schindler, is to weaken elementary-embedding existence assertions to the assertion that such embeddings can be found in a forcing extension of the universe. Gitman and Schindler had emphasized that the remarkable cardinals, for example, instantiate the virtualized form of supercompactness via the Magidor characterization of supercompactness. This virtualization program has now been undertaken with various large cardinals, leading to fruitful new insights.

Carrying out the virtualization idea with the Vopěnka principles, we define the generic Vopěnka principle to be the second-order assertion in GBC that for every proper class of first-order structures in a common language, one of the structures admits, in some forcing extension of the universe, an elementary embedding into another. That is, the structures themselves are in the class in the ground model, but you may have to go to the forcing extension in order to find the elementary embedding.

Similarly, the generic Vopěnka scheme, introduced by Bagaria, Gitman and Schindler, is the assertion (in ZFC or GBC) that for every first-order definable proper class of first-order structures in a common language, one of the structures admits, in some forcing extension, an elementary embedding into another.

On the basis of their work, Bagaria, Gitman and Schindler had asked the following question:

Question. If the generic Vopěnka scheme holds, then must there be a proper class of remarkable cardinals?

There seemed good reason to expect an affirmative answer, even assuming only $\text{gVP}(\Sigma_2)$, based on strong analogies with the non-generic case. Specifically, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_2)$ was equivalent to the existence of a proper class of supercompact cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_2)$ was equiconsistent with a proper class of remarkable cardinals, the virtual form of supercompactness. Similarly, higher up, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_{n+2})$ is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals.

But further, they achieved direct implications, with an interesting bifurcation feature that specifically suggested an affirmative answer to the question above. Namely, what they showed at the $\Sigma_2$-level is that if there is a proper class of remarkable cardinals, then $\text{gVP}(\Sigma_2)$ holds, and conversely if $\text{gVP}(\Sigma_2)$ holds, then there is either a proper class of remarkable cardinals or a proper class of virtually rank-into-rank cardinals. And similarly, higher up, if there is a proper class of virtually $C^{(n)}$-extendible cardinals, then $\text{gVP}(\Sigma_{n+2})$ holds, and conversely, if $\text{gVP}(\Sigma_{n+2})$ holds, then either there is a proper class of virtually $C^{(n)}$-extendible cardinals or there is a proper class of virtually rank-into-rank cardinals. So in each case, the converse direction achieves a disjunction with the target cardinal and the virtually rank-into-rank cardinals. But since the consistency strength of the virtually rank-into-rank cardinals is strictly stronger than the generic Vopěnka principle itself, one can conclude on consistency-strength grounds that it isn’t always relevant, and for this reason, it seemed natural to inquire whether this second possibility in the bifurcation could simply be removed. That is, it seemed natural to expect an affirmative answer to the question, even assuming only $\text{gVP}(\Sigma_2)$, since such an answer would resolve the bifurcation issue and make a tighter analogy with the corresponding results in the non-generic/non-virtual case.

In this article, however, we shall answer the question negatively. The details of our argument seem to suggest that a robust analogy with the non-generic/non-virtual principles is achieved not with the virtual $C^{(n)}$-cardinals, but with a weakening of that property that drops the requirement that $\lambda<j(\kappa)$. Indeed, our results seem to offer an illuminating resolution of the bifurcation aspect of the results we mentioned from Bagaria, Gitman and Schindler, because they provide outright virtual large-cardinal equivalents of the stratified generic Vopěnka principles. Because the resulting virtual large cardinals are not necessarily remarkable, however, our main theorem shows that it is relatively consistent with even the full generic Vopěnka principle that there are no $\Sigma_2$-reflecting cardinals and therefore no remarkable cardinals.

Main Theorem.

  1. It is relatively consistent that GBC and the generic Vopěnka principle hold, yet ORD is not Mahlo.
  2. It is relatively consistent that ZFC and the generic Vopěnka scheme hold, yet ORD is not definably Mahlo, and not even $\Delta_2$-Mahlo. In such a model, there can be no $\Sigma_2$-reflecting cardinals and therefore also no remarkable cardinals.

For more, go to the article:

[bibtex key=”GitmanHamkins2018:A-model-of-the-generic-Vopenka-principle-in-which-the-ordinals-are-not-Mahlo”]

The universal definition — it can define any mathematical object you like, in the right set-theoretic universe

In set theory, we have the phenomenon of the universal definition. This is a property $\phi(x)$, first-order expressible in the language of set theory, that necessarily holds of exactly one set, but which can in principle define any particular desired set that you like, if one should simply interpret the definition in the right set-theoretic universe. So $\phi(x)$ could be defining the set of real numbers $x=\mathbb{R}$ or the integers $x=\mathbb{Z}$ or the number $x=e^\pi$ or a certain group or a certain topological space or whatever set you would want it to be. For any mathematical object $a$, there is a set-theoretic universe in which $a$ is the unique object $x$ for which $\phi(x)$.

The universal definition can be viewed as a set-theoretic analogue of the universal algorithm, a topic on which I have written several recent posts.

Let’s warm up with the following easy instance.

Theorem. Any particular real number $r$ can become definable in a forcing extension of the universe.

Proof. By Easton’s theorem, we can control the generalized continuum hypothesis precisely on the regular cardinals, and if we start (by forcing if necessary) in a model of GCH, then there is a forcing extension where $2^{\aleph_n}=\aleph_{n+1}$ just in case the $n^{th}$ binary digit of $r$ is $1$. In the resulting forcing extension $V[G]$, therefore, the real $r$ is definable as: the real whose binary digits conform with the GCH pattern on the cardinals $\aleph_n$. QED
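The digit-coding at the heart of this argument is elementary, and a toy sketch may help illustrate it. Here `gch_pattern` merely simulates which statements $2^{\aleph_n}=\aleph_{n+1}$ are arranged to hold in the hypothetical forcing extension; no forcing is involved, of course, and `Fraction` stands in for the real $r$ (exact for rational $r$):

```python
from fractions import Fraction

def binary_digits(r, n):
    """First n binary digits of a number r in [0, 1)."""
    digits = []
    for _ in range(n):
        r *= 2
        d = int(r)          # next binary digit of r
        digits.append(d)
        r -= d
    return digits

def gch_pattern(r, n):
    """Toy stand-in for the Easton forcing: entry k is True exactly
    when 2^{aleph_k} = aleph_{k+1} is arranged to hold, coding digit k of r."""
    return [d == 1 for d in binary_digits(r, n)]

def decode_real(pattern):
    """Recover the coded digits from the GCH pattern, as an exact rational."""
    return sum((Fraction(1, 2 ** (k + 1)) for k, b in enumerate(pattern) if b),
               Fraction(0))
```

For example, $r=5/8=0.101_2$ yields the pattern True, False, True, and `decode_real` recovers $5/8$ from it.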

Since this definition can be settled in a rank-initial segment of the universe, namely, $V_{\omega+\omega}$, the complexity of the definition is $\Delta_2$. See my post on Local properties in set theory to see how I think about locally verifiable and locally decidable properties in set theory.

If we push the argument just a little, we can go beyond the reals.

Theorem. There is a formula $\psi(x)$, of complexity $\Sigma_2$, such that for any particular object $a$, there is a forcing extension of the universe in which $\psi$ defines $a$.

Proof. Fix any set $a$. By the axiom of choice, we may code $a$ with a set of ordinals $A\subset\kappa$ for some cardinal $\kappa$. (One well-orders the transitive closure of $\{a\}$ and thereby finds a bijection $\langle\mathop{tc}(\{a\}),\in\rangle\cong\langle\kappa,E\rangle$ for some $E\subset\kappa\times\kappa$, and then codes $E$ to a set $A$ by an ordinal pairing function. The set $A$ tells you $E$, which tells you $\mathop{tc}(\{a\})$ by the Mostowski collapse, and from this you find $a$.) By Easton’s theorem, there is a forcing extension $V[G]$ in which the GCH holds at $\aleph_{\lambda+1}$ for every limit ordinal $\lambda<\kappa$, but fails at $\aleph_{\kappa+1}$, and such that $\alpha\in A$ just in case $2^{\aleph_{\alpha+2}}=\aleph_{\alpha+3}$ for $\alpha<\kappa$. That is, we manipulate the GCH pattern to exactly code both $\kappa$ and the elements of $A\subset\kappa$. Let $\psi(x)$ assert that $x$ is the set that is decoded by this process: look for the first limit stage $\lambda$ where the GCH fails at $\aleph_{\lambda+1}$, then extract the set $A$ of ordinals coded below it, and then check whether $x$ is the set coded by $A$. The assertion $\psi(x)$ does not depend on $a$, and since it can be verified in any sufficiently large $V_\theta$, the assertion $\psi(x)$ has complexity $\Sigma_2$. QED
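The coding step in this proof can be carried out concretely in miniature for hereditarily finite sets: natural numbers play the role of ordinals, the Cantor pairing function plays the role of the ordinal pairing function, and decoding is the Mostowski collapse. The sketch below illustrates only the coding, of course, not the forcing:

```python
def pair(i, j):
    """Cantor pairing: injects pairs of naturals into the naturals."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of the Cantor pairing."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= n:
        w += 1
    j = n - w * (w + 1) // 2
    return w - j, j

def transitive_closure(a):
    """All sets hereditarily involved in {a} (a is a nested frozenset)."""
    seen, stack = set(), [a]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(x)
    return seen

def code(a):
    """Code the hereditarily finite set a as a set of naturals,
    via the membership relation E on the transitive closure of {a}."""
    elems = list(transitive_closure(a))
    idx = {x: i for i, x in enumerate(elems)}
    E = {(idx[x], idx[y]) for y in elems for x in y}
    return {pair(i, j) for i, j in E}

def decode(A):
    """Recover the set from its code by the Mostowski collapse."""
    if not A:
        return frozenset()            # only the empty set has empty code
    E = {unpair(n) for n in A}
    nodes = {i for p in E for i in p}
    members = {j: [i for i, jj in E if jj == j] for j in nodes}
    def collapse(j):
        return frozenset(collapse(i) for i in members[j])
    # a itself is the unique node that is a member of nothing, by foundation
    top = next(j for j in nodes if all(i != j for i, _ in E))
    return collapse(top)
```

For instance, with $2=\{\emptyset,\{\emptyset\}\}$ represented as nested frozensets, `decode(code(two))` returns the same set again.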

Let’s try to make a better universal definition. As I mentioned at the outset, I have been motivated to find a set-theoretic analogue of the universal algorithm, and in that computable context, we had a universal algorithm that could not only produce any desired finite set, when run in the right universe, but which furthermore had a robust interaction between models of arithmetic and their top-extensions: any set enumerated by the algorithm could be extended to any desired larger finite set, as enumerated by the same algorithm in a taller universe. Here, I’d like to achieve the same robustness of interaction with the universal definition, as one moves from one model of set theory to a taller model. We say that one model of set theory $N$ is a top-extension of another $M$, if all the new sets of $N$ have rank totally above the ranks occurring in $M$. Thus, $M$ is a rank-initial segment of $N$. If there is a least new ordinal $\beta$ in $N\setminus M$, then this is equivalent to saying that $M=V_\beta^N$.

Theorem. There is a formula $\phi(x)$, such that

  1. In any model of ZFC, there is a unique set $a$ satisfying $\phi(a)$.
  2. For any countable model $M\models\text{ZFC}$ and any $a\in M$, there is a top-extension $N$ of $M$ such that $N\models \phi(a)$.

Thus, $\phi(x)$ is the universal definition: it always defines some set, and that set can be any desired set, even when moving from a model $M$ to a top-extension $N$.

Proof. The previous manner of coding will not achieve property 2, since the GCH pattern coding started immediately, and so it would be preserved to any top extension. What we need to do is to place the coding much higher in the universe, so that in the top extension $N$, it will occur in the part of $N$ that is totally above $M$.

But consider the following process. In any model of set theory, if there is no cardinal $\delta$ and ordinal $\gamma<\delta^+$ such that the GCH holds at all cardinals above $\aleph_{\delta+\gamma}$, then let $\phi(x)$ assert that $x$ is the empty set. Otherwise, let $\delta$ be the smallest such cardinal, and let $\gamma$ be the smallest ordinal working with this $\delta$. So both $\delta$ and $\gamma$ are definable. Now, let $A\subset\gamma$ be the set of ordinals $\alpha$ for which the GCH holds at $\aleph_{\delta+\alpha+1}$, and let $\phi(x)$ assert that $x$ is the set coded by the set $A$.
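In symbols, one way to render this definition (with the set-coding as in the earlier theorems) is
$$\phi(x)\quad\iff\quad\begin{cases} x=\emptyset, & \text{if no such }\delta,\gamma\text{ exist},\\ x\text{ is the set coded by }A, & \text{otherwise},\end{cases}$$
where $\delta$ is the least cardinal for which there is $\gamma<\delta^+$ with the GCH holding at every cardinal above $\aleph_{\delta+\gamma}$, the ordinal $\gamma$ is least for this $\delta$, and $A=\{\,\alpha<\gamma\mid 2^{\aleph_{\delta+\alpha+1}}=\aleph_{\delta+\alpha+2}\,\}$.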

It is clear that $\phi(x)$ defines a unique set, in any model of ZFC, and so (1) holds. For (2), suppose that $M$ is a countable model of ZFC and $a\in M$. It is a fact that every countable model of ZFC has a top-extension, by the definable ultrapower method. Let $N_0$ be a top extension of $M$. Let $N=N_0[G]$ be a forcing extension of $N_0$ in which the set $a$ is coded into the GCH pattern very high up, at cardinals totally above $M$, and such that the GCH holds above this coding, in such a way that the process described in the previous paragraph would define exactly the set $a$. So $\phi(a)$ holds in $N$, which is a top-extension of $M$ as no new sets of small rank are added by the forcing. So statement (2) also holds. QED

The complexity of the definition is $\Pi_3$, mainly because in order to know where to look for the coding, one needs to know the ordinals $\delta$ and $\gamma$, and so one needs to know that the GCH always holds above that level. This is a $\Pi_3$ property, since it cannot be verified locally inside any $V_\theta$.

A stronger analogue with the universal algorithm — and this is a question that motivated my thinking about this topic — would be something like the following:

Question. Is there a $\Sigma_2$ formula $\varphi(x)$, that is, a locally verifiable property, with the following properties?

  1. In any model of ZFC, the class $\{x\mid\varphi(x)\}$ is a set.
  2. It is consistent with ZFC that $\{x\mid\varphi(x)\}$ is empty.
  3. For any countable model $M\models\text{ZFC}$ in which $\{x\mid\varphi(x)\}=a$ and any set $b\in M$ with $a\subset b$, there is a top-extension $N$ of $M$ in which $\{x\mid\varphi(x)\}=b$.

An affirmative answer would be a very strong analogue with the universal algorithm and Woodin’s theorem about which I wrote previously. The idea is that the $\Sigma_2$ properties $\varphi(x)$ in set theory are analogous to the computably enumerable properties in computability theory. Namely, to verify that an object has a certain computably enumerable property, we run a particular computable process and then sit back, waiting for the process to halt, until a stage of computation arrives at which the property is verified. Similarly, in set theory, to verify that a set has a particular $\Sigma_2$ property, we sit back watching the construction of the cumulative set-theoretic universe, until a stage $V_\beta$ arrives that provides verification of the property. This is why in statement (3) we insist that $a\subset b$, since the $\Sigma_2$ properties are always upward absolute to top-extensions; once an object is placed into $\{x\mid\varphi(x)\}$, then it will never be removed as one makes the universe taller.
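
The "sit back and wait for verification" dynamic is easy to illustrate in miniature. Here is a Python toy of my own (an analogy, not part of the post): a property is checked in stages, and once verified at some stage it remains verified at every later stage, just as a $\Sigma_2$ fact, once witnessed in some $V_\beta$, persists to top-extensions.

```python
def verified_by_stage(n, stage):
    """Check, within `stage` many steps, whether the Collatz orbit of n
    has reached 1. A True answer is final: running more stages can
    never revoke it. A False answer means only 'not verified yet'."""
    for _ in range(stage):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return n == 1

# Verification is monotone in the stage, like Sigma_2 truth under top-extensions:
stages = [verified_by_stage(6, k) for k in range(12)]
print(stages)  # once True appears, it stays True
```

The orbit of $6$ reaches $1$ after eight steps, so the early stages report `False` ("not yet") and all later stages report `True`, never reverting.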

So the hope was that we would be able to find such a universal $\Sigma_2$ definition, which would serve as a set-theoretic analogue of the universal algorithm used in Woodin’s theorem.

If one drops the first requirement, and allows $\{x\mid \varphi(x)\}$ to sometimes be a proper class, then one can achieve a positive answer as follows.

Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties.

  1. If the GCH holds, then $\{x\mid\varphi(x)\}$ is empty.
  2. For any countable model $M\models\text{ZFC}$ where $a=\{x\mid \varphi(x)\}$ and any $b\in M$ with $a\subset b$, there is a top extension $N$ of $M$ in which $N\models\{x\mid\varphi(x)\}=b$.

Proof. Let $\varphi(x)$ assert that the set $x$ is coded into the GCH pattern. We may assume that the coding mechanism of a set is marked off by certain kinds of failures of the GCH at odd-indexed alephs, with the pattern at intervening even-indexed regular cardinals forming the coding pattern.  This is $\Sigma_2$, since any large enough $V_\theta$ will reveal whether a given set $x$ is coded in this way. And because of the manner of coding, if the GCH holds, then no set is coded. Also, if the GCH holds eventually, then only a set-sized collection is coded. Finally, any countable model $M$ where only a set is coded can be top-extended to another model $N$ in which any desired superset of that set is coded. QED

Update.  Originally, I had proposed an argument for a negative answer to the question, and I was actually a bit disappointed by that, since I had hoped for a positive answer. However, it now seems to me that the argument I had written is wrong, and I am grateful to Ali Enayat for his remarks on this in the comments. I have now deleted the incorrect argument.

Meanwhile, here is a positive answer to the question in the case of models of $V\neq\newcommand\HOD{\text{HOD}}\HOD$.

Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties:

  1. In any model of $\newcommand\ZFC{\text{ZFC}}\ZFC+V\neq\HOD$, the class $\{x\mid\varphi(x)\}$ is a set.
  2. It is relatively consistent with $\ZFC$ that $\{x\mid\varphi(x)\}$ is empty; indeed, in any model of $\ZFC+\newcommand\GCH{\text{GCH}}\GCH$, the class $\{x\mid\varphi(x)\}$ is empty.
  3. If $M\models\ZFC$ thinks that $a=\{x\mid\varphi(x)\}$ is a set and $b\in M$ is a set with $a\subset b$, then there is a top-extension $N$ of $M$ in which $\{x\mid \varphi(x)\}=b$.

Proof. Let $\varphi(x)$ hold if there is some ordinal $\alpha$ such that every element of $V_\alpha$ is coded into the GCH pattern below some cardinal $\delta_\alpha$, with $\delta_\alpha$ as small as possible with that property, and $x$ is the next set coded into the GCH pattern above $\delta_\alpha$. This is a $\Sigma_2$ property, since it can be verified in any sufficiently large $V_\theta$.

In any model of $\ZFC+V\neq\HOD$, there must be some sets that are not coded into the $\GCH$ pattern, for if every set were coded that way, then there would be a definable well-ordering of the universe and we would have $V=\HOD$. So in any model of $V\neq\HOD$, there is a bound on the ordinals $\alpha$ for which $\delta_\alpha$ exists, and therefore $\{x\mid\varphi(x)\}$ is a set. So statement (1) holds.

Statement (2) holds, because we may arrange it so that the GCH itself implies that no set is coded at all, and so $\varphi(x)$ would always fail.

For statement (3), suppose that $M$ is countable and $M\models\ZFC+\{x\mid\varphi(x)\}=a\subseteq b$. In $M$, there must be some minimal rank $\alpha$ for which there is a set of rank $\alpha$ that is not coded into the GCH pattern. Let $N$ be an elementary top-extension of $M$, so that $N$ agrees that $\alpha$ is that minimal rank. Now, by forcing over $N$, we can arrange to code all the sets of rank $\alpha$ into the GCH pattern above the height of the original model $M$, and we can furthermore arrange to code any given element of $b$ just above that coding. Iterating this, we can arrange the coding above the height of $M$ so that exactly the elements of $b$ satisfy $\varphi(x)$ and no more. In this way, we ensure that $N\models\{x\mid\varphi(x)\}=b$, as desired. QED

I find the situation unusual, in that often results from the models-of-arithmetic context generalize to set theory for models of $V=\HOD$, because the global well-order means that models of $V=\HOD$ have definable Skolem functions, just as every model of arithmetic does, a fact which sometimes figures implicitly in constructions. But here, we have the result of Woodin’s theorem generalizing from models of arithmetic to models of $V\neq\HOD$.  Perhaps this suggests that we should expect a fully positive solution for models of set theory.

Further update. Woodin and I have now established the fully general result of the universal finite set, which subsumes much of the preliminary early analysis that I had earlier made in this post. Please see my post, The universal finite set.

The universal algorithm: a new simple proof of Woodin’s theorem

This is the third in a series of posts I’ve made recently concerning what I call the universal algorithm, which is a program that can in principle compute any function, if only you should run it in the right universe. Earlier, I had presented a few elementary proofs of this surprising theorem: see Every function can be computable! and A program that accepts exactly any desired finite set, in the right universe.

$\newcommand\PA{\text{PA}}$Those arguments established the universal algorithm, but they fell short of proving Woodin’s interesting strengthening of the theorem, which explains how the universal algorithm can be extended from any arithmetic universe to a larger one, in such a way as to extend the given enumerated sequence in any desired manner. Woodin emphasized how his theorem raises various philosophical issues about the absoluteness, or rather the non-absoluteness, of finiteness, which I find extremely interesting.

Woodin’s proof, however, is a little more involved than the simple arguments I provided for the universal algorithm alone. Please see the paper Blanck, R., and Enayat, A. Marginalia on a theorem of Woodin, The Journal of Symbolic Logic, 82(1), 359-374, 2017. doi:10.1017/jsl.2016.8 for a further discussion of Woodin’s argument and related results.

What I’ve recently discovered, however, is that in fact one can prove Woodin’s stronger version of the theorem using only the method of the elementary argument. This variation also allows one to drop the countability requirement on the models, as was done by Blanck and Enayat. My thinking on this argument was greatly influenced by a comment of Vadim Kosoy on my original post.

It will be convenient to adopt an enumeration model of Turing computability, by which we view a Turing machine program as providing a means to computably enumerate a list of numbers. We start the program running, and it generates a list of numbers, possibly finite, possibly infinite, possibly empty, possibly with repetition. This way of using Turing machines is fully Turing equivalent to the usual way, if one simply imagines enumerating input/output pairs so as to code any given computable partial function.
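
To make the enumeration model concrete, here is a toy sketch in Python (my illustration, not part of the original argument): a machine is a generator that emits a list of numbers, and an ordinary partial function is recovered by enumerating input/output pairs.

```python
def double_enumerator():
    """A machine in the enumeration model: once started, it emits a
    list of numbers, here coding the input/output pairs (n, 2n) of
    the doubling function."""
    n = 0
    while True:
        yield (n, 2 * n)
        n += 1

def evaluate(machine, x, steps=1000):
    """Recover the coded partial function: watch the enumeration for a
    bounded number of steps, looking for a pair (x, y). Return y if one
    appears, and None if not (the computation 'hasn't halted yet')."""
    for _, (a, b) in zip(range(steps), machine()):
        if a == x:
            return b
    return None

print(evaluate(double_enumerator, 21))  # prints 42
```

Machines that enumerate a finite list simply stop yielding, and repetition or an empty enumeration is likewise allowed, matching the description above.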

Theorem. (Woodin) There is a Turing machine program $e$ with the following properties.

  1. $\PA$ proves that $e$ enumerates a finite sequence of numbers.
  2. For any finite sequence $s$, there is a model $M$ of $\PA$ in which program $e$ enumerates exactly $s$.
  3. For any model $M$ in which $e$ enumerates a (possibly nonstandard) sequence $s$ and any $t\in M$ extending $s$, there is an end-extension $N$ of $M$ in which $e$ enumerates exactly $t$.

It is statement (3) that makes this theorem stronger than merely the universal algorithm that I mentioned in my earlier posts, and which I find particularly invites philosophical speculation on the provisional nature of finiteness. After all, if in one universe the program $e$ enumerates a finite sequence $s$, then for any $t$ extending $s$ — we might imagine having painted some new pattern $t$ on top of $s$ — there is a taller universe in which $e$ enumerates exactly $t$. So we need only wait long enough (into the next universe), and then our program $e$ will enumerate exactly the sequence $t$ we had desired.

Proof. This is the new elementary proof.  Let’s begin by recalling the earlier proof of the universal algorithm, for statements (1) and (2) only. Namely, let $e$ be the program that undertakes a systematic exhaustive search through all proofs from $\PA$ for a proof of a statement of the form, “program $e$ does not enumerate exactly the sequence $s$,” where $s$ is an explicitly listed finite sequence of numbers. Upon finding such a proof (the first such proof found), it proceeds to enumerate exactly the numbers appearing in $s$.  Thus, at bottom, the program $e$ is a petulant child: it searches for a proof that it shouldn’t behave in a certain way, and then proceeds at once to behave in exactly the forbidden manner.

(The reader may notice an apparent circularity in the definition of program $e$, since we referred to $e$ when defining $e$. But this is no problem at all, and it is a standard technique in computability theory to use the Kleene recursion theorem to show that this kind of definition is completely fine. Namely, we really define a program $f(e)$ that performs that task, asking about $e$, and then by the recursion theorem, there is a program $e$ such that $e$ and $f(e)$ compute the same function, provably so. And so for this fixed-point program $e$, it is searching for proofs about itself.)
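
The fixed-point trick can be illustrated with the standard quine construction. The following Python sketch (my own illustration, using `exec` and source strings in place of genuine Turing-machine indices) builds a program that computes with its own source code:

```python
def make_self_referential(task):
    """Kleene-style fixed point, sketched via the standard quine trick:
    build a program (here, a string of Python source) that reconstructs
    its own source code and passes it to `task`."""
    template = (
        "template = {t!r}\n"
        "source = template.format(t=template)\n"
        "result = task(source)\n"
    )
    source = template.format(t=template)
    # Executing `source` binds result = task(source), where the inner
    # `source` is exactly the code being executed: self-reference.
    env = {"task": task}
    exec(source, env)
    return env["result"]

# The program can inspect its own text, e.g. compute its own length:
print(make_self_referential(len))
```

Here the "proof search about itself" of program $e$ is replaced by the hypothetical `task`, but the mechanism is the same: the code being run and the code being examined coincide.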

It is clear that the program $e$ will enumerate a finite list of numbers only, since either it never finds the sought-after proof, in which case it enumerates nothing, and the empty sequence is finite, or else it does find a proof, in which case it enumerates exactly the finitely many numbers explicitly appearing in the statement that was proved. So $\PA$ proves that in any case $e$ enumerates a finite list. Further, if $\PA$ is consistent, then you will not be able to refute any particular finite sequence being enumerated by $e$, because if you could, then (for the smallest such instance) the program $e$ would in fact enumerate exactly those numbers, and this would be provable, contradicting $\text{Con}(\PA)$. Precisely because you cannot refute that statement, it follows that the theory $\PA$ plus the assertion that $e$ enumerates exactly $s$ is consistent, for any particular $s$. So there is a model $M$ of $\PA$ in which program $e$ enumerates exactly $s$. This establishes statements (1) and (2) for this program.

Let me now modify the program in order to achieve the key third property. Note that the program described above definitely does not have property (3), since once a nonempty sequence $s$ is enumerated, then the program is essentially finished, and so running it in a taller universe $N$ will not affect the sequence it enumerates. To achieve (3), therefore, we modify the program by allowing it to add more to the sequence.

Specifically, for the new modified version of the program $e$, we start as before by searching for a proof in $\PA$ that the list enumerated by $e$ is not exactly some explicitly listed finite sequence $s$. When this proof is found, then $e$ immediately enumerates the numbers appearing in $s$. Next, it inspects the proof that it had found. Since the proof used only finitely many $\PA$ axioms, it is therefore a proof from a certain fragment $\PA_n$, the $\Sigma_n$ fragment of $\PA$. Now, the algorithm $e$ continues by searching for a proof in a strictly smaller fragment that program $e$ does not enumerate exactly some explicitly listed sequence $t$ properly extending the sequence of numbers already enumerated. When such a proof is found, it then immediately enumerates (the rest of) those numbers. And now simply iterate this, looking for new proofs in still-smaller fragments of $\PA$ that a still-longer extension is not the sequence enumerated by $e$.

Succinctly: the program $e$ searches for a proof, in a strictly smaller fragment of $\PA$ each time, that $e$ does not enumerate exactly a certain explicitly listed sequence $s$ extending whatever has been already enumerated so far, and when found, it enumerates those new elements, and repeats.
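
In outline, the modified program can be sketched in Python as follows. The proof search is a hypothetical stub (no real PA proof-enumerator is included), and the stub never succeeds, which is exactly what happens in any standard run when the theory is consistent; the interesting behavior occurs only in nonstandard models.

```python
def refuting_proofs():
    """Hypothetical oracle standing in for exhaustive PA proof search:
    yields pairs (s, n) such that the fragment PA_n proves 'program e
    does not enumerate exactly the sequence s'. Stubbed to yield
    nothing, as in any standard run of a consistent theory."""
    return iter(())

def universal_enumerator():
    """Sketch of the modified program e: repeatedly search for a proof,
    in a strictly smaller fragment of PA each time, that e does not
    enumerate exactly some proper extension of what it has enumerated
    so far; upon finding one, enumerate the new elements anyway."""
    enumerated = []
    fragment = None  # None: no bound yet on which fragment PA_n may be used
    while True:
        found = None
        for s, n in refuting_proofs():
            extends = (len(s) > len(enumerated)
                       and list(s[:len(enumerated)]) == enumerated)
            if extends and (fragment is None or n < fragment):
                found = (s, n)
                break
        if found is None:
            return  # no further proof is ever found; the enumeration is finite
        s, n = found
        for x in s[len(enumerated):]:
            yield x  # petulantly enumerate the forbidden extension
        enumerated = list(s)
        fragment = n  # the next proof must come from a strictly smaller fragment
```

With the stub, the enumeration is empty; the point of the sketch is only the control flow of the shrinking-fragment search.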

We can still prove in $\PA$ that $e$ enumerates a finite sequence, since the fragment of $\PA$ that is used each time is going down, and $\PA$ proves that this can happen only finitely often. So statement (1) holds.

Again, you cannot refute that any particular finite sequence $s$ is the sequence enumerated by $e$, since if you could do this, then in the standard model, the program would eventually find such a proof, and then perhaps another and another, until ultimately, it would find some last proof that $e$ does not enumerate exactly some finite sequence $t$, at which time the program will have enumerated exactly $t$, never afterward adding to it. So that proof would have proved a false statement. This is a contradiction, since that proof is standard.

So again, precisely because you cannot refute these statements, it follows that it is consistent with $\PA$ that program $e$ enumerates exactly $s$, for any particular finite $s$. So statement (2) holds.

Finally, for statement (3), suppose that $M$ is a model of $\PA$ in which $e$ enumerates exactly some finite sequence $s$. If $s$ is the empty sequence, then $M$ thinks that there is no proof in $\PA$ that $e$ does not enumerate exactly $t$, for any particular $t$. And so it thinks the theory $\PA+$ “$e$ enumerates exactly $t$” is consistent. So in $M$ we may build the Henkin model $N$ of this theory, which is an end-extension of $M$ in which $e$ enumerates exactly $t$, as desired.

If alternatively $s$ was nonempty, then it was enumerated by $e$ in $M$ because in $M$ there was ultimately a proof in some fragment $\PA_n$ that it should not do so, but it never found a corresponding proof about an extension of $s$ in any strictly smaller fragment of $\PA$.  So $M$ has a proof from $\PA_n$ that $e$ does not enumerate exactly $s$, even though it did.

Notice that $n$ must be nonstandard, because $M$ has a definable $\Sigma_k$-truth predicate for every standard $k$, and using this predicate, $M$ can see that every $\PA_k$-provable statement must be true.

Precisely because the model $M$ lacked the proofs from the strictly smaller fragment $\PA_{n-1}$, it follows that for any particular finite $t$ extending $s$ in $M$, the model thinks that the theory $T=\PA_{n-1}+$ “$e$ enumerates exactly $t$” is consistent. Since $n$ is nonstandard, this theory includes all the actual $\PA$ axioms. In $M$ we can build the Henkin model $N$ of this theory, which will be an end-extension of $M$ in which $\PA$ holds and program $e$ enumerates exactly $t$, as desired for statement (3). QED

Corollary. Let $e$ be the universal algorithm program $e$ of the theorem. Then

  1. For any infinite sequence $S:\mathbb{N}\to\mathbb{N}$, there is a model $M$ of $\PA$ in which program $e$ enumerates a (nonstandard finite) sequence starting with $S$.
  2. If $M$ is any model of $\PA$ in which program $e$ enumerates some (possibly nonstandard) finite sequence $s$, and $S$ is any $M$-definable infinite sequence extending $s$, then there is an end-extension of $M$ in which $e$ enumerates a sequence starting with $S$.

Proof. (1) Fix $S:\mathbb{N}\to\mathbb{N}$. By a simple compactness argument, there is a model $M$ of true arithmetic in which the sequence $S$ is the standard part of some coded nonstandard finite sequence $t$. By the main theorem, there is some end-extension $M^+$ of $M$ in which $e$ enumerates $t$, which starts with $S$, as desired.

(2) If $e$ enumerates $s$ in $M$, a model of $\PA$, and $S$ is an $M$-infinite sequence definable in $M$, then by a compactness argument inside $M$, we can build a model $M’$ inside $M$ in which $S$ is coded by an element, and then apply the main theorem to find a further end-extension $M^+$ in which $e$ enumerates that element, and hence which enumerates an extension of $S$. QED

Kaethe Lynn Bruesselbach Minden, PhD 2017, CUNY Graduate Center

Kaethe Lynn Bruesselbach Minden successfully defended her dissertation on April 7, 2017 at the CUNY Graduate Center, under the supervision of Professor Gunter Fuchs. I was a member of the dissertation committee, along with Arthur Apter.

Her defense was impressive!  She was a master of the entire research area, ready at hand with the technical details to support her account of any topic that arose.

Kaethe Minden + sloth

math blog | nylogic profile | ar$\chi$iv | math genealogy

Kaethe Minden, “On Subcomplete Forcing,” Ph.D. dissertation for The Graduate Center of the City University of New York, May, 2017. (arxiv/1705.00386)

Abstract. I survey an array of topics in set theory and their interaction with, or in the context of, a novel class of forcing notions: subcomplete forcing. Subcomplete forcing notions satisfy some desirable qualities; for example they don’t add any new reals to the model, and they admit an iteration theorem. While it is straightforward to show that any forcing notion which is countably closed is also subcomplete, it turns out that other well-known, more subtle forcing notions like Prikry forcing and Namba forcing are also subcomplete. Subcompleteness was originally defined by Ronald Björn Jensen around 2009. Jensen’s writings make up the vast majority of the literature on the subject. Indeed, the definition in and of itself is daunting. I have attempted to make the subject more approachable to set theorists, while showing various properties of subcomplete forcing which one might desire of a forcing class.

It is well-known that countably closed forcings cannot add branches through $\omega_1$-trees. I look at the interaction between subcomplete forcing and $\omega_1$-trees. It turns out that subcomplete forcing also does not add cofinal branches to $\omega_1$-trees. I show that a myriad of other properties of trees of height $\omega_1$ as explored in [FH09] are preserved by subcomplete forcing; for example, I show that the unique branch property of Suslin trees is preserved by subcomplete forcing.

Another topic I explored is the Maximality Principle ($\text{MP}$). Following in the footsteps of Hamkins [Ham03], Leibman [Lei], and Fuchs [Fuc08], [Fuc09], I examine the subcomplete maximality principle. In order to elucidate the ways in which subcomplete forcing generalizes the notion of countably closed forcing, I compare the countably closed maximality principle ($\text{MP}_{<\omega_1\text{-closed}}$) to the subcomplete maximality principle ($\text{MP}_{sc}$). Again, since countably closed forcing is subcomplete, this is a natural question to ask. I was able to show that many of the results about $\text{MP}_{<\omega_1\text{-closed}}$ also hold for $\text{MP}_{sc}$; for example, the appropriate boldface notion of $\text{MP}_{sc}$ is equiconsistent with a fully reflecting cardinal. However, it is not the case that there are direct implications between the subcomplete and countably closed maximality principles.

Another forcing principle explored in my thesis is the Resurrection Axiom ($\text{RA}$). Hamkins and Johnstone [HJ14a] defined the resurrection axiom only relative to $H_{\mathfrak{c}}$, focusing mainly on the resurrection axiom for proper forcing. They also showed the equiconsistency of various resurrection axioms with an uplifting cardinal. I argue that the subcomplete resurrection axiom should naturally be considered relative to $H_{\omega_2}$, and I show that the subcomplete resurrection axiom is equiconsistent with an uplifting cardinal.

A reasonable question to ask about any class of forcings is whether or not the resurrection axiom and the maximality principle can consistently both hold for that class. I originally had this question about the full principles, not restricted to any class, but in my thesis it was appropriate to look at the question for subcomplete forcing. I answer the question positively for subcomplete forcing using a strongly uplifting fully reflecting cardinal, which is a combination of the large cardinals needed to force the principles separately. I show that the boldface versions of $\text{MP}_{sc}+\text{RA}_{sc}$ both holding is equiconsistent with the existence of a strongly uplifting fully reflecting cardinal.

While Jensen [Jen14] shows that Prikry forcing is subcomplete, I long suspected that many variants of Prikry forcing which have a kind of genericity criterion are also subcomplete. After much work I managed to show that a variant of Prikry forcing known as Diagonal Prikry Forcing is subcomplete, giving another example of subcomplete forcing to add to the list.

Kaethe Minden defense

Kaethe has taken up a faculty position at Marlboro College in Vermont.

Miha E. Habič, PhD 2017, CUNY Graduate Center

Miha E. Habič successfully defended his dissertation under my supervision at the CUNY Graduate Center on April 7th, 2017, earning his Ph.D. degree in May 2017.

It was truly a pleasure to work with Miha, who is an outstanding young mathematician with enormous promise. I shall look forward to seeing his continuing work.

Miha Habic

Cantor’s paradise | MathOverflow | MathSciNet | NY Logic profile | ar$\chi$iv

Miha E. Habič, “Joint Laver diamonds and grounded forcing axioms,”  Ph.D. dissertation for The Graduate Center of the City University of New York, May, 2017 (arxiv:1705.04422).

Abstract. In chapter 1 a notion of independence for diamonds and Laver diamonds is investigated. A sequence of Laver diamonds for $\kappa$ is joint if for any sequence of targets there is a single elementary embedding $j$ with critical point $\kappa$ such that each Laver diamond guesses its respective target via $j$. In the case of measurable cardinals (with similar results holding for (partially) supercompact cardinals) I show that a single Laver diamond for $\kappa$ yields a joint sequence of length $\kappa$, and I give strict separation results for all larger lengths of joint sequences. Even though the principles get strictly stronger in terms of direct implication, I show that they are all equiconsistent. This is contrasted with the case of $\theta$-strong cardinals where, for certain $\theta$, the existence of even the shortest joint Laver sequences carries nontrivial consistency strength. I also formulate a notion of jointness for ordinary $\diamondsuit_\kappa$-sequences on any regular cardinal $\kappa$. The main result concerning these shows that there is no separation according to length and a single $\diamondsuit_\kappa$-sequence yields joint families of all possible lengths.

 

In chapter 2 the notion of a grounded forcing axiom is introduced and explored in the case of Martin’s axiom. This grounded Martin’s axiom, a weakening of the usual axiom, states that the universe is a ccc forcing extension of some inner model and the restriction of Martin’s axiom to the posets coming from that ground model holds. I place the new axiom in the hierarchy of fragments of Martin’s axiom and examine its effects on the cardinal characteristics of the continuum. I also show that the grounded version is quite a bit more robust under mild forcing than Martin’s axiom itself.

Miha will shortly begin a post-doctoral research position at Charles University in Prague.

Models of set theory with the same reals and the same cardinals, but which disagree on the continuum hypothesis

I’d like to describe a certain interesting and surprising situation that can happen with models of set theory.

Theorem. If $\newcommand\ZFC{\text{ZFC}}\ZFC$ set theory is consistent, then there are two models of $\ZFC$ set theory $M$ and $N$ for which

  • $M$ and $N$ have the same real numbers $$\newcommand\R{\mathbb{R}}\R^M=\R^N.$$
  • $M$ and $N$ have the same ordinals and the same cardinals $$\forall\alpha\qquad \aleph_\alpha^M=\aleph_\alpha^N$$
  • But $M$ thinks that the continuum hypothesis $\newcommand\CH{\text{CH}}\CH$ is true, while $N$ thinks that $\CH$ is false.

This is a little strange, since the two models have the set $\R$ in common and they agree on the cardinal numbers, but $M$ thinks that $\R$ has size $\aleph_1$ and $N$ thinks that $\R$ has size $\aleph_2$.  In particular, $M$ can well-order the reals in order type $\omega_1$ and $N$ can do so in order type $\omega_2$, even though the two models have the same reals and they agree that these order types have different cardinalities.

Another abstract way to describe what is going on is that even if two models of set theory, even transitive models, agree on which ordinals are cardinals, they needn’t agree on which sets are equinumerous, for sets they have in common, even for the reals.

Let me emphasize that it is the requirement that the models have the same cardinals that makes the problem both subtle and surprising. If you drop that requirement, then the problem is an elementary exercise in forcing: start with any model $V$, and first force $\CH$ to fail in $V[H]$ by adding a lot of Cohen reals, then force to $V[G]$ by collapsing the continuum to $\aleph_1$. This second step adds no new reals and forces $\CH$, and so $V[G]$ and $V[H]$ will have the same reals, while $V[H]$ thinks $\CH$ is true and $V[G]$ thinks $\CH$ is false. The problem becomes nontrivial and interesting mainly when you insist that cardinals are not collapsed.

In fact, the situation described in the theorem can be forced over any given model of $\ZFC$.

Theorem. Every model of set theory $V\models\ZFC$ has two set-forcing extensions $V[G]$ and $V[H]$ for which

  • $V[G]$ and $V[H]$ have the same real numbers $$\newcommand\R{\mathbb{R}}\R^{V[G]}=\R^{V[H]}.$$
  • $V[G]$ and $V[H]$ have the same cardinals $$\forall\alpha\qquad \aleph_\alpha^{V[G]}=\aleph_\alpha^{V[H]}$$
  • But $V[G]$ thinks that the continuum hypothesis $\CH$ is true, while $V[H]$ thinks that $\CH$ is false.

Proof. Start in any model $V\models\ZFC$, and by forcing if necessary, let’s assume $\CH$ holds in $V$. Let $H\subset\text{Add}(\omega,\omega_2)$ be $V$-generic for the forcing to add $\omega_2$ many Cohen reals. So $V[H]$ satisfies $\neg\CH$ and has the same ordinals and cardinals as $V$.

Next, force over $V[H]$ using the forcing from $V$ to collapse $\omega_2$ to $\omega_1$, forming the extension $V[H][g]$, where $g$ is the generic bijection between those ordinals. Since we used the forcing in $V$, which is countably closed there, it makes sense to consider $V[g]$.  In this extension, the forcing $\text{Add}(\omega,\omega_1^V)$ and $\text{Add}(\omega,\omega_2^V)$ are isomorphic. Since $H$ is $V[g]$-generic for the latter, let $G=g\mathrel{“}H$ be the image of this filter in $\text{Add}(\omega,\omega_1)$, which is therefore $V[g]$-generic for the former. So $V[g][G]=V[g][H]$. Since the forcing $\text{Add}(\omega,\omega_1)$ is c.c.c., it follows that $V[G]$ also has the same cardinals as $V$ and hence also the same as in $V[H]$.

If we now view these extensions as $V[G][g]=V[H][g]$ and note that the countable closure of the collapse forcing in $V$ implies that $g$ adds no new reals over either $V[G]$ or $V[H]$, it follows that $\R^{V[G]}=\R^{V[H]}$. So the two models have the same reals and the same cardinals. But $V[G]$ has $\CH$ and $V[H]$ has $\neg\CH$, in light of the forcing, and so the proof is complete. QED

Let me prove the following surprising generalization.

Theorem. If $V$ is any model of $\ZFC$ and $V[G]$ is the forcing extension obtained by adding $\kappa$ many Cohen reals, for some uncountable $\kappa$, then for any other uncountable cardinal $\lambda$, there is another forcing extension $V[H]$ where $H$ is $V$-generic for the forcing to add $\lambda$ many Cohen reals, yet $\R^{V[G]}=\R^{V[H]}$.

Proof. Start in $V[G]$, and let $g$ be $V[G]$-generic to collapse $\lambda$ to $\kappa$, using the collapse forcing of the ground model $V$. This forcing is countably closed in $V$ and therefore does not add reals over $V[G]$. In $V[g]$, the two forcing notions $\text{Add}(\omega,\kappa)$ and $\text{Add}(\omega,\lambda)$ are isomorphic. Thus, since $G$ is $V[g]$-generic for the former poset, it follows that the image $H=g\mathrel{“}G$ is $V[g]$-generic for the latter poset. So $V[H]$ is generic over $V$ for adding $\lambda$ many Cohen reals. By construction, we have $V[G][g]=V[H][g]$, and since $g$ doesn’t add reals, it follows that $\R^{V[G]}=\R^{V[H]}$, as desired. QED

I have a vague recollection of having first heard of this problem many years ago, perhaps as a graduate student, although I don’t quite recall where it was or indeed what the construction was — the argument above is my reconstruction (which I have updated and extended from my initial post). If someone could provide a reference in the comments for due credit, I’d be appreciative.  The problem appeared a few years ago on MathOverflow.

A program that accepts exactly any desired finite set, in the right universe

Last year I made a post about the universal program, a Turing machine program $p$ that can in principle compute any desired function, if it is only run inside a suitable model of set theory or arithmetic.  Specifically, there is a program $p$, such that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$, there is a model $M\models\text{PA}$ — or of $\text{ZFC}$, whatever theory you like — inside of which program $p$ on input $n$ gives output $f(n)$.

This theorem is related to a very interesting theorem of W. Hugh Woodin’s, which says that there is a program $e$ such that $\newcommand\PA{\text{PA}}\PA$ proves $e$ accepts only finitely many inputs, but such that for any finite set $A\subset\N$, there is a model of $\PA$ inside of which program $e$ accepts exactly the elements of $A$. Actually, Woodin’s theorem is a bit stronger than this in a way that I shall explain.

Victoria Gitman gave a very nice talk today on both of these theorems at the special session on Computability theory: Pushing the Boundaries at the AMS sectional meeting here in New York, which happens to be meeting right here in my east midtown neighborhood, a few blocks from my home.

What I realized this morning, while walking over to Vika’s talk, is that there is a very simple proof of the version of Woodin’s theorem stated above.  The idea is closely related to an idea of Vadim Kosoy mentioned in my post last year. In hindsight, I see now that this idea is also essentially present in Woodin’s proof of his theorem, and indeed, I find it probable that Woodin had actually begun with this idea and then modified it in order to get the stronger version of his result that I shall discuss below.

But in the meantime, let me present the simple argument, since I find it to be very clear and the result still very surprising.

Theorem. There is a Turing machine program $e$, such that

  1. $\PA$ proves that $e$ accepts only finitely many inputs.
  2. For any particular finite set $A\subset\N$, there is a model $M\models\PA$ such that inside $M$, the program $e$ accepts all and only the elements of $A$.
  3. Indeed, for any set $A\subset\N$, including infinite sets, there is a model $M\models\PA$ such that inside $M$, program $e$ accepts $n$ if and only if $n\in A$.

Proof.  The program $e$ simply performs the following task:  on any input $n$, search for a proof from $\PA$ of a statement of the form “program $e$ does not accept exactly the elements of $\{n_1,n_2,\ldots,n_k\}$.” Accept nothing until such a proof is found. For the first such proof that is found, accept $n$ if and only if $n$ is one of those $n_i$’s.

In short, the program $e$ searches for a proof that $e$ doesn’t accept exactly a certain finite set, and when such a proof is found, it accepts exactly the elements of this set anyway.
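As a hedged sketch (my own, not from the post), here is the shape of the program $e$ in Python. Genuine PA-proof enumeration is the part that cannot usefully be run, so it is left as a stub returning no proofs, which corresponds to the case where no relevant proof is ever found; the names `enumerate_pa_proofs` and `e_accepts` are mine.

```python
# Hedged sketch of the program e described above, with the proof search
# stubbed out.  A real implementation would enumerate all PA-proofs in
# some effective order; here the enumeration is empty, matching the case
# in which no proof of the relevant form is ever found.

def enumerate_pa_proofs():
    """Hypothetical enumeration of PA-provable statements; each statement
    of the special form 'program e does not accept exactly {n1,...,nk}'
    is represented as a pair ('e-accepts-exactly-refuted', frozenset).
    Stubbed to yield nothing."""
    return iter(())

def e_accepts(n):
    """On input n: search for the first PA-proof of a statement of the
    form 'program e does not accept exactly the elements of F'.  If such
    a proof is found, accept n iff n is in F; otherwise accept nothing."""
    for kind, finite_set in enumerate_pa_proofs():
        if kind == 'e-accepts-exactly-refuted':
            return n in finite_set
    return False  # no proof found (stub): accept nothing
```

With the stubbed enumeration, `e_accepts` accepts no input at all, which is exactly the first case in the provable finiteness analysis below.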

Clearly, $\PA$ proves that program $e$ accepts only a finite set, since either no such proof is ever found, in which case $e$ accepts nothing (and the empty set is finite), or else such a proof is found, in which case $e$ accepts only that particular finite set. So $\PA$ proves that $e$ accepts only finitely many inputs.

But meanwhile, assuming $\PA$ is consistent, you cannot refute the assertion that program $e$ accepts exactly the elements of some particular finite set $A$, since if $\PA$ proved such a statement, then some proof of that form would exist, and program $e$ would find the first such proof and accept exactly the set it mentions; this behavior would itself be provable in $\PA$, contradicting the consistency of $\PA$.

Since you cannot refute any particular finite set as the accepting set for $e$, it follows that it is consistent with $\PA$ that $e$ accepts any particular finite set $A$ that you like. So there is a model of $\PA$ in which $e$ accepts exactly the elements of $A$. This establishes statement (2).

Statement (3) now follows by a simple compactness argument. Namely, for any $A\subset\N$, let $T$ be the theory of $\PA$ together with the assertions that program $e$ accepts $n$, for any particular $n\in A$, and the assertions that program $e$ does not accept $n$, for $n\notin A$. Any finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. Any model of this theory realizes statement (3). QED

One uses the Kleene recursion theorem to show the existence of the program $e$, which makes reference to $e$ in the description of what it does. Although this may look circular, it is a standard technique to use the recursion theorem to eliminate the circularity.
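The quine-style trick that the recursion theorem packages can be illustrated with a small runnable sketch (my own toy construction, not from the post): a function that converts a code template into a program whose own complete source is available to it in the variable `SRC`.

```python
def self_referential(template):
    """Quine-style construction behind the Kleene recursion theorem:
    return a Python program that first assigns its own complete source
    code to the variable SRC and then executes `template`, which may
    freely refer to SRC.  (A toy analogue of how the program e can
    search for proofs about its own behavior without circularity.)"""
    # The body reconstructs the full program text from the data t.
    body = 'SRC = "t = " + repr(t) + "\\n" + t\n' + template
    return "t = " + repr(body) + "\n" + body

# Example: a program that computes the length of its own source.
program = self_referential("result = len(SRC)")
namespace = {}
exec(program, namespace)
# namespace["SRC"] is exactly the text of `program` itself.
```

The point of the construction is that the circular-looking description "a program that refers to its own code" is realized without circularity: the program stores a template as data and reassembles its own source from that data at run time.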

This theorem immediately implies the classical result of Mostowski and Kripke that there is an independent family of $\Pi^0_1$ assertions, since the assertions $n\notin W_e$ are exactly such a family.

The theorem also implies a strengthening of the universal program theorem that I proved last year. Indeed, the two theorems can be realized with the same program!

Theorem. There is a Turing machine program $e$ with the following properties:

  1. $\PA$ proves that $e$ computes a finite function;
  2. For any particular finite partial function $f$ on $\N$, there is a model $M\models\PA$ inside of which program $e$ computes exactly $f$.
  3. For any partial function $f:\N\to\N$, finite or infinite, there is a model $M\models\PA$ inside of which program $e$ on input $n$ computes exactly $f(n)$, meaning that $e$ halts on $n$ if and only if $f(n)\downarrow$ and in this case $\varphi_e(n)=f(n)$.

Proof. The proof of statements (1) and (2) is just as in the earlier theorem. It is clear that $e$ computes a finite function, since either it computes the empty function, if no proof is found, or else it computes the finite function mentioned in the proof. And you cannot refute any particular finite function for $e$, since if you could, it would have exactly that behavior anyway, contradicting $\text{Con}(\PA)$. So statement (2) holds. But meanwhile, we can get statement (3) by a simple compactness argument. Namely, fix $f$ and let $T$ be the theory asserting $\PA$ plus the assertions that $\varphi_e(n)\uparrow$, for each $n$ not in the domain of $f$, and that $\varphi_e(n)=k$, whenever $f(n)=k$.  Every finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. But any model of this theory exactly fulfills statement (3). QED

Woodin’s proof is more difficult than the arguments I have presented, but I realize now that this extra difficulty is because he is proving an extremely interesting and stronger form of the theorem, as follows.

Theorem. (Woodin) There is a Turing machine program $e$ such that $\PA$ proves $e$ accepts at most a finite set, and for any finite set $A\subset\N$ there is a model $M\models\PA$ inside of which $e$ accepts exactly $A$. And furthermore, in any such $M$ and any finite $B\supset A$, there is an end-extension $M\subset_{end} N\models\PA$, such that in $N$, the program $e$ accepts exactly the elements of $B$.

This is a much more subtle claim, as well as philosophically interesting for the reasons that he dwells on.

The program I described above definitely does not achieve this stronger property, since my program $e$, once it finds the proof that $e$ does not accept exactly $A$, will accept exactly $A$, and this will continue to be true in all further end-extensions of the model, since that proof will continue to be the first one that is found.

Worldly cardinals are not always downwards absolute

 

I recently came to realize that worldly cardinals are not necessarily downward absolute to transitive inner models. That is, it can happen that a cardinal $\kappa$ is worldly in the full set-theoretic universe $V$, but not in some transitive inner model $W$, even when $W$ is itself a model of ZFC. The observation came out of some conversations I had with Alexander Block from Hamburg during his recent research visit to New York. Let me explain the argument.

A cardinal $\kappa$ is inaccessible, if it is an uncountable regular strong limit cardinal. The structure $V_\kappa$, consisting of the rank-initial segment of the set-theoretic universe up to $\kappa$, which can be generated from the empty set by applying the power set operation $\kappa$ many times, has many nice features. In particular, it is a transitive model of $\newcommand\ZFC{\text{ZFC}}\ZFC$. The models $V_\kappa$ for $\kappa$ inaccessible are precisely the uncountable Grothendieck universes used in category theory.

Although the inaccessible cardinals are often viewed as the entryway to the large cardinal hierarchy, there is a useful large cardinal concept weaker than inaccessibility. Namely, a cardinal $\kappa$ is worldly, if $V_\kappa$ is a model of $\ZFC$. Every inaccessible cardinal is worldly, and in fact a limit of worldly cardinals, because if $\kappa$ is inaccessible, then there is an elementary chain of cardinals $\lambda<\kappa$ with $V_\lambda\prec V_\kappa$, and all such $\lambda$ are worldly. The regular worldly cardinals are precisely the inaccessible cardinals, but the least worldly cardinal is always singular of cofinality $\omega$.

The worldly cardinals can be seen as a kind of poor-man’s inaccessible cardinal, in that worldliness often suffices in place of inaccessibility in many arguments, and this sometimes allows one to weaken a large cardinal hypothesis. But meanwhile, they do have some significant strengths. For example, if $\kappa$ is worldly, then $V_\kappa$ satisfies the principle that every set is an element of a transitive model of $\ZFC$.

It is easy to see that inaccessibility is downward absolute, in the sense that if $\kappa$ is inaccessible in the full set-theoretic universe $V$ and $W\newcommand\of{\subseteq}\of V$ is a transitive inner model of $\ZFC$, then $\kappa$ is also inaccessible in $W$. The reason is that $\kappa$ cannot be singular in $W$, since any short cofinal sequence in $W$ would still exist in $V$; and it cannot fail to be a strong limit there, since if some $\delta<\kappa$ had $\kappa$-many distinct subsets in $W$, then the corresponding injection of $\kappa$ into the power set of $\delta$ would still exist in $V$. So inaccessibility is downward absolute.

The various degrees of hyper-inaccessibility are also downwards absolute to inner models, so that if $\kappa$ is an inaccessible limit of inaccessible limits of inaccessible cardinals, for example, then this is also true in any inner model. This downward absoluteness extends all the way through the hyperinaccessibility hierarchy and up to the Mahlo cardinals and beyond. A cardinal $\kappa$ is Mahlo, if it is a strong limit and the regular cardinals below $\kappa$ form a stationary set. We have observed that being regular is downward absolute, and it is easy to see that every stationary set $S$ is stationary in every inner model, since otherwise there would be a club set $C$ disjoint from $S$ in the inner model, and this club would still be a club in $V$. Similarly, the various levels of hyper-Mahloness are also downward absolute.

So these smallish large cardinals are generally downward absolute. How about the worldly cardinals? Well, we can prove first off that worldliness is downward absolute to the constructible universe $L$.

Observation. If $\kappa$ is worldly, then it is worldly in $L$.

Proof. If $\kappa$ is worldly, then $V_\kappa\models\ZFC$. This implies that $\kappa$ is a beth-fixed point. The $L$ of $V_\kappa$, which is a model of $\ZFC$, is precisely $L_\kappa$, which is also the $V_\kappa$ of $L$, since $\kappa$ must also be a beth-fixed point in $L$. So $\kappa$ is worldly in $L$. QED
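In display form, the identities underlying the proof are the following, using that $\kappa$ is a beth-fixed point both in $V$ and in $L$:

```latex
\[
  L^{V_\kappa} \;=\; L_\kappa \;=\; \bigl(V_\kappa\bigr)^L,
\]
```

so that the constructible universe as computed inside $V_\kappa$ is the same structure as the $V_\kappa$ of $L$, and this structure satisfies $\ZFC$.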

But meanwhile, in the general case, worldliness is not downward absolute.

Theorem. Worldliness is not necessarily downward absolute to all inner models. It is relatively consistent with $\ZFC$ that there is a worldly cardinal $\kappa$ and an inner model $W\of V$, such that $\kappa$ is not worldly in $W$.

Proof. Suppose that $\kappa$ is a singular worldly cardinal in $V$. And by forcing if necessary, let us assume the GCH holds in $V$. Let $V[G]$ be the forcing extension where we perform the Easton product forcing $\newcommand\P{\mathbb{P}}\P$, so as to force a violation of the GCH at every regular cardinal $\gamma$. So the stage $\gamma$ forcing is $\newcommand\Q{\mathbb{Q}}\Q_\gamma=\text{Add}(\gamma,\gamma^{++})$.

First, I shall prove that $\kappa$ is worldly in the forcing extension $V[G]$. Since every set of rank less than $\kappa$ is added by some stage less than $\kappa$, it follows that $V_\kappa^{V[G]}$ is precisely $\bigcup_{\gamma<\kappa} V_\kappa[G_\gamma]$. Most of the $\ZFC$ axioms hold easily in $V_\kappa^{V[G]}$; the only difficult case is the collection axiom. And for this, by considering the ranks of witnesses, it suffices to show for every $\gamma<\kappa$ that every function $f:\gamma\to\kappa$ that is definable from parameters in $V_\kappa^{V[G]}$ is bounded. Suppose we have such a function, defined by $f(\alpha)=\beta$ just in case $\varphi(\alpha,\beta,p)$ holds in $V_\kappa^{V[G]}$. Let $\delta<\kappa$ be larger than the rank of $p$. Now consider $V_\kappa[G_\delta]$, which is a set-forcing extension of $V_\kappa$ and therefore a model of $\ZFC$. The tail forcing, from stage $\delta$ up to $\kappa$, is homogeneous in this model. And therefore we know that $f(\alpha)=\beta$ just in case $1$ forces $\varphi(\check\alpha,\check\beta,\check p)$, since these arguments are all in the ground model $V_\kappa[G_\delta]$. So the function is already definable in $V_\kappa[G_\delta]$. Because this is a model of $\ZFC$, the function $f$ is bounded below $\kappa$. So we get the collection axiom in $V_\kappa^{V[G]}$ and hence all of $\ZFC$ there, and so $\kappa$ is worldly in $V[G]$.
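The homogeneity step can be displayed as follows, writing $\P_{[\delta,\kappa)}$ for the tail of the Easton product from stage $\delta$ up to $\kappa$ (this notation is mine):

```latex
\[
  f(\alpha)=\beta
  \quad\Longleftrightarrow\quad
  V_\kappa[G_\delta]\models
  \Bigl(\,\mathbb{1}\Vdash_{\P_{[\delta,\kappa)}}\varphi(\check\alpha,\check\beta,\check p)\,\Bigr),
\]
```

which shows that $f$ is definable already in the model $V_\kappa[G_\delta]$, where all the parameters $\alpha$, $\beta$, $p$ reside.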

For any $A\of\kappa$, let $\P_A$ be the restriction of the Easton product forcing to include only the stages in $A$, and let $G_A$ be the corresponding restriction of the generic filter $G$. The full forcing $\P$ factors as $\P_A\times\P_{\kappa\setminus A}$, and so $V[G_A]\of V[G]$ is a transitive inner model of $\ZFC$.

But if we pick $A\of\kappa$ to be a short cofinal set in $\kappa$, which is possible because $\kappa$ is singular, then $\kappa$ will not be worldly in the inner model $V[G_A]$, since in $V_\kappa[G_A]$ we can define $A$ as the set of cardinals where the GCH fails, thereby revealing a short cofinal sequence in $\kappa$ and violating the replacement axiom there. So $\kappa$ is not worldly in $V[G_A]$.
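To spell out the definability in symbols: since the GCH held in $V$ and the forcing $\P_A$ adds $\gamma^{++}$ many subsets exactly at the stages $\gamma\in A$, the set $A$ is definable in $V_\kappa[G_A]$ as

```latex
\[
  A \;=\; \bigl\{\gamma<\kappa \mid \gamma\text{ is regular and } 2^\gamma>\gamma^+\bigr\}^{V_\kappa[G_A]},
\]
```

and since $A$ is a cofinal subset of $\kappa$ of small order type, its definability refutes the replacement axiom in $V_\kappa[G_A]$.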

In summary, $\kappa$ was worldly in $V[G]$, but not in the transitive inner model $W=V[G_A]$, and so worldliness is not downward absolute. QED

The inclusion relations of the countable models of set theory are all isomorphic

[bibtex key=”HamkinsKikuchi:The-inclusion-relations-of-the-countable-models-of-set-theory-are-all-isomorphic”]


Abstract. The structures $\langle M,\newcommand\of{\subseteq}\of^M\rangle$ arising as the inclusion relation of a countable model of sufficient set theory $\langle M,\in^M\rangle$, whether well-founded or not, are all isomorphic. These structures $\langle M,\of^M\rangle$ are exactly the countable saturated models of the theory of set-theoretic mereology: an unbounded atomic relatively complemented distributive lattice. A very weak set theory suffices, even finite set theory, provided that one excludes the $\omega$-standard models with no infinite sets and the $\omega$-standard models of set theory with an amorphous set. Analogous results hold also for class theories such as Gödel-Bernays set theory and Kelley-Morse set theory.

Set-theoretic mereology is the study of the inclusion relation $\of$ as it arises within set theory. In any set-theoretic context, with the set membership relation $\in$, one may define the corresponding inclusion relation $\of$ and investigate its properties. Thus, every model of set theory $\langle M,\in^M\rangle$ gives rise to a corresponding model of set-theoretic mereology $\langle M,\of^M\rangle$, the reduct to the inclusion relation.

In our previous article,

J. D. Hamkins and M. Kikuchi, Set-theoretic mereology, Logic and Logical Philosophy, special issue “Mereology and beyond, part II”, vol. 25, iss. 3, pp. 1-24, 2016.

we had identified exactly the complete theory of these mereological structures $\langle M,\of^M\rangle$. Namely, if $\langle M,\in^M\rangle$ is a model of set theory, even for extremely weak theories, including set theory without the infinity axiom, then the corresponding mereological reduct $\langle M,\of^M\rangle$ is an unbounded atomic relatively complemented distributive lattice. We call this the theory of set-theoretic mereology. By a quantifier-elimination argument that we give in our earlier paper, partaking of Tarski’s Boolean-algebra invariants and Ersov’s work on lattices, this theory is complete, finitely axiomatizable and decidable.  We had proved among other things that $\in$ is never definable from $\of$ in any model of set theory and furthermore, some models of set-theoretic mereology can arise as the inclusion relation of diverse models of set theory, with different theories. Furthermore, we proved that $\langle\text{HF},\subseteq\rangle\prec\langle V,\subseteq\rangle$.

After that work, we found it natural to inquire:

Question. Which models of set-theoretic mereology arise as the inclusion relation $\of$ of a model of set theory?

More precisely, given a model $\langle M,\newcommand\sqof{\sqsubseteq}\sqof\rangle$ of set-theoretic mereology, under what circumstances can we place a binary relation $\in^M$ on $M$ in such a way that $\langle M,\in^M\rangle$ is a model of set theory and the inclusion relation $\of$ defined in $\langle M,\in^M\rangle$ is precisely the given relation $\sqof$? One can view this question as seeking a kind of Stone-style representation of the mereological structure $\langle M,\sqof\rangle$, because such a model $M$ would provide a representation of $\langle M,\sqof\rangle$ as a relative field of sets via the model of set theory $\langle M,\in^M\rangle$.

A second natural question was to wonder how much of the theory of the original model of set theory can be recovered from the mereological reduct.

Question. If $\langle M,\of^M\rangle$ is the model of set-theoretic mereology arising as the inclusion relation $\of$ of a model of set theory $\langle M,\in^M\rangle$, what part of the theory of $\langle M,\in^M\rangle$ is determined by the structure $\langle M,\of^M\rangle$?

In the case of the countable models of ZFC, these questions are completely answered by our main theorems.

Main Theorems.

  1. All countable models of set theory $\langle M,\in^M\rangle\models\text{ZFC}$ have isomorphic reducts $\langle M,\of^M\rangle$ to the inclusion relation.
  2. The same holds for models of considerably weaker theories such as KP and even finite set theory, provided one excludes the $\omega$-standard models without infinite sets and the $\omega$-standard models having an amorphous set.
  3. These inclusion reducts $\langle M,\of^M\rangle$ are precisely the countable saturated models of set-theoretic mereology.
  4. Similar results hold for class theory: all countable models of Gödel-Bernays set theory have isomorphic reducts to the inclusion relation, and this reduct is precisely the countably infinite saturated atomic Boolean algebra.

Specifically, we show that the mereological reducts $\langle M,\of^M\rangle$ of the models of sufficient set theory are always $\omega$-saturated, and from this it follows on general model-theoretic grounds that they are all isomorphic, establishing statements (1) and (2). So a countable model $\langle M,\sqof\rangle$ of set-theoretic mereology arises as the inclusion relation of a model of sufficient set theory if and only if it is $\omega$-saturated, establishing (3) and answering the first question. Consequently, in addition, the mereological reducts $\langle M,\of^M\rangle$ of the countable models of sufficient set theory know essentially nothing of the theory of the structure $\langle M,\in^M\rangle$ from which they arose, since $\langle M,\of^M\rangle$ arises equally as the inclusion relation of other models $\langle M,\in^*\rangle$ with any desired sufficient alternative set theory, a fact which answers the second question. Our analysis works with very weak set theories, even finite set theory, provided one excludes the $\omega$-standard models with no infinite sets and the $\omega$-standard models with an amorphous set, since the inclusion reducts of these models are not $\omega$-saturated. We also prove that most of these results do not generalize to uncountable models, nor even to the $\omega_1$-like models.

Our results have some affinity with the classical results in models of arithmetic concerned with the additive reducts of models of PA. Restricting a model of set theory to the inclusion relation $\of$ is, after all, something like restricting a model of arithmetic to its additive part. Lipshitz and Nadel (1978) proved that a countable model of Presburger arithmetic (with $+$ only) can be expanded to a model of PA if and only if it is computably saturated. We had hoped at first to prove a corresponding result for the mereological reducts of the models of set theory. In arithmetic, the additive reducts are not all isomorphic, since the standard system of the PA model is fully captured by the additive reduct. Our main result for the countable models of set theory, however, turned out to be stronger than we had expected, since the inclusion reducts are not merely computably saturated, but fully $\omega$-saturated, and this is why they are all isomorphic. Meanwhile, Lipshitz and Nadel point out that their result does not generalize to uncountable models of arithmetic, and similarly ours also does not generalize to uncountable models of set theory.

The work leaves the following question open:

Question. Are the mereological reducts $\langle M,\of^M\rangle$ of all the countable models $\langle M,\in^M\rangle$ of ZF with an amorphous set all isomorphic?

We expect the answer to come from a deeper understanding of the Tarski-Ersov invariants for the mereological structures combined with knowledge of models of ZF with amorphous sets.

This is joint work with Makoto Kikuchi.

All countable models of set theory have the same inclusion relation up to isomorphism, CUNY Logic Workshop, April 2017

This will be a talk for the CUNY Logic Workshop, April 28, 2:00-3:30 in room 6417 at the CUNY Graduate Center.


Abstract.  Take any countable model of set theory $\langle M,\in^M\rangle\models\text{ZFC}$, whether well-founded or not, and consider the corresponding inclusion relation $\langle M,\newcommand\of{\subseteq}\of^M\rangle$.  All such models, we prove, are isomorphic. Indeed, if $\langle M,\in^M\rangle$ is a countable model of set theory — a very weak theory suffices, including finite set theory, if one excludes the $\omega$-standard models with no infinite sets and the $\omega$-standard models with an amorphous set — then the corresponding inclusion reduct $\langle M,\of^M\rangle$ is an $\omega$-saturated model of the theory we have called set-theoretic mereology. Since this is a complete theory, it follows by the back-and-forth construction that all such countable saturated models are isomorphic. Thus, the inclusion relation $\langle M,\of^M\rangle$ knows essentially nothing about the theory of the set-theoretic structure $\langle M,\in^M\rangle$ from which it arose. Analogous results hold also for class theories such as Gödel-Bernays set theory and Kelley-Morse set theory.

This is joint work with Makoto Kikuchi, and our paper is available at

J. D. Hamkins and M. Kikuchi, The inclusion relations of the countable models of set theory are all isomorphic, manuscript under review.

Our previous work, upon which these results build, is available at:

J. D. Hamkins and M. Kikuchi, Set-theoretic mereology, Logic and Logical Philosophy, special issue “Mereology and beyond, part II”, vol. 25, iss. 3, pp. 1-24, 2016.

The definable cut of a model of set theory can be changed by small forcing

If $M$ is a model of ZFC set theory, let $I$ be the definable cut of its ordinals, the collection of ordinals that are below an ordinal $\delta$ of $M$ that is definable in $M$ without parameters. This would include all the ordinals of $M$, if the definable ordinals happen to be unbounded in $M$, but one can also construct examples where the definable cut is bounded in $M$.  Let $M_I$ be the corresponding definable cut of $M$ itself, the rank-initial segment of $M$ determined by $I$, or in other words, the collection of all sets $x$ in $M$ of rank below a definable ordinal of $M$. Equivalently, $$M_I=\bigcup_{\delta\in I} V_\delta^M.$$ It is not difficult to see that this is an elementary substructure $M_I\prec M$, because we can verify the Tarski-Vaught criterion as follows. If $M\models\exists y\ \varphi(x,y)$, where $x\in M_I$, then let $\delta$ be a definable ordinal above the rank of $x$, and let $\theta$ be the supremum over all $a\in V_\delta^M$ of the minimal rank of a set $y$ for which $\varphi(a,y)$ holds, whenever there is such a $y$. This supremum $\theta$ is definable, and so since $x\in V_\delta^M$, the minimal rank of a $y$ such that $\varphi(x,y)$ is at most $\theta$. Consequently, since $\theta\in I$, such a $y$ can be found in $M_I$. So we have found the desired witness inside the substructure, and so $M_I\prec M$ is elementary. Note that in the general case, one does not necessarily know that $I$ has a least upper bound in $M$. Under suitable assumptions, it can happen that $I$ is unbounded in $M$, that $I$ is an ordinal of $M$, or that $I$ is bounded in $M$, but has no least upper bound.
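In symbols, the ordinal used in the Tarski-Vaught verification is:

```latex
\[
  \theta \;=\; \sup_{a\in V_\delta^M}\ \min\bigl\{\operatorname{rank}(y) \mid M\models\varphi(a,y)\bigr\},
\]
% where the supremum is taken only over those a for which some witness y exists.
```

Since $\theta$ is defined from the definable ordinal $\delta$ without further parameters, it is itself definable, and so $\theta\in I$.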

What I am interested in for this post is how the definable cut might be affected by forcing. Of course, it is easy to see that if $M$ is definable in $M[G]$, then the definable cut of $M[G]$ is at least as high as the definable cut of $M$, simply because the definable ordinals of $M$ remain definable in $M[G]$.

A second easy observation is that if the definable cut of $M$ is bounded in $M$, then we could perform large collapse forcing, collapsing a cardinal above $I$ to $\omega$, which would of course make every cardinal of $I$ countable in the extension $M[G]$. In this case, since $\omega_1^{M[G]}$ is definable, it would change the definable cut. So this kind of very large forcing can change the definable cut, making it larger.

But what about small forcing? Suppose that the forcing notion $\newcommand\P{\mathbb{P}}\P$ we intend to force with is small in the sense that it is in the definable cut $M_I$. This would be true if $\P$ itself were definable, for example, but really we only require that $\P$ has rank less than some definable ordinal of $M$. Can this forcing change the definable cut?

Let me show at least that the definable cut can never go up after small forcing.

Theorem. If $G\subset\P$ is $M$-generic for forcing $\P$ in the definable cut of $M$, then the definable cut of $M[G]$, considered as a class of ordinals, is contained in the definable cut of $M$.

Proof. Suppose that $G\subset\P$ is $M$-generic, and we consider the forcing extension $M[G]$. We have already proved that $M_I\prec M$ is an elementary submodel. I claim that this relation lifts to the forcing extension $M_I[G]\prec M[G]$. Note first that since $\P\in M_I$ and $M_I$ is a rank initial segment of $M$, it follows that $M_I$ has all the subsets of $\P$ in $M$, and so $G$ is $M_I$-generic. So the extension $M_I[G]$ makes sense. Next, suppose that $M[G]\models\varphi(a)$ for some $a\in M_I[G]$. If $\dot a$ is a name for $a$ in $M_I$, then there is some condition $p\in G$ forcing $\varphi(\dot a)$ over $M$. Since $M_I\prec M$, this is also forced by $p$ over $M_I$, and thus $M_I[G]\models\varphi(a)$ as well, as desired. So $M_I[G]\prec M[G]$, and from this it follows that every definable ordinal of $M[G]$ is in the cut $I$. So the definable cut did not get higher. QED

But can it go down? Not if the model $M$ is definable in $M[G]$, by our earlier easy observation. Consequently,

Theorem. If $M$ is definable in $M[G]$, where $G\subset\P$ is $M$-generic for forcing $\P$ below the definable cut of $M$, then the definable cut of $M[G]$ is the same as the definable cut of $M$.

Proof. It didn’t go down, since $M$ is definable in $M[G]$; and it didn’t go up, since $\P$ was small. QED

What if $M$ is not definable in $M[G]$? Can we make the definable cut go down after small forcing? The answer is yes.

Theorem. If ZFC is consistent, then there is a model $M\models\text{ZFC}$ with a definable notion of forcing $\P$ (hence in the definable cut of $M$), such that if $G\subset\P$ is $M$-generic, then the definable cut of the forcing extension $M[G]$ is strictly shorter than the definable cut of $M$.

Proof. Start with a model of $\text{ZFC}+V=L$, whose definable ordinals are bounded by a cardinal $\delta$. Let’s call it $L$, and let $I$ be the definable cut of $L$, which we assume is bounded by $\delta$. Let $M=L[G]$ be the forcing extension of $L$ obtained by performing an Easton product, adding a Cohen subset to every regular cardinal above $\delta$ in $L$. Since this forcing adds no sets below $\delta$, but adds a Cohen set at $\delta^+$, it follows that $\delta$ becomes definable in $L[G]$. In fact, since the forcing is homogeneous and definable from $\delta$, it follows that the definable ordinals of $L[G]$ are precisely the ordinals that are definable in $L$ with parameter $\delta$. These may be bounded or unbounded in $L[G]$. Now, let $\newcommand\Q{\mathbb{Q}}\Q$ be the Easton product forcing at the stages below $\delta$, and suppose that $H\subset\Q$ is $L[G]$-generic. Consider the model $L[G][H]$. Note that the forcing $\Q$ is definable in $L[G]$, since $\delta$ is definable there. This two-step forcing can be combined into one giant Easton product in $L$, the product that simply forces to add a Cohen subset to every regular cardinal. Since this combined forcing is homogeneous and definable in $L$, it follows that the definable ordinals of $L[G][H]$ are precisely the definable ordinals of $L$, whose cut is precisely $I$. In summary, the definable cut of $L[G]$ is strictly above $\delta$, since $\delta$ is definable in $L[G]$, and the forcing $\Q$ has size and rank $\delta$; but the forcing extension $L[G][H]$ has definable cut $I$, which is strictly bounded by $\delta$. So the definable cut was made smaller by small forcing, as claimed. QED
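Schematically, writing $\P_{<\delta}$ and $\P_{\geq\delta}$ for the parts of the full Easton product of $L$ at stages below and from $\delta$ (this notation is mine), the combination step in the proof is:

```latex
\[
  \P_{\text{full}} \;\cong\; \P_{<\delta}\times\P_{\geq\delta}
  \;=\; \Q\times\P_{\geq\delta},
  \qquad
  L[G][H] \;=\; L[H\times G],
\]
```

so that the two-step extension $L[G][H]$ is realized in one step as a generic extension of $L$ by the homogeneous, $L$-definable forcing $\P_{\text{full}}$.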

This post is an account of some ideas that Alexander Block and I had noted today during the course of our mathematical investigation of another matter.