Set-theoretic potentialism, CUNY Logic Workshop, September, 2016

This will be a talk for the CUNY Logic Workshop, September 16, 2016, at the CUNY Graduate Center, Room 6417, 2-3:30 pm.

Abstract.  In analogy with the ancient views on potential as opposed to actual infinity, set-theoretic potentialism is the philosophical position holding that the universe of set theory is never fully completed, but rather has a potential character, with greater parts of it becoming known to us as it unfolds. In this talk, I should like to undertake a mathematical analysis of the modal commitments of various specific natural accounts of set-theoretic potentialism. After developing a general model-theoretic framework for potentialism and describing how the corresponding modal validities are revealed by certain types of control statements, which we call buttons, switches, dials and ratchets, I apply this analysis to the case of set-theoretic potentialism, including the modalities of true-in-all-larger-$V_\beta$, true-in-all-transitive-sets, true-in-all-Grothendieck-Zermelo-universes, true-in-all-countable-transitive-models and others. Broadly speaking, the height-potentialist systems generally validate exactly S4.3 and the height-and-width-potentialist systems generally validate exactly S4.2. Each potentialist system gives rise to a natural accompanying maximality principle, which occurs when S5 is valid at a world, so that every possibly necessary statement is already true.  For example, a Grothendieck-Zermelo universe $V_\kappa$, with $\kappa$ inaccessible, exhibits the maximality principle with respect to assertions in the language of set theory using parameters from $V_\kappa$ just in case $\kappa$ is a $\Sigma_3$-reflecting cardinal, and it exhibits the maximality principle with respect to assertions in the potentialist language of set theory with parameters just in case $\kappa$ is fully reflecting, that is, $V_\kappa\prec V$.

This is joint work in progress with Øystein Linnebo, which builds on some of my prior work with George Leibman and Benedikt Löwe on the modal logic of forcing.

CUNY Logic Workshop abstract | link to article will be posted later

Set-theoretic mereology as a foundation of mathematics, Logic and Metaphysics Workshop, CUNY, October 2016

This will be a talk for the Logic and Metaphysics Workshop at the CUNY Graduate Center, Monday, October 24, 2016, 4:15-6:15 pm.

Abstract. In light of the comparative success of membership-based set theory in the foundations of mathematics, since the time of Cantor, Zermelo and Hilbert, it is natural to wonder whether one might find a similar success for set-theoretic mereology, based upon the set-theoretic inclusion relation $\subseteq$ rather than the element-of relation $\in$.  How well does set-theoretic mereology serve as a foundation of mathematics? Can we faithfully interpret the rest of mathematics in terms of the subset relation to the same extent that set theorists have argued (with whatever degree of success) that we may find faithful representations in terms of the membership relation? Basically, can we get by with merely $\subseteq$ in place of $\in$? Ultimately, I shall identify grounds supporting generally negative answers to these questions, concluding that set-theoretic mereology by itself cannot serve adequately as a foundational theory.

This is joint work with Makoto Kikuchi, and the talk is based on our joint article:

J. D. Hamkins and M. Kikuchi, Set-theoretic mereology, Logic and Logical Philosophy, special issue “Mereology and beyond, part II”, pp. 1-24, 2016.

The modal logic of set-theoretic potentialism, Kyoto, September 2016

This will be a talk for the workshop conference Mathematical Logic and Its Applications, which will be held at the Research Institute for Mathematical Sciences, Kyoto University, Japan, September 26-29, 2016, organized by Makoto Kikuchi. The workshop is being held in memory of Professor Yuzuru Kakuda, who was head of the research group in logic at Kobe University during my stay there many years ago.

Abstract.  Set-theoretic potentialism is the ontological view in the philosophy of mathematics that the universe of set theory is never fully completed, but rather has a potential character, with greater parts of it becoming known to us as it unfolds. In this talk, I should like to undertake a mathematical analysis of the modal commitments of various specific natural accounts of set-theoretic potentialism. After developing a general model-theoretic framework for potentialism and describing how the corresponding modal validities are revealed by certain types of control statements, which we call buttons, switches, dials and ratchets, I apply this analysis to the case of set-theoretic potentialism, including the modalities of true-in-all-larger-$V_\beta$, true-in-all-transitive-sets, true-in-all-Grothendieck-Zermelo-universes, true-in-all-countable-transitive-models and others. Broadly speaking, the height-potentialist systems generally validate exactly S4.3 and the height-and-width-potentialist systems validate exactly S4.2. Each potentialist system gives rise to a natural accompanying maximality principle, which occurs when S5 is valid at a world, so that every possibly necessary statement is already true.  For example, a Grothendieck-Zermelo universe $V_\kappa$, with $\kappa$ inaccessible, exhibits the maximality principle with respect to assertions in the language of set theory using parameters from $V_\kappa$ just in case $\kappa$ is a $\Sigma_3$-reflecting cardinal, and it exhibits the maximality principle with respect to assertions in the potentialist language of set theory with parameters just in case $\kappa$ is fully reflecting, that is, $V_\kappa\prec V$.

This is joint work with Øystein Linnebo, which builds on some of my prior work with George Leibman and Benedikt Löwe in the modal logic of forcing. Our research article is currently in progress.

Workshop program

The rearrangement number: how many rearrangements of a series suffice to verify absolute convergence? Mathematics Colloquium at Penn, September 2016

This will be a talk for the Mathematics Colloquium at the University of Pennsylvania, Wednesday, September 14, 2016, 3:30 pm, tea at 3 pm, in the mathematics department.

Abstract. The well-known Riemann rearrangement theorem asserts that a series $\sum_n a_n$ is absolutely convergent if and only if every rearrangement $\sum_n a_{p(n)}$ of it is convergent, and furthermore, any conditionally convergent series can be rearranged so as to converge to any desired extended real value. But how many rearrangements $p$ suffice to test for absolute convergence in this way? The rearrangement number, a new cardinal characteristic of the continuum, is the smallest size of a family of permutations such that whenever the convergence and value of a convergent series are invariant under all these permutations, then it is absolutely convergent. The exact value of the rearrangement number turns out to be independent of the axioms of set theory. In this talk, I shall place the rearrangement number into a discussion of cardinal characteristics of the continuum, including an elementary introduction to the continuum hypothesis and, time permitting, an account of Freiling’s axiom of symmetry.

This talk is based in part on current joint work with Jörg Brendle, Andreas Blass, Will Brian, Michael Hardy and Paul Larson.

Related MathOverflow post: How many rearrangements must fail to alter the value of a sum before you conclude that none do?

Set-theoretic geology and the downward-directed grounds hypothesis, CUNY Set Theory seminar, September 2016

This will be a talk for the CUNY Set Theory Seminar, September 2 and 9, 2016.

In two talks, I shall give a complete detailed account of Toshimichi Usuba’s recent proof of the strong downward-directed grounds hypothesis.  This breakthrough result answers what had been for ten years the central open question in the area of set-theoretic geology and leads immediately to numerous consequences that settle many other open questions in the area, as well as to a sharpening of some of the central concepts of set-theoretic geology, such as the fact that the mantle coincides with the generic mantle and is a model of ZFC.

Although forcing is often viewed as a method of constructing larger models extending a given model of set theory, the topic of set-theoretic geology inverts this perspective by investigating how the current set-theoretic universe $V$ might itself have arisen as a forcing extension of an inner model.  Thus, an inner model $W\subset V$ is a ground of $V$ if we can realize $V=W[G]$ as a forcing extension of $W$ by some $W$-generic filter $G\subset\mathbb{Q}\in W$.  It is a consequence of the ground-model definability theorem that every such $W$ is definable from parameters, and from this it follows that many second-order-seeming questions about the structure of grounds turn out to be first-order expressible in the language of set theory.

For example, Reitz had inquired in his dissertation whether any two grounds of $V$ must have a common deeper ground. Fuchs, myself and Reitz introduced the downward-directed grounds hypothesis DDG and the strong DDG, which asserts a positive answer, even for any set-indexed collection of grounds, and we showed that this axiom has many interesting consequences for set-theoretic geology.

Last year, Usuba proved the strong DDG, and I shall give a complete account of the proof, with some simplifications I had noticed. I shall also present Usuba’s related result that if there is a hyper-huge cardinal, then there is a bedrock model, a smallest ground. I find this to be a surprising and incredible result, as it shows that large cardinal existence axioms have consequences on the structure of grounds for the universe.

Among the consequences of Usuba’s result I shall prove are:

  1. Bedrock models are unique when they exist.
  2. The mantle is absolute by forcing.
  3. The mantle is a model of ZFC.
  4. The mantle is the same as the generic mantle.
  5. The mantle is the largest forcing-invariant class, and equal to the intersection of the generic multiverse.
  6. The inclusion relation agrees with the ground-of relation in the generic multiverse. That is, if $N\subset M$ are in the same generic multiverse, then $N$ is a ground of $M$.
  7. If ZFC is consistent, then the ZFC-provably valid downward principles of forcing are exactly S4.2.
  8. (Usuba) If there is a hyper-huge cardinal, then there is a bedrock for the universe.

Related topics in set-theoretic geology:

CUNY Set theory seminar abstract I | abstract II

Pluralism-inspired mathematics, including a recent breakthrough in set-theoretic geology, Set-theoretic Pluralism Symposium, Aberdeen, July 2016

Set-theoretic Pluralism, Symposium I, July 12-17, 2016, at the University of Aberdeen.  My talk will be the final talk of the conference.

Abstract. I shall discuss several bits of pluralism-inspired mathematics, including especially an account of Toshimichi Usuba’s recent proof of the strong downward-directed grounds hypothesis DDG, which asserts that the collection of ground models of the set-theoretic universe is downward directed. This breakthrough settles several of what were the main open questions of set-theoretic geology. It implies, for example, that the mantle is a model of ZFC and is identical to the generic mantle and that it is therefore the largest forcing-invariant class. Usuba’s analysis also happens to show that the existence of certain very large cardinals outright implies that there is a smallest ground model of the universe, an unexpected connection between large cardinals and forcing. In addition to these results, I shall present several other instances of pluralism-inspired mathematics, including a few elementary but surprising results that I hope will be entertaining.

Slides | Set-theoretic Pluralism Network | Conference program

My very first lemma, which also happened to involve a philosophical dispute

Let me recall the time of my very first lemma, which also happened to involve a philosophical dispute.

It was about 35 years ago; I was a kid in ninth grade at McKinley Junior High School, taking a class in geometry, taught by a charismatic math teacher. We were learning how to do proofs, which in that class always consisted of a numbered list of geometrical assertions, with a specific reason given for each assertion, either stating that it was “given” or that it followed from previous assertions by a theorem that we had come to know. Only certain types of reasons were allowed.

My instructor habitually used the overhead projector, writing on a kind of infinite scroll of transparency film, which he could wind up on one end of the projector, so as never to run out of room. During the semester, he had filled enough spools, it seemed, to fill the library of Alexandria.

One day, it came to be my turn to present to the rest of the class my proof of a certain geometrical theorem I had been assigned. I took the black marker and drew out my diagram and theorem statement. In my proof, I had found it convenient to first establish a certain critical fact, that two particular line segments in my diagram were congruent $\overline{PQ}\cong\overline{RS}$. In order to do so, I had added various construction lines to my diagram and reasoned with side-angle-side and so on.

Having established the congruency, I had then wanted to continue with my proof of the theorem. Since the previous construction lines were cluttering up my diagram, however, I simply erased them, leaving only my original diagram.

The class erupted with objection!  How could I sensibly continue now with my proof, claiming that $\overline{PQ}\cong\overline{RS}$, after I had erased the construction lines? After all, are those line segments still congruent once we erase the construction lines that provided the reason we first knew this to be true? Many of the students believed that my having erased the construction lines invalidated my proof.

So there I was, in a ninth-grade math class, making a philosophical argument to my fellow students that the truth of the congruence $\overline{PQ}\cong\overline{RS}$ was independent of my having drawn the construction lines, and that we could rely on the truth of that fact later on in my proof, even if I were to erase those construction lines.

After coming to an uneasy, tentative resolution of this philosophical dispute, I was then allowed to continue with the rest of my proof, establishing the main theorem.

I realized only much later that this had been my very first lemma, since I had isolated a mathematically central fact about a certain situation, proving it with a separate argument, and then I had used that fact in the course of proving a more general theorem.

The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme

  • J. D. Hamkins, “The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme.” (manuscript under review)  
    @ARTICLE{Hamkins:The-Vopenka-principle-is-inequivalent-to-but-conservative-over-the-Vopenka-scheme,
    author = {Joel David Hamkins},
    title = {The Vop\v{e}nka principle is inequivalent to but conservative over the Vop\v{e}nka scheme},
    journal = {},
    year = {},
    volume = {},
    number = {},
    pages = {},
    month = {},
    note = {manuscript under review},
    abstract = {},
    keywords = {},
    source = {},
    eprint = {1606.03778},
    archivePrefix = {arXiv},
    primaryClass = {math.LO},
    url = {http://jdh.hamkins.org/vopenka-principle-vopenka-scheme},
    }

Abstract. The Vopěnka principle, which asserts that every proper class of first-order structures in a common language admits an elementary embedding between two of its members, is not equivalent over GBC to the first-order Vopěnka scheme, which makes the Vopěnka assertion only for the first-order definable classes of structures. Nevertheless, the two Vopěnka axioms are equiconsistent and they have exactly the same first-order consequences in the language of set theory. Specifically, GBC plus the Vopěnka principle is conservative over ZFC plus the Vopěnka scheme for first-order assertions in the language of set theory.


The Vopěnka principle is the assertion that for every proper class $\mathcal{M}$ of first-order $\mathcal{L}$-structures, for a set-sized language $\mathcal{L}$, there are distinct members of the class $M,N\in\mathcal{M}$ with an elementary embedding $j:M\to N$ between them. In quantifying over classes, this principle is a single assertion in the language of second-order set theory, and it makes sense to consider the Vopěnka principle in the context of a second-order set theory, such as Gödel-Bernays set theory GBC, whose language allows one to quantify over classes. In this article, GBC includes the global axiom of choice.

In contrast, the first-order Vopěnka scheme makes the Vopěnka assertion only for the first-order definable classes $\mathcal{M}$ (allowing parameters). This theory can be expressed as a scheme of first-order statements, one for each possible definition of a class, and it makes sense to consider the Vopěnka scheme in the context of Zermelo-Fraenkel set theory ZFC with the axiom of choice.

Because the Vopěnka principle is a second-order assertion, it does not make sense to refer to it in the context of ZFC set theory, whose first-order language does not allow quantification over classes; one typically retreats to the Vopěnka scheme in that context. The theme of this article is to investigate the precise meta-mathematical interactions between these two treatments of Vopěnka’s idea.

Main Theorems.

  1. If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka scheme hold, but the Vopěnka principle fails.
  2. If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka principle hold.

It follows that the Vopěnka principle VP and the Vopěnka scheme VS are not equivalent, but they are equiconsistent and indeed, they have the same first-order consequences.

Corollaries.

  1. Over GBC, the Vopěnka principle and the Vopěnka scheme, if consistent, are not equivalent.
  2. Nevertheless, the two Vopěnka axioms are equiconsistent over GBC.
  3. Indeed, the two Vopěnka axioms have exactly the same first-order consequences in the language of set theory. Specifically, GBC plus the Vopěnka principle is conservative over ZFC plus the Vopěnka scheme for assertions in the first-order language of set theory. $$\text{GBC}+\text{VP}\vdash\phi\qquad\text{if and only if}\qquad\text{ZFC}+\text{VS}\vdash\phi$$

These results grew out of my answer to a MathOverflow question of Mike Shulman, Can Vopěnka’s principle be violated definably?, inquiring whether there would always be a definable counterexample to the Vopěnka principle, whenever it should happen to fail. I interpret the question as asking whether the Vopěnka scheme is necessarily equivalent to the Vopěnka principle, and the answer is negative.

The proof of the main theorem involves the concept of a stretchable set $g\subset\kappa$ for an $A$-extendible cardinal, which has the property that for every cardinal $\lambda>\kappa$ and every extension $h\subset\lambda$ with $h\cap\kappa=g$, there is an elementary embedding $j:\langle V_\lambda,\in,A\cap V_\lambda\rangle\to\langle V_\theta,\in,A\cap V_\theta\rangle$ such that $j(g)\cap\lambda=h$. Thus, the set $g$ can be stretched by an $A$-extendibility embedding so as to agree with any given $h$.

The hierarchy of logical expressivity

I’d like to give a simple account of what I call the hierarchy of logical expressivity for fragments of classical propositional logic. The idea is to investigate and classify the expressive power of fragments of the traditional language of propositional logic, with the five familiar logical connectives listed below, by considering subsets of these connectives and organizing the corresponding sublanguages of propositional logic into a hierarchy of logical expressivity.

  • conjunction (“and”), denoted $\wedge$
  • disjunction (“or”), denoted  $\vee$
  • negation (“not”), denoted $\neg$
  • conditional (“if…, then”), denoted  $\to$
  • biconditional (“if and only if”), denoted  $\renewcommand\iff{\leftrightarrow}\iff$

With these five connectives, there are, of course, precisely thirty-two ($32=2^5$) subsets, each giving rise to a corresponding sublanguage, the language of propositional assertions using only those connectives. Which sets of connectives are at least as expressive as which others? Which sets of connectives are incomparable in their expressivity? How many classes of expressivity are there?

Before continuing, let me mention that Ms. Zoë Auerbach (CUNY Graduate Center), one of the students in my logic-for-philosophers course this past semester, Survey of Logic for Philosophers, at the CUNY Graduate Center in the philosophy PhD program, had chosen to write her term paper on this topic.  She has kindly agreed to make her paper, “The hierarchy of expressive power of the standard logical connectives,” available here, and I shall post it soon.

To focus the discussion, let us define what I call the (pre)order of logical expressivity on sets of connectives. Namely, for any two sets of connectives, I define that $A\leq B$ with respect to logical expressivity, just in case every logical expression in any number of propositional atoms using only connectives in $A$ is logically equivalent to an expression using only connectives in $B$. Thus, $A\leq B$ means that the connectives in $B$ are collectively at least as expressive as the connectives in $A$, in the sense that with the connectives in $B$ you can express any logical assertion that you were able to express with the connectives in $A$. The corresponding equivalence relation $A\equiv B$ holds when $A\leq B$ and $B\leq A$, and in this case we shall say that the sets are expressively equivalent, for this means that the two sets of connectives can express the same logical assertions.
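
To make these comparisons concrete, here is a minimal computational sketch of the expressivity preorder (in Python; the representation and function names are my own, purely for illustration). It treats each connective as an operation on truth functions and computes, for a fixed number of propositional atoms, the set of truth functions expressible from a given set of connectives; comparing the resulting closures then decides the expressivity order restricted to expressions in that many atoms.

```python
from itertools import product

N_ATOMS = 2  # compare expressivity for expressions in two atoms p, q
ROWS = list(product([False, True], repeat=N_ATOMS))

# A truth function is a tuple of outputs, one entry per row of the truth table.
ATOMS = [tuple(row[i] for row in ROWS) for i in range(N_ATOMS)]

CONNECTIVES = {
    'and': lambda f, g: tuple(a and b for a, b in zip(f, g)),
    'or':  lambda f, g: tuple(a or b for a, b in zip(f, g)),
    'not': lambda f: tuple(not a for a in f),
    'to':  lambda f, g: tuple((not a) or b for a, b in zip(f, g)),
    'iff': lambda f, g: tuple(a == b for a, b in zip(f, g)),
}

def closure(connectives):
    """All truth functions in N_ATOMS atoms expressible from the given connectives."""
    funcs = set(ATOMS)
    changed = True
    while changed:
        changed = False
        for name in connectives:
            op = CONNECTIVES[name]
            arity = 1 if name == 'not' else 2
            for args in product(funcs, repeat=arity):
                h = op(*args)
                if h not in funcs:
                    funcs.add(h)
                    changed = True
    return funcs

def leq(A, B):
    """The expressivity preorder A <= B, restricted to expressions in N_ATOMS atoms."""
    return closure(A) <= closure(B)

print(leq({'and', 'or', 'not'}, {'to', 'not'}))  # True: {->, not} is complete
print(leq({'and'}, {'or', 'to'}))                # False: conjunction is not expressible
print(len(closure({'iff', 'not'})))              # 8: only the even-parity truth functions
```

Of course, such a computation checks the order only for expressions in a fixed number of atoms; the invariance arguments discussed below (truth-preservation, false-preservation, parity) are what promote these finite checks to claims about the full preorder.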

Expressivity hierarchy

The full set of connectives $\{\wedge,\vee,\neg,\to,\iff\}$ is well-known to be complete for propositional logic in the sense that every conceivable truth function, with any number of propositional atoms, is logically equivalent to an expression using only the classical connectives. Indeed, already the sub-collection $\{\wedge,\vee,\neg\}$ is fully complete, and hence expressively equivalent to the full collection, because every truth function can be expressed in disjunctive normal form, as a disjunction of finitely many conjunction clauses, each consisting of a conjunction of propositional atoms or their negations (and hence altogether using only disjunction, conjunction and negation). One can see this easily, for example, by noting that for any particular row of a truth table, there is a conjunction expression that is true on that row and only on that row. For example, the expression $p\wedge\neg r\wedge s\wedge \neg t$ is true on the row where $p$ is true, $r$ is false, $s$ is true and $t$ is false, and one can make similar expressions for any row in any truth table. Simply by taking the disjunction of such expressions for suitable rows where a $T$ is desired, one can produce an expression in disjunctive normal form that is true in any desired pattern (use $p\wedge\neg p$ for the never-true truth function). Therefore, every truth function has a disjunctive normal form, and so $\{\wedge,\vee,\neg\}$ is complete.
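
As a small illustration of the disjunctive-normal-form argument, here is a sketch (again Python, with hypothetical names of my own) that takes an arbitrary truth function, presented as a truth table, and writes down a disjunctive normal form for it using only $\wedge$, $\vee$ and $\neg$, one conjunction clause per true row, exactly as described above.

```python
from itertools import product

def dnf(table, atoms):
    """Disjunctive normal form for the truth function given by `table`,
    a dict mapping each tuple of atom truth values to True or False."""
    clauses = []
    for row, value in table.items():
        if value:  # one conjunction clause per row where the function is true
            literals = [a if v else f"~{a}" for a, v in zip(atoms, row)]
            clauses.append("(" + " & ".join(literals) + ")")
    # the never-true function gets the unsatisfiable clause (p & ~p)
    return " | ".join(clauses) if clauses else f"({atoms[0]} & ~{atoms[0]})"

# Example: a DNF equivalent of p -> q, using only &, | and ~.
atoms = ("p", "q")
table = {row: ((not row[0]) or row[1]) for row in product([False, True], repeat=2)}
print(dnf(table, atoms))   # (~p & ~q) | (~p & q) | (p & q)
```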

Pressing this further, one can eliminate either $\wedge$ or $\vee$ by systematically applying the de Morgan laws
$$p\vee q\quad\equiv\quad\neg(\neg p\wedge\neg q)\qquad\qquad p\wedge q\quad\equiv\quad\neg(\neg p\vee\neg q),$$
which allow one to reduce disjunction to conjunction and negation or reduce conjunction to disjunction and negation. It follows that $\{\wedge,\neg\}$ and $\{\vee,\neg\}$ are each complete, as is any superset of these sets, since a set is always at least as expressive as any of it subsets. Similarly, because we can express disjunction with negation and the conditional via $$p\vee q\quad\equiv\quad \neg p\to q,$$ it follows that the set $\{\to,\neg\}$ can express $\vee$, and hence also is complete. From these simple observations, we may conclude that each of the following fourteen sets of connectives is complete. In particular, they are all expressively equivalent to each other.
$$\{\wedge,\vee,\neg,\to,\iff\}$$
$$\{\wedge,\vee,\neg,\iff\}\qquad\{\wedge,\to,\neg,\iff\}\qquad\{\vee,\to,\neg,\iff\}\qquad \{\wedge,\vee,\neg,\to\}$$
$$\{\wedge,\neg,\iff\}\qquad\{\vee,\neg,\iff\}\qquad\{\to,\neg,\iff\}$$
$$\{\wedge,\vee,\neg\}\qquad\{\wedge,\to,\neg\}\qquad\{\vee,\to,\neg\}$$
$$\{\wedge,\neg\}\qquad \{\vee,\neg\}\qquad\{\to,\neg\}$$

Notice that each of those complete sets includes the negation connective $\neg$. If we drop it, then the set $\{\wedge,\vee,\to,\iff\}$ is not complete, since each of these four connectives is truth-preserving, and so any logical expression made from them will have a $T$ in the top row of the truth table, where all atoms are true. In particular, these four connectives collectively cannot express negation $\neg$, and so they are not complete.

Clearly, we can express the biconditional as two conditionals, via $$p\iff q\quad\equiv\quad (p\to q)\wedge(q\to p),$$ and so $\{\wedge,\vee,\to,\iff\}$ is expressively equivalent to $\{\wedge,\vee,\to\}$. And since disjunction can be expressed from the conditional with $$p\vee q\quad\equiv\quad ((p\to q)\to q),$$ it follows that the set is expressively equivalent to $\{\wedge,\to\}$. In light of $$p\wedge q\quad\equiv\quad p\iff(p\to q),$$ it follows that $\{\to,\iff\}$ can express conjunction and hence is also expressively equivalent to $\{\wedge,\vee,\to,\iff\}$. Since
$$p\vee q\quad\equiv\quad(p\wedge q)\iff(p\iff q),$$ it follows that $\{\wedge,\iff\}$ can express $\vee$ and hence also $\to$, because $$p\to q\quad\equiv\quad q\iff(q\vee p).$$ Similarly, using $$p\wedge q\quad\equiv\quad (p\vee q)\iff(p\iff q),$$ we can see that $\{\vee,\iff\}$ can express $\wedge$ and hence also is expressively equivalent to $\{\wedge,\vee,\iff\}$, which we have argued is equivalent to $\{\wedge,\vee,\to,\iff\}$.  For these reasons, the following sets of connectives are expressively equivalent to each other.
$$\{\wedge,\vee,\to,\iff\}$$
$$\{\wedge,\vee,\to\}\qquad\{\wedge,\vee,\iff\}\qquad \{\vee,\to,\iff\}\qquad \{\wedge,\to,\iff\}$$
$$\{\wedge,\iff\}\qquad \{\vee,\iff\}\qquad \{\to,\iff\}\qquad \{\wedge,\to\}$$
And as I had mentioned, these sublanguages are strictly less expressive than the full language, because these four connectives are all truth-preserving and therefore unable to express negation.

The set $\{\wedge,\vee\}$, I claim, is unable to express any of the other fundamental connectives, because $\wedge$ and $\vee$ are each false-preserving, and so any logical expression built from $\wedge$ and $\vee$ will have $F$ on the bottom row of the truth table, where all atoms are false. Meanwhile, $\to,\iff$ and $\neg$ are not false-preserving, since they each have $T$ on the bottom row of their defining tables. Thus, $\{\wedge,\vee\}$ lies strictly below the languages mentioned in the previous paragraph in terms of logical expressivity.

Meanwhile, using only $\wedge$ we cannot express $\vee$, since any expression in $p$ and $q$ using only $\wedge$ will have the property that any false atom will make the whole expression false (this uses the associativity of $\wedge$), and $p\vee q$ does not have this feature. Similarly, $\vee$ cannot express $\wedge$, since any expression using only $\vee$ is true if any one of its atoms is true, but $p\wedge q$ is not like this. For these reasons, $\{\wedge\}$ and $\{\vee\}$ are both strictly weaker than $\{\wedge,\vee\}$ in logical expressivity.

Next, I claim that $\{\vee,\to\}$ cannot express $\wedge$, and the reason is that the logical operations of $\vee$ and $\to$ each have the property that any expression built from them has at least as many $T$’s as $F$’s in the truth table. This property is true of any propositional atom, and if $\varphi$ has the property, so do $\varphi\vee\psi$ and $\psi\to\varphi$, since these expressions will be true at least as often as $\varphi$ is. Since $\{\vee,\to\}$ cannot express $\wedge$, this language is strictly weaker than $\{\wedge,\vee,\to,\iff\}$ in logical expressivity. Actually, since as we noted above $$p\vee q\quad\equiv\quad ((p\to q)\to q),$$ it follows that $\{\vee,\to\}$ is expressively equivalent to $\{\to\}$.

Meanwhile, since $\vee$ is false-preserving, it cannot express $\to$, and so $\{\vee\}$ is strictly less expressive than $\{\vee,\to\}$, which is expressively equivalent to $\{\to\}$.

Consider next the language corresponding to $\{\iff,\neg\}$. I claim that this set is not complete. This argument is perhaps a little more complicated than the other arguments we have given so far. What I claim is that both the biconditional and negation are parity-preserving, in the sense that any logical expression using only $\neg$ and $\iff$ will have an even number of $T$’s in its truth table. This is certainly true of any propositional atom, and if true for $\varphi$, then it is true for $\neg\varphi$, since there are an even number of rows altogether; finally, if both $\varphi$ and $\psi$ have even parity, then I claim that $\varphi\iff\psi$ will also have even parity. To see this, note first that this biconditional is true just in case $\varphi$ and $\psi$ agree, either having the pattern T/T or F/F. If there are an even number of times where both are true jointly T/T, then the remaining occurrences of T/F and F/T will also be even, by considering the T’s for $\varphi$ and $\psi$ separately, and consequently, the number of occurrences of F/F will be even, making $\varphi\iff\psi$ have even parity. If the pattern T/T is odd, then also T/F and F/T will be odd, and so F/F will have to be odd to make up an even number of rows altogether, and so again $\varphi\iff\psi$ will have even parity. Since conjunction, disjunction and the conditional do not have even parity, it follows that $\{\iff,\neg\}$ cannot express any of the other fundamental connectives.

Meanwhile, $\{\iff\}$ is strictly less expressive than $\{\iff,\neg\}$, since the biconditional $\iff$ is truth-preserving but negation is not. And clearly $\{\neg\}$ can express only unary truth functions, since any expression using only negation has only one propositional atom, as in $\neg\neg\neg p$. So both $\{\iff\}$ and $\{\neg\}$ are strictly less expressive than $\{\iff,\neg\}$.

Lastly, I claim that $\iff$ is not expressible from $\to$. If it were, then since $\vee$ is also expressible from $\to$, we would have that $\{\vee,\iff\}$ is expressible from $\to$, contradicting our earlier observation that $\{\to\}$ is strictly less expressive than $\{\vee,\iff\}$, as this latter set can express $\wedge$, but $\to$ cannot, since every expression in $\to$ has at least as many $T$’s as $F$’s in its truth table.

These observations altogether establish the hierarchy of logical expressivity shown in the diagram displayed above.

It is natural, of course, to want to extend the hierarchy of logical expressivity beyond the five classical connectives. If one considers all sixteen binary logical operations, then Greg Restall has kindly produced the following image, which shows how the hierarchy we discussed above fits into the resulting hierarchy of expressivity. This diagram shows only the equivalence classes, rather than all $65536=2^{16}$ sets of connectives.

Full binary expressivity lattice

If one wants to go beyond merely the binary connectives, then one lands at Post’s lattice, pictured below (image due to Emil Jeřábek), which is the countably infinite (complete) lattice of logical expressivity for arbitrary sets of Boolean truth functions of any arity. Every such expressivity class is finitely generated.

Drunk Science: Infinity, special guest, June 23, 2016

Drunk Science

I shall be special guest at Drunk Science: Infinity, an experimental comedy show in Brooklyn, during which three intoxicated comedians will compete to offer the best dissertation defense on the topic of my research.

The event will take place Thursday, June 23, 2016, (doors 7pm, show 8pm) at the Littlefield performance and art space, 622 Degraw Street between 3rd and 4th Avenue in Brooklyn. Tickets from $5.  (Get tickets now, since the shows often sell out.)

Update: What a riot it was! I really had a lot of fun.

 

The pirate treasure division problem


In my logic course this semester, as a part of the section on the logic of games, we considered the pirate treasure division problem.

Imagine a pirate ship with a crew of fearsome, perfectly logical pirates and a treasure of 100 gold coins to be divided amongst them. How shall they do it? They have long agreed upon the pirate treasure division procedure: The pirates are linearly ordered by rank, with the Captain, the first Lieutenant, the second Lieutenant and so on down the line; but let us simply refer to them as Pirate 1, Pirate 2, Pirate 3 and so on. For the division procedure, all the pirates assemble on deck, and the lowest-ranking pirate mounts the plank. Facing the other pirates, she proposes a particular division of the gold — so-and-so many gold pieces to the captain, so-and-so many pieces to Pirate 2 and so on.  The pirates then vote on the plan, including the pirate on the plank, and if a strict majority of the pirates approve of the plan, then it is adopted and that is how the gold is divided. But if the pirate’s plan is not approved by a pirate majority, then regretfully she must walk the plank into the sea (and her death) and the procedure continues with the next-lowest ranking pirate, who of course is now the lowest-ranking pirate.

Suppose that you are pirate 10: what plan do you propose?  Would you think it is a good idea to propose that you get to keep 94 gold pieces for yourself, with the six remaining given to a few of the other pirates? In fact, you can propose just such a thing, and if you do it correctly, your plan will pass!

Before explaining why, let me tell you a little more about the pirates. I mentioned that the pirates are perfectly logical, and not only that, they have the common knowledge that they are all perfectly logical. In particular, in their reasoning they can rely on the fact that the other pirates are logical, and that the other pirates know that they are all logical and that they know that, and so on.

Furthermore, it is common knowledge amongst the pirates that they all share the same pirate value system, with the following strictly ordered list of priorities:

Pirate value system:

  1. Stay alive.
  2. Get gold.
  3. Cause the death of other pirates.
  4. Arrange that others' gold goes to the most senior pirates.

That is, at all costs, each pirate would prefer to avoid death, and if alive, to get as much gold as possible, but having achieved that, would prefer that as many other pirates die as possible (but not so much as to give up even one gold coin for additional deaths), and if all other things are equal, would prefer that whatever gold was not gotten for herself, that it goes as much as possible to the most senior pirates, for the pirates are, in their hearts, conservative people.

So, what plan should you propose as Pirate 10? Well, naturally, the pirates will consider Pirate 10’s plan in light of the alternative, which will be the plan proposed by Pirate 9, which will be compared with the plan of Pirate 8 and so on. Thus, it seems we should propagate our analysis from the bottom, working backwards from what happens with a very small number of pirates.

One pirate. If there is only one pirate, the captain, then she mounts the plank, and clearly she should propose “Pirate 1 gets all the gold”, and she should vote in favor of this plan, and so Pirate 1 gets all the gold, as anyone would have expected.

Two pirates. If there are exactly two pirates, then Pirate 2 will mount the plank, and what will she propose? She needs a majority of the two pirates, which means she must get the captain to vote for her plan. But no matter what plan she proposes, even if it is that all the gold should go to the captain, the captain will vote against the plan, since if Pirate 2 is killed, then the captain will get all the gold anyway, and because of pirate value 3, she would prefer that Pirate 2 is killed off.  So Pirate 2’s plan will not be approved by the captain, and so unfortunately, Pirate 2 will walk the plank.

Three pirates. If there are three pirates, then what will Pirate 3 propose? Well, she needs only two votes, and one of them will be her own. So she must convince either Pirate 1 or Pirate 2 to vote for her plan. But actually, Pirate 2 will have a strong incentive to vote for the plan regardless, since otherwise Pirate 2 will be in the situation of the two-pirate case, which ended with Pirate 2’s death. So Pirate 3 can count on Pirate 2’s vote regardless, and so Pirate 3 will propose:  Pirate 3 gets all the gold! This will be approved by both Pirate 2 and Pirate 3, a majority, and so with three pirates, Pirate 3 gets all the gold.

Four pirates. Pirate 4 needs to have three votes, so she needs to get two of the others to vote for her plan. She notices that if she is to die, then Pirates 1 and 2 will get no gold, and so she realizes that if she offers them each one gold coin, they will prefer that, because of the pirate value system. So Pirate 4 will propose to give one gold coin each to Pirates 1 and 2, and 98 gold coins to herself. This plan will pass with the votes of 1, 2 and 4.

Five pirates. Pirate 5 needs three votes, including her own. She can effectively buy the vote of Pirate 3 with one gold coin, since Pirate 3 will otherwise get nothing in the case of four pirates. And she needs one additional vote, that of Pirate 1 or 2, which she can get by offering two gold coins. Because of pirate value 4, she would prefer that the coins go to the highest ranking pirate, so she offers the plan:  two coins to Pirate 1, nothing to Pirate 2, one coin to Pirate 3, nothing to Pirate 4 and 97 coins to herself.  This plan will pass with the votes of 1, 3 and 5.

Six pirates. Pirate 6 needs four votes, and she can buy the votes of Pirates 2 and 4 with one gold coin each, and then two gold coins to Pirate 3, which is cheaper than the alternatives. So she proposes:  one coin each to 2 and 4, two coins to 3 and 96 coins for herself, and this passes with the votes of 2, 3, 4 and 6.

Seven pirates. Pirate 7 needs four votes, and she can buy the votes of Pirates 1 and 5 with only one coin each, since they get nothing in the six-pirate case. By offering two coins to Pirate 2, she can get the one further vote she needs (and she prefers to give the extra gold to Pirate 2 rather than to the other pirates, in light of the pirate values), keeping the remaining 96 coins for herself.

Eight pirates. Pirate 8 needs five votes, and she can buy the votes of Pirates 3, 4 and 6 with one coin each, and ensure another vote by giving two coins to Pirate 1, keeping the other 95 coins for herself. With her own vote, this plan will pass.

Nine pirates. Pirate 9 needs five votes, and she can buy the votes of Pirates 2, 5 and 7 with one coin each and the vote of Pirate 3 with two coins; with her own vote, the plan will pass, leaving 95 coins for herself.

Ten pirates. In light of the division offered by Pirate 9, we can now see that Pirate 10 can ensure six votes by proposing to give one coin each to Pirates 1, 4, 6 and 8, two coins to Pirate 2, and the remaining 94 coins for herself. This plan will pass with those pirates voting in favor (and herself), because they each get more gold this way than they would under the plan of Pirate 9.

We can summarize the various proposals in a table, where the $n^{\rm th}$ row corresponds to the proposal of Pirate $n$.

|                | 1   | 2 | 3   | 4  | 5  | 6  | 7  | 8  | 9  | 10 |
|----------------|-----|---|-----|----|----|----|----|----|----|----|
| One pirate     | 100 |   |     |    |    |    |    |    |    |    |
| Two pirates    | *   | X |     |    |    |    |    |    |    |    |
| Three pirates  | 0   | 0 | 100 |    |    |    |    |    |    |    |
| Four pirates   | 1   | 1 | 0   | 98 |    |    |    |    |    |    |
| Five pirates   | 2   | 0 | 1   | 0  | 97 |    |    |    |    |    |
| Six pirates    | 0   | 1 | 2   | 1  | 0  | 96 |    |    |    |    |
| Seven pirates  | 1   | 2 | 0   | 0  | 1  | 0  | 96 |    |    |    |
| Eight pirates  | 2   | 0 | 1   | 1  | 0  | 1  | 0  | 95 |    |    |
| Nine pirates   | 0   | 1 | 2   | 0  | 1  | 0  | 1  | 0  | 95 |    |
| Ten pirates    | 1   | 2 | 0   | 1  | 0  | 1  | 0  | 1  | 0  | 94 |

(In the two-pirate row, X indicates that Pirate 2 walks the plank, after which Pirate 1 gets all the gold, as in the one-pirate case.)

There are a few things to notice, which we can use to deduce how the pattern will continue. Notice that in each row beyond the third row, the number of pirates that get no coins is almost half (the largest integer strictly less than half), exactly one pirate gets two coins, and the remainder get one coin, except for the proposer herself, who gets all the rest. This pattern is sustainable for as long as there is enough gold to implement it, because each pirate can effectively buy the votes of the pirates getting $0$ under the alternative plan with one fewer pirate, and this will be at most one less than half of the previous number; then, she can buy one more vote by giving two coins to one of the pirates who got only one coin in the alternative plan; and with her own vote this will be half plus one, which is a majority. We can furthermore observe that by the pirate value system, the two coins will always go to either Pirate 1, 2 or 3, since one of these will always be the top-ranked pirate having one coin on the previous round. They each cycle with the pattern of 0 coins, one coin, two coins in the various proposals. At least until the gold becomes limited, all the other pirates from Pirate 4 onwards will alternate between zero coins and one coin with each subsequent proposal, and Pirate $n-1$ will always get zero from Pirate $n$.

For this reason, we can see that the pattern continues upward until at least Pirate 199, whose proposal will follow the pattern:

199 Pirates: 1 2 0 0 1 0 1 0 1 0 1 0 1 $\dots$ 1 0 1 0 0

It is with Pirate 199, specifically, that for the first time it takes all one hundred coins to buy the other votes, since she must give ninety-eight pirates one coin each, and two coins to Pirate 2 in order to have one hundred votes altogether, including her own, leaving no coins left over for herself.

For this reason, Pirate 200 will have a successful proposal, since she no longer needs to spend two coins for one vote, as the proposal of Pirate 199 has one hundred pirates getting zero. So Pirate 200 can get 100 votes by proposing one coin to everyone who would get zero from 199, plus her own vote, for a majority of 101 votes.

200 pirates: 0 0 1 1 0 1 0 1 0 1 0 $\dots$ 0 1 0 1 1 0

Pirate 201 also needs 101 votes, which she can get by giving all the zeros of the 200 case one coin each, plus her own vote. The unfortunate Pirate 202, however, needs 102 votes, and this will not be possible, since she has only 100 coins, and so Pirate 202 will die. The interesting thing to notice next is that Pirate 203 will therefore be able to count on the vote of Pirate 202 without paying any gold for it, and so since she needs only 100 additional votes (after her own vote and Pirate 202’s vote), she will be able to buy 100 votes for one coin each. Pirate 204 will again be one coin short, and so she will die. Although Pirate 205 will be able to count on that one additional free vote, this will be insufficient to gain a passing proposal, because she will be able to buy one hundred votes with the coins, plus her own vote and the free vote of Pirate 204, making 102 votes altogether, which is not a majority. Similarly, Pirate 206 will fall short, because even with her vote and the free votes of 204 and 205, she will be able to get at most 103 votes, which is not a majority. Thus, Pirate 207 will be able to count on the votes of Pirates 204, 205 and 206; together with her own vote and 100 more votes, gotten by giving one coin each to pirates who would otherwise get nothing, she will have 104 votes, which is a majority.

The reader is encouraged to investigate further to see how the pattern continues. It is a fun problem to work out! What emerges is the phenomenon by which longer and longer sequences of pirates in a row find themselves unable to make a winning proposal, and then suddenly a pirate is able to survive by counting on their votes.
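
For readers who would like to check the table above, or to see how the pattern continues for larger crews and other numbers of coins, here is a minimal backward-induction sketch (Python; the function name and representation are mine, not part of the original puzzle). It encodes the assumptions stated above: strict-majority voting, the four-tier pirate value system, and ties among equally cheap votes resolved in favour of the most senior pirates.

```python
def pirate_outcomes(n_pirates, coins=100):
    """What happens, pirate by pirate, when the division procedure starts
    with n_pirates on board: pirate i maps to ('dead', None) or ('alive', gold)."""
    outcome = {1: ('alive', coins)}  # a lone pirate keeps everything
    for n in range(2, n_pirates + 1):
        alt = outcome                     # the outcome if pirate n walks the plank
        votes_needed = n // 2 + 1         # strict majority
        # Pirates who would die in the alternative vote yes for free (value 1);
        # everyone else must be offered one coin more than her alternative share (values 2, 3).
        free = [i for i in range(1, n) if alt[i][0] == 'dead']
        buyable = sorted((alt[i][1] + 1, i) for i in range(1, n) if alt[i][0] == 'alive')
        to_buy = votes_needed - 1 - len(free)   # votes needed beyond her own and the free ones
        chosen = buyable[:max(to_buy, 0)]       # cheapest first; seniors first on ties (value 4)
        cost = sum(c for c, _ in chosen)
        if to_buy <= len(buyable) and cost <= coins:
            proposal = {i: 0 for i in range(1, n)}
            for c, i in chosen:
                proposal[i] = c
            outcome = {i: ('alive', proposal[i]) for i in range(1, n)}
            outcome[n] = ('alive', coins - cost)
        else:
            outcome = dict(alt)               # pirate n cannot afford a passing plan
            outcome[n] = ('dead', None)
    return outcome

# Reproduce the ten-pirate row of the table above.
print(pirate_outcomes(10))
# Which proposers fail with 100 coins? Pirate 2, then 202, 204, 205, 206, as discussed above.
print([n for n in range(2, 208) if pirate_outcomes(n)[n][0] == 'dead'])  # [2, 202, 204, 205, 206]
```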

It is very interesting also to work out what happens when there is a very small number of coins. For example, if there is only one gold coin, then already Pirate 4 is unable to make a passing proposal, since she can buy only one other vote, and with her own this will make only two votes, falling short of a majority. With only one coin, Pirate 5 will survive by buying a vote from Pirate 1 and counting on the vote of Pirate 4 and her own vote, for a majority.

Even the case of zero coins is interesting to think about! In this case, there is no gold to distribute, and so the voting is really just about whether the pirate should walk the plank or not. If only one pirate, she will live. Pirate 2 will die, since Pirate 1 will vote against. But for that reason, Pirate 2 will vote in favor of Pirate 3, who will live. The pattern that emerges is:

lives, dies, lives, dies, dies, dies, lives, dies, dies, dies, dies, dies, dies, dies, lives, ….

After each successful proposal, where the pirate lives, subsequent proposers must be able to count on many deaths in a row in order to gather enough votes. So after each “lives” in the pattern, the run of “dies” roughly doubles in length before there are enough votes to support the next pirate who lives.
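
Assuming the little `pirate_outcomes` sketch from earlier in the post, the zero-coin pattern can be reproduced directly; the survivors are exactly the crews of size one less than a power of two.

```python
# Requires the pirate_outcomes sketch above; with no gold, only crews of size 2^k - 1
# have a surviving proposer.
print([n for n in range(1, 64) if pirate_outcomes(n, coins=0)[n][0] == 'alive'])
# [1, 3, 7, 15, 31, 63]
```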

See also the Pirate Game entry on Wikipedia, which is a slightly different formulation of the puzzle, since tie-votes are effectively counted as success in that variation. For this reason, the outcomes are different in that variation. I prefer the strict-majority variation, since I find it interesting that one must sometimes use two gold coins to gain the majority, and also because the death of Pirate 2 arrives right away in an interesting way, rather than having to wait for 200 or more pirates as with the plurality version.

Another (inessential) difference in presentation is that in the other version of the puzzle, they have the captain on the plank first, and then always the highest-ranking pirate making the proposal, rather than the lowest-ranking pirate. This corresponds simply to inverting the ranking, and so it doesn’t change the results.

The puzzle appears to have been around for some time, but I am unsure of the exact provenance. Ian Stewart wrote a popular 1998 article for Scientific American analyzing the patterns that arise when the number of pirates is large in comparison with the number of gold pieces.

How does a slinky fall?

Have you ever observed carefully how a slinky falls? Suspend a slinky from one end, letting it hang freely in the air under its own weight, and then, let go! The slinky begins to fall. The top of the slinky, of course, begins to fall the moment you let go of it. But what happens at the bottom of the slinky? Does it also start to fall at the same moment you release the top? Or perhaps it moves upward, as the slinky contracts as it falls? Or does the bottom of the slinky simply hang motionless in the air for a time?

The surprising fact is that indeed the bottom of the slinky doesn’t move at all when you release the top of the slinky! It hangs momentarily motionless in the air in exactly the same coiled configuration that it had before the drop. This is the surprising slinky drop effect.

My son (age 13, eighth grade) took up the topic for his science project this year at school.  He wanted to establish the basic phenomenon of the slinky drop effect and to investigate some of the subtler aspects of it.  For a variety of different slinky types, he filmed the slinky drops against a graded background with a high-speed camera, and then replayed them in slow motion to watch carefully and take down the data.  Here are a few sample videos. He made about a dozen drops altogether.  For the actual data collection, the close-up videos were more useful. Note the ring markers A, B, C, and so on, in some of the videos.

 

See more videos here.

For each slinky drop video, he went through the frames and recorded the vertical location of various marked rings (you can see the labels A, B, C and so on in some of the videos above) into a spreadsheet. From this data he then produced graphs such as the following for each slinky drop:

Small metal slinky graph

 

Large metal slinky graph

Plastic Slinky graph

In each case, you can see clearly in the graph the moment when the top of the slinky is released, since this is the point at which the top line begins to descend. The thing to notice next — the main slinky drop effect — is that the lower parts of the slinky do not move at the same time. Rather, the lower lines remain horizontal for some time after the drop point. Basically, they remain horizontal until the bulk of the slinky nearly descends upon them. So the experiments clearly establish the main slinky drop phenomenon: the bottom of the slinky remains motionless for a time hanging in the air unchanged after the top is released.

In addition to this effect, however, my son was focused on investigating a much more subtle aspect of the slinky drop phenomenon. Namely, when exactly does the bottom of the slinky start to move?  Some have said that the bottom moves only when the top catches up to it; but my son hypothesized, based on observations, as well as discussions with his father and uncles, that the bottom should start to move slightly before the bulk of the slinky meets it. Namely, he thought that when you release the top of the slinky, a wave of motion travels through the slinky, and this wave travels slightly faster than the top of the slinky falls. The bottom moves, he hypothesized, when the wave front first gets to the bottom.

His data contains some confirming evidence for this subtler hypothesis, but for some of the drops, the experiment was inconclusive on this smaller effect. Overall, he had a great time undertaking the science project.

June 2016 Update: On the basis of his science fair poster and presentation, my son was selected as nominee to the Broadcom Masters national science fair competition! He is now competing against other nominees (top 10% of participating science fairs) for a chance to present his research in Washington at the final national competition next October.

Slinky drop on YouTube | Modeling a falling slinky (Wired)
Explaining an astonishing slinky | Slinky drop on physics.stackexchange
Cross & Wheatland, “Modeling a falling slinky”

Jacob Davis, PhD 2016, Carnegie Mellon University

Jacob Davis successfully defended his dissertation, “Universal Graphs at $\aleph_{\omega_1+1}$ and Set-theoretic Geology,” at Carnegie Mellon University on April 29, 2016, under the supervision of James Cummings. I was on the dissertation committee (participating via Google Hangouts), along with Ernest Schimmerling and Clinton Conley.

Jacob Davis

CMU web page | Google+ profile | ar$\chi$iv

The thesis consisted of two main parts. In the first half, starting from a model of ZFC with a supercompact cardinal, Jacob constructed a model in which $2^{\aleph_{\omega_1}} = 2^{\aleph_{\omega_1+1}} = \aleph_{\omega_1+3}$ and in which there is a jointly universal family of size $\aleph_{\omega_1+2}$ of graphs on $\aleph_{\omega_1+1}$.  The same technique works with any uncountable cardinal in place of $\omega_1$.  In the second half, Jacob proved a variety of results in the area of set-theoretic geology, including several instances of the downward-directed grounds hypothesis, with an analysis of the chain condition of the resulting ground models.

An equivalent formulation of the GCH

The continuum hypothesis CH is the assertion that the size of the power set of a countably infinite set $\aleph_0$ is the next larger cardinal $\aleph_1$, or in other words, that $2^{\aleph_0}=\aleph_1$. The generalized continuum hypothesis GCH makes this same assertion about all infinite cardinals, namely, that the power set of any infinite cardinal $\kappa$ has size the successor cardinal $\kappa^+$, or in other words, $2^\kappa=\kappa^+$.

Yesterday I received an email from Geoffrey Caveney, who proposed to me the following axiom, which I have given a name.   First, for any set $F$ of cardinals, define the $F$-restricted power set operation $P_F(Y)=\{X\subseteq Y\mid |X|\in F\}$ to consist of the subsets of $Y$ having a cardinality allowed by $F$.  The only cardinals of $F$ that matter are those that are at most the cardinality of $Y$.

The Alternative GCH is the assertion that for every cardinal number $\kappa$, there is a set $F$ of cardinals such that the $F$-restricted power set $P_F(\kappa)$ has size $\kappa^+$.

Caveney was excited about his axiom for three reasons. First, a big part of his motivation for considering the axiom was the observation that the equation $2^\kappa=\kappa^+$ is simply not correct for finite cardinals $\kappa$ (other than $0$ and $1$) — and this is why the GCH makes the assertion only for infinite cardinals $\kappa$ — whereas the alternative GCH axiom makes a uniform statement for all cardinals, including the finite cardinals, and it gets the right answer for the finite cardinals. Specifically, for any natural number $n$, we can let $F=\{0,1\}$, and then note that $n$ has exactly $n+1$ many subsets of size in $F$. Second, Caveney had also observed that the GCH implies his axiom, since as we just mentioned, it is true for the finite cardinals and for infinite $\kappa$ we can take $F=\{\kappa\}$, using the fact that every infinite cardinal $\kappa$ has $2^\kappa$ many subsets of size $\kappa$ (we are working in ZFC). Third, Caveney had noticed that his axiom implies the continuum hypothesis, since in the case that $\kappa=\aleph_0$, there would be a family $F$ for which $P_F(\aleph_0)$ has size $\aleph_1$. But since there are only countably many finite subsets of $\aleph_0$, it follows that $F$ must include $\aleph_0$ itself, and so this would mean that $\aleph_0$ has only $\aleph_1$ many infinite subsets, and this implies CH.

To my way of thinking, the natural question to consider was whether Caveney’s axiom was actually weaker than GCH or not. At first I noticed that the axiom implies $2^{\aleph_1}=\aleph_2$ and similarly $2^{\aleph_n}=\aleph_{n+1}$, getting us up to $\aleph_\omega$. Then, after a bit I noticed that we can push the argument through all the way.

Theorem. The alternative GCH is equivalent to the GCH.

Proof. We’ve already argued for the converse implication, so it remains only to show that the alternative GCH implies the GCH. Assume that the alternative GCH holds.

We prove the GCH by transfinite induction. For the anchor case, we’ve shown already above that the GCH holds at $\aleph_0$, that is, that CH holds. For the successor case, assume that the GCH holds at some $\delta$, so that $2^\delta=\delta^+$, and consider the case $\kappa=\delta^+$. By the alternative GCH, there is a family $F$ of cardinals such that $|P_F(\kappa)|=\kappa^+$. If every cardinal in $F$ is less than $\kappa$, then $P_F(\kappa)$ has size at most $\kappa^{<\kappa}=(\delta^+)^\delta=2^\delta=\delta^+=\kappa$, which is too small. So $\kappa$ itself must be in $F$, and from this it follows that $\kappa$ has at most $\kappa^+$ many subsets of size $\kappa$, which implies $2^\kappa=\kappa^+$. So the GCH holds at $\kappa$, and we’ve handled the successor case. For the limit case, suppose that $\kappa$ is a limit cardinal and the GCH holds below $\kappa$. So $\kappa$ is a strong limit cardinal. By the alternative GCH, there is a family $F$ of cardinals for which $P_F(\kappa)=\kappa^+$. It cannot be that all cardinals in $F$ are less than the cofinality of $\kappa$, since in this case all the subsets of $\kappa$ in $P_F(\kappa)$ would be bounded in $\kappa$, and so it would have size at most $\kappa$, since $\kappa$ is a strong limit. So there must be a cardinal $\mu$ in $F$ with $\newcommand\cof{\text{cof}}\cof(\kappa)\leq\mu\leq\kappa$. But in this case, it follows that $\kappa^\mu=\kappa^+$, and this implies $\kappa^{\cof(\kappa)}=\kappa^+$, since by König’s theorem it is always at least $\kappa^+$, and it cannot be bigger if $\kappa^\mu=\kappa^+$. Finally, since $\kappa$ is a strong limit cardinal, it follows easily that $2^\kappa=\kappa^{\cof(\kappa)}$, since every subset of $\kappa$ is determined by it’s initial segments, and hence by a $\cof(\kappa)$-sequence of bounded subsets of $\kappa$, of which there are only $\kappa$ many. So we have established that $2^\kappa=\kappa^+$ in the limit case, completing the induction. So we get all instances of the GCH.
QED

Same structure, different truths, Stanford University CSLI, May 2016

This will be a talk for the Workshop on Logic, Rationality, and Intelligent Interaction at the CSLI, Stanford University, May 27-28, 2016.

Abstract. To what extent does a structure determine its theory of truth? I shall discuss several surprising mathematical results illustrating senses in which it does not, for the satisfaction relation of first-order logic is less absolute than one might have expected. Two models of set theory, for example, can have exactly the same natural numbers and the same arithmetic structure $\langle\mathbb{N},+,\cdot,0,1,<\rangle$, yet disagree on what is true in this structure; they have the same arithmetic, but different theories of arithmetic truth; two models of set theory can have the same natural numbers and a computable linear order in common, yet disagree on whether it is a well-order; two models of set theory can have the same natural numbers and the same reals, yet disagree on projective truth; two models of set theory can have a rank initial segment of the universe $\langle V_\delta,{\in}\rangle$ in common, yet disagree about whether it is a model of ZFC. These theorems and others can be proved with elementary classical model-theoretic methods, which I shall explain. On the basis of these observations, Ruizhi Yang (Fudan University, Shanghai) and I argue that the definiteness of the theory of truth for a structure, even in the case of arithmetic, cannot be seen as arising solely from the definiteness of the structure itself in which that truth resides, but rather is a higher-order ontological commitment.

Slides | Main article: Satisfaction is not absolute | CSLI 2016 | Abstract at CSLI