This is a talk for the Logik Kolloquium at the University of Konstanz, spanning the departments of mathematics, philosophy, linguistics, and computer science. 19 July 2021 on Zoom. 15:15 CEST (2:15 pm BST).
Abstract: An enduring mystery in the foundations of mathematics is the observed phenomenon that our best and strongest mathematical theories seem to be linearly ordered and indeed well-ordered by consistency strength. For any two of the familiar large cardinal hypotheses, one of them generally proves the consistency of the other. Why should this be? Why should it be linear? Some philosophers see the phenomenon as significant for the philosophy of mathematics—it points us toward an ultimate mathematical truth. Meanwhile, the linearity phenomenon is not strictly true as mathematical fact, for we can prove that the hierarchy of consistency strength is actually ill-founded, densely ordered, and nonlinear. The counterexample statements and theories, however, are often dismissed as unnatural. Linearity is thus a phenomenon only for the so-called “naturally occurring” theories. But what counts as natural? Is there a mathematically meaningful account of naturality? In this talk, I shall criticize this notion of naturality, and attempt to undermine the linearity phenomenon by presenting a variety of natural hypotheses that reveal ill-foundedness, density, and incomparability in the hierarchy of consistency strength.
The talk should be generally accessible to university logic students.
Consider this fascinating vision of recursive chess — the etching below was created by Django Pinter, a tutorial student of mine who has just completed his degree in the PPL course here at Oxford, given to me as a parting gift at the conclusion of his studies. Django’s idea was to play chess, but in order for a capture to occur successfully on the board, as here with the black Queen attempting to capture the opposing white knight, the two pieces would themselves sit down for their own game of (recursive) chess. The idea was that the capture would be successful only in the event that the subgame was won. Notice in the image that not only is there a smaller recursive game of chess, but also a still tinier subrecursive game below that (if you inspect closely), while at the same time larger pieces loom in the background, indicating that the main board itself is already several levels deep in the recursion.
The recursive chess idea may seem clear enough initially, and intriguing. But with further reflection, we might wonder: how does it work, exactly? What precisely is the game of recursive chess? This is my question here.
My goal with this post is to open a discussion about what ultimately should be the precise rules and operations of recursive chess. I’m not completely sure what the best rule set will be, although I do offer several proposals here. I welcome further proposals, commentary, suggestions, and criticism about how to proceed. Once we settle upon a best or most natural version of the game, then I shall hope perhaps to prove things about it. Will you help me decide what is the game of recursive chess?
Let me describe several natural proposals:
Naïve recursion. Although this seems initially to be a simple, sound proposal, ultimately I find it problematic. The naïve idea is that when one piece wants to capture another in the game at hand, then the two pieces themselves play a game of chess, starting from the usual chess starting position. I would find it natural that the attacking piece should play as white in this game, going first, and if that player wins the subgame, then the capture in the current game is successful. If the subgame is a loss, then the capture is unsuccessful.
There seem, however, to be a variety of ways to handle the losing subgame outcome, and since these will apply with several of the other proposals, let me record them here:
Failed-capture. If the subgame is lost, then the capture in the current game simply does not occur. Both pieces remain on their original squares, and the turn of play passes to the opponent. Notice that this will have serious effects in certain chess situations involving zugzwang, a position where a player has no good move — because if one of the available moves is a capture, then the player can aim to play badly in the subgame and thereby legally pass the turn of play to the opponent without having made any move. This situation will in effect cause the subgame players to attempt to lose, rather than win, which seems undesirable.
Failed-capture-with-penalty. If the subgame is lost, then the capture does not occur, but furthermore, the attacking piece is itself lost, removed from the board, and the turn of play passes to the opponent. In effect, under this rule, every attempt at capture is putting the life of the capturing piece at risk, which makes a certain amount of sense from a military point of view. I think perhaps this is a good rule.
Failed-capture-with-retry. If the subgame is lost, then the capture does not occur, but both pieces remain on their original squares, and the attacking player is allowed to proceed with another (different) move. Attempting the same attack from the same board position multiple times is subject to the three-fold repetition rule. This interpretation amounts in effect to the game play searching a large part of the game tree, exploring the possible capturing moves, but with the first successful option fixed as official. It invites manipulation by the opponent, who might play badly against a misguided capture attempt, causing it to be fixed as the official move.
Drawn subgame. A further complication arises from the fact that the subgame can itself be drawn, rather than won. Is this sufficient to cause the penalty or the retry? Or does this count as a failed capture?
As I see it, however, the principal problem with the naïve recursion rule is that it seems to be ill-founded. After all, we can have a game with a capture, which leads to a subgame with a capture, which leads to a deeper subgame with a capture, and so on descending forever. How is the outcome determined in this infinitely descending situation? None of the subgames is ever resolved with a definite conclusion until all of them are, and there seems no coherent way to assign resolutions to them. All infinitely many subgames are simply left hanging mid-play, and indeed mid-move. For this reason, the naïve recursion idea seems ultimately incoherent to me as a game rule.
What we would seem to need instead is a well-founded recursion, one which would ultimately bottom out in a base case. With such a recursion, every outcome of the game would be well-defined. Such a well-founded recursion would be achieved, for example, if in every subgame there were strictly fewer pieces. Eventually, the subgames would reduce to king versus king, a drawn game, and then the drawn subgame rule would be invoked to whatever effect it causes. But the recursion would definitely terminate. And perhaps most recursions would terminate because the stronger player was ultimately mating in all his attacks, without requiring any invocation of the drawn subgame rule.
We can easily describe several natural subgame positions with one fewer piece. For example, when one piece attacks another, we may naturally consider the positions that would result if we performed the capture, or if we removed the attacking piece; and we might further consider swapping the roles of the players in these positions. Such cases would constitute a well-founded recursion, because the subgame position would have fewer pieces than the main position. In this way, we arrive at several natural recursion rules for recursive chess.
Proof-of-sufficiency recursion. The motivating idea of this recursion rule is that in order for an attack to be successful, the attacking player must prove that it will be sufficient for the attack to be successful. So, when piece A attacks piece B in the game, then a subgame is set up from the position that would result if A were successfully to capture B, and the players proceed in the game in which the attack has occurred. This is the same as proceeding in the main game, under the assumption that the attack was successful. If the attacking player can win this subgame, then it shows in a sense the sufficiency of the attack as a means to win, and so on the proof-of-sufficiency idea, we allow this as a reason for the attack to have been successful.
One might object that this recursion seems at first to amount to just playing chess as usual, since the subgame is the same as the original game with the attack having succeeded. But there is a subtle difference. For a misguided attack, the attacked player can play suboptimally in the subgame, intentionally losing that game, and then play differently in the main game. There is, of course, no obligation that the players respond the same way in the higher-level games as in the lower games, and this is all part of their strategic calculation.
Proof-of-necessity recursion. The motivating idea of this recursion rule, in contrast, is that in order for an attack to be successful, the attacking player must prove that it is necessary that the attack take place. When piece A attacks piece B in the main game, then a subgame is set up in which the attack has not succeeded; instead, the attacking piece is lost, and the color sides of the players are swapped. If a black Queen attacks a white knight, for example, then in the subgame position the queen is removed, and the players proceed from that position, but with the opposite colors. By winning this subgame, the attacking player is in effect demonstrating that if the attack were to fail, then it would be devastating for them. In other words, they are demonstrating the necessity of the success of the attack.
For both the proof-of-sufficiency and the proof-of-necessity versions of the recursion, it seems to me that any of the three failed-capture rules would be sensible. And so we have quite a few different interpretations here for what is the game of recursive chess; the sketch below illustrates one such combination.
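To fix ideas, here is a minimal Python sketch of one combination: the proof-of-sufficiency recursion together with the failed-capture-with-penalty rule. The `Position` interface and its methods (`is_over`, `player_to_move`, `is_capture`, `apply`, `remove_piece`, `winner`) are hypothetical placeholders for a real chess engine, and a strategy here is simply a function choosing a move for a player.

```python
# A sketch only: Position is a hypothetical immutable chess-position
# interface, not a real engine; strategies[player] picks a move.

def play(position, strategies):
    """Play recursive chess under the proof-of-sufficiency recursion
    with the failed-capture-with-penalty rule; return the winner."""
    while not position.is_over():
        mover = position.player_to_move()
        move = strategies[mover](position)
        if position.is_capture(move):
            # The subgame proceeds from the position in which the
            # capture has succeeded; it has one piece fewer, so the
            # recursion is well-founded and must terminate.
            subwinner = play(position.apply(move), strategies)
            if subwinner == mover:
                position = position.apply(move)  # capture confirmed
            else:
                # Lost (or drawn) subgame: the capture fails, the
                # attacking piece is removed as the penalty, and the
                # turn of play passes to the opponent.
                position = position.remove_piece(move.origin)
        else:
            position = position.apply(move)
    return position.winner()
```

Note that in this sketch a drawn subgame counts as a failed capture, which is one possible answer to the drawn subgame question above; counting a draw as success, or invoking the retry rule instead, would each be a small change to the else branch.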
What is your proposal? Please let me know in the comments. I am interested to hear any comments or criticism.
Abstract. Zermelo famously proved that second-order ZFC is quasi-categorical—the models of this theory are precisely the rank-initial segments of the set-theoretic universe cut off at an inaccessible cardinal. Which are the fully categorical extensions of this theory? This question gives rise to the notion of categorical large cardinals, and opens the door to several puzzling philosophical issues, such as the conflict between categoricity as a fundamental value in mathematics and reflection principles in set theory. (This is joint work with Robin Solberg, Oxford.)
This is an excerpt from my book-in-progress on diverse elementary topics in logic, from the chapter on model theory. My view is that Boolean-valued models should be elevated to the status of a standard core topic of elementary model theory, adjacent to the ultrapower construction. The theory of Boolean-valued models is both beautiful and accessible, both generalizing and explaining the ultrapower method, while also partaking of deeper philosophical issues concerning multi-valued truth.
Boolean-valued models
Let us extend our familiar model concept from classical predicate logic to the comparatively fantastic realm of multi-valued predicate logic. We seek a multi-valued-truth concept of model, with a domain of individuals of some kind and interpretations for the relations, functions, and constants from a first-order language, but in which the truths of the model are not necessarily black and white, but exhibit their truth values in a given multi-valued logic. Let us focus especially on the case of Boolean-valued models, using a complete Boolean algebra $\mathbb{B}$, since this case has been particularly well developed and successful; the set-theoretic method of forcing, for example, amounts to using $\mathbb{B}$-valued models of set theory with carefully chosen complete Boolean algebras $\mathbb{B}$ and has been used impressively to establish a sweeping set-theoretic independence phenomenon, showing that numerous set-theoretic principles such as the axiom of choice and the continuum hypothesis are independent of the other axioms of set theory.
The main idea of Boolean-valued models, however, is applicable with any theory, not just set theory, and there are Boolean-valued graphs, Boolean-valued orders, Boolean-valued groups, rings, and fields. So let us develop a little of this theory, keeping in mind that the basic construction and ideas also often work fruitfully with other multi-valued logics, not just Boolean algebras.
Definition of Boolean-valued models
We defined in an earlier section what it means to be a model in classical first-order predicate logic—one specifies the domain of the model, a set of individuals that will constitute the universe of the model over which all the quantifiers will range, and then provides interpretations on that domain for each relation symbol, function symbol, and constant symbol in the language. For each relation symbol $R$, for example, and any suitable tuple of individuals $\vec a$ from the domain, one specifies whether $R(\vec a)$ is to hold or fail in the model.
The main idea for defining what it means to be a model in multi-valued predicate logic is to replace these classical yes-or-no atomic truths with multi-valued atomic truth assertions. Specifically, for any Boolean algebra $\mathbb{B}$, a $\mathbb{B}$-valued model in the language $\mathcal{L}$ consists of a domain $M$, whose elements are called names, and an assignment of $\mathbb{B}$-valued truth values for all the simple atomic formulas using parameters from that domain:
$$[\![a=b]\!]\qquad[\![R(\vec a)]\!]\qquad[\![f(\vec a)=b]\!]$$
The double-bracketed expressions here denote an element of the Boolean algebra $\mathbb{B}$—one thinks of $[\![\varphi]\!]$ as the $\mathbb{B}$-truth value of $\varphi$ in the model, and the model is defined by specifying these values for the simple atomic formulas.
We shall insist in any Boolean-valued model that the atomic truth values obey the laws of equality:
$$[\![a=a]\!]=1\qquad[\![a=b]\!]=[\![b=a]\!]\qquad[\![a=b]\!]\wedge[\![b=c]\!]\leq[\![a=c]\!]$$
$$[\![\vec a=\vec b\,]\!]\wedge[\![R(\vec a)]\!]\leq[\![R(\vec b)]\!]$$
And if the language includes function symbols, then we also insist on the functionality axioms:
$$[\![\vec a=\vec b\,]\!]\wedge[\![f(\vec a)=c]\!]\leq[\![f(\vec b)=c]\!]\qquad\bigvee_{b\in M}[\![f(\vec a)=b]\!]=1\qquad[\![f(\vec a)=b]\!]\wedge[\![f(\vec a)=c]\!]\leq[\![b=c]\!]$$
These requirements assert respectively in the Boolean-valued context that the equality axiom holds for function application, that the function takes on some value, and that the function takes on at most one value. In effect, the Boolean-valued model treatment of functions regards them as defined by their graph relations, and these functionality axioms ensure that the relation has the features that would be expected of such a graph relation.
In summary, a $\mathbb{B}$-valued model is a domain of names together with $\mathbb{B}$-valued truth assignments for the simple atomic assertions about those names, which obey the equality and functionality axioms.
Boolean-valued equality
The nature of equality in the Boolean-valued setting offers an interesting departure from the usual classical treatment that is worth discussing explicitly. Namely, in classical predicate logic with equality, distinct elements of the domain are taken to represent distinct individuals in the model—the assertion $a=b$ is true in the model only when $a$ and $b$ are identical as elements of the domain. In the Boolean-valued setting, however, we want to relax this in order to allow equality assertions to have an intermediate Boolean value. For this reason, distinct elements of the domain can no longer be taken to represent definitely distinct individuals, since the equality assertion $[\![a=b]\!]$ might have some nonzero or intermediate degree of truth. The elements of the domain of a Boolean-valued model are allowed in effect a bit of indecision or indeterminacy about who they are and whether they might be identical. This is why we think of the elements of the domain as names for individuals, rather than as the individuals themselves. Two different names can have an intermediate truth-value possibility of naming the same or different individuals.
Extending to Boolean-valued truth
By definition, every $\mathbb{B}$-valued model provides $\mathbb{B}$-valued truth assertions for the simple atomic formulas. We now extend these Boolean-valued truth assignments to all assertions of the language through the following natural recursion:
$$[\![\varphi\wedge\psi]\!]=[\![\varphi]\!]\wedge[\![\psi]\!]$$
$$[\![\neg\varphi]\!]=\neg[\![\varphi]\!]$$
$$[\![\exists x\,\varphi(x)]\!]=\bigvee_{a\in M}[\![\varphi(a)]\!]$$
On the right-hand side, we make the logical calculations in each case using the operations of the Boolean algebra $\mathbb{B}$. In the existential case, the supremum may range over an infinite set of Boolean values, if the domain of the model is infinite, and so this supremum might not be realized as an element of $\mathbb{B}$, if it is not complete. This is precisely why one commonly considers $\mathbb{B}$-valued models especially in the case of a complete Boolean algebra $\mathbb{B}$—the completeness of $\mathbb{B}$ ensures that this supremum exists and so the recursion has a solution.
The reader may check by induction on $\varphi$ that the general equality axiom now has Boolean value one, that is, $[\![\vec a=\vec b\,]\!]\wedge[\![\varphi(\vec a)]\!]\leq[\![\varphi(\vec b)]\!]$ for every formula $\varphi$. The Boolean-valued structure $M$ is full if for every formula $\varphi(x,\vec y)$ in $\mathcal{L}$ and every $\vec a$ in $M$, there is some name $b\in M$ such that $[\![\exists x\,\varphi(x,\vec a)]\!]=[\![\varphi(b,\vec a)]\!]$. That is, a model is full when it is rich enough with names that the Boolean value of every existential statement is realized by a particular witness, reducing the infinitary supremum in the recursive definition to a single largest element.
A simple example
For every natural number $n\geq 3$ let $G_n$ be the graph on $n$ vertices $\{0,1,\ldots,n-1\}$ forming an $n$-cycle with edge relation $E_n$. So we have a triangle $G_3$, a square $G_4$, a pentagon $G_5$, and so on. Let $\mathbb{B}=P(\{3,4,5,\ldots\})$ be the power set of these indices, which forms a complete Boolean algebra. We define a $\mathbb{B}$-valued graph model as follows. We take as names the sequences $a=\langle a_n\rangle_{n\geq 3}$ with $a_n\in G_n$ for every $n$.
We define the equality and edge relations on these names as follows:
$$[\![a=b]\!]=\{\,n : a_n=b_n\,\}\qquad[\![a\mathrel{E}b]\!]=\{\,n : a_n\mathrel{E_n}b_n\,\}$$
Thus, two names are equal exactly to the extent that they have the same coordinates, and two names are connected by the edge relation exactly to the extent that this is true on the coordinates. It is now easy to verify the equality axioms, since $[\![a'\mathrel{E}b']\!]$ is true to at least the extent that $[\![a=a']\!]$, $[\![a\mathrel{E}b]\!]$, and $[\![b=b']\!]$ are true, since if $a_n=a'_n$, $a_n\mathrel{E_n}b_n$, and $b_n=b'_n$ in $G_n$, then also $a'_n\mathrel{E_n}b'_n$. So this is a $\mathbb{B}$-valued graph model.
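Before proceeding, it may be illuminating to compute a few of these Boolean values explicitly. Here is a small computational sketch of my own, restricted to the graphs $G_3$ through $G_6$ so that the Boolean algebra is the finite power set $P(\{3,4,5,6\})$; it is an illustration only, not part of the formal development.

```python
from itertools import product

# A finite fragment of the example: the cycle graphs G_n for n = 3..6,
# with Boolean algebra B = P({3,4,5,6}). Names are tuples choosing one
# vertex from each G_n; Boolean truth values are sets of indices.

N = range(3, 7)   # indices of the graphs G_3, G_4, G_5, G_6

def edge(n, u, v):
    """Adjacency in the n-cycle G_n on vertices 0, ..., n-1."""
    return (u - v) % n in (1, n - 1)

def eq_value(a, b):
    """[[ a = b ]]: the set of coordinates on which the names agree."""
    return {n for n in N if a[n - 3] == b[n - 3]}

def edge_value(a, b):
    """[[ a E b ]]: the set of coordinates on which the edge holds."""
    return {n for n in N if edge(n, a[n - 3], b[n - 3])}

a = (0, 0, 0, 0)   # the name choosing vertex 0 in every graph
b = (1, 1, 1, 2)   # adjacent to vertex 0 in G_3, G_4, G_5, but not G_6

print(eq_value(a, b))    # set(): these names are nowhere equal
print(edge_value(a, b))  # {3, 4, 5}: an edge exactly to that extent

# The quantifier recursion: [[ exists x (a E x) ]] is the supremum
# (here, the union) of [[ a E c ]] over all names c.
names = list(product(*[range(n) for n in N]))
exists_value = set().union(*(edge_value(a, c) for c in names))
print(exists_value)      # {3, 4, 5, 6}: every vertex has a neighbor
```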
So we have a Boolean-valued graph. Is it connected? Does that question make sense? Of course, we don’t have an actual graph here, but only a $\mathbb{B}$-valued graph, and so in principle we only know how to compute the Boolean value of statements that are expressible in the language of graph theory. Since connectivity is not formally expressible, except in the bounded finite cases, this question might not be sensible.
Let me argue that it is indeterminate whether this graph is a complete graph with every pair of distinct nodes connected by an edge. After all, $G_3$ is the complete graph on three vertices, and it will follow from the theorem below that the Boolean value of the completeness assertion $\forall x\,\forall y\,(x\neq y\implies x\mathrel{E}y)$ contains the element $3$ and therefore this Boolean value is not zero. Meanwhile, the assertion that the graph is not complete also gets a nonzero Boolean value, since every $G_n$ except $G_3$ has distinct nodes with no edge between them. In a robust sense, the graph-theoretic truths of the model combine the truths of all the various graphs $G_n$.
Note also that as $n$ increases, the graphs $G_n$ have nodes that are increasingly far apart. Fix any name $a$ and choose a name $b$ such that the distance between $a_n$ and $b_n$ in $G_n$ is not bounded by any particular uniform bound. In the $\mathbb{B}$-valued model, it follows that $a$ and $b$ have no particular definite finite distance, and this can be viewed as a sense in which the model is not connected.
A combination of many models
Let us flesh out the previous example with a more general analysis. Suppose we have a family of models $M_i$, for $i\in I$, all in a common language $\mathcal{L}$. Let $\mathbb{B}=P(I)$ be the power set of the set of indices $I$, a complete Boolean algebra. We define a $\mathbb{B}$-valued model $M$ as follows. The set of names will be precisely the product $\prod_{i\in I}M_i$ of the models, which is to say, the $I$-tuples $a=\langle a_i\rangle_{i\in I}$ with $a_i\in M_i$ for every $i\in I$, and the simple atomic truth values are defined like this:
$$[\![a=b]\!]=\{\,i\in I : a_i=b_i\,\}\qquad[\![R(\vec a)]\!]=\{\,i\in I : M_i\models R(\vec a_i)\,\}\qquad[\![f(\vec a)=b]\!]=\{\,i\in I : M_i\models f(\vec a_i)=b_i\,\}$$
One can now prove the equality and functionality axioms, and so this is a $\mathbb{B}$-valued model.
Theorem. The Boolean-valued model $M$ described above is full, and Boolean-valued truth can be computed coordinatewise:
$$[\![\varphi(\vec a)]\!]=\{\,i\in I : M_i\models\varphi(\vec a_i)\,\}$$
Proof. We prove this by induction on $\varphi$, for assertions using only simple atomic formulas, $\wedge$, $\neg$, and $\exists$. The simple atomic case is part of what it means to be a Boolean-valued model. If the theorem is true for $\varphi$, then it will be true for $\neg\varphi$, since negation in $\mathbb{B}$ is complementation in $P(I)$, as follows:
$$[\![\neg\varphi(\vec a)]\!]=\neg[\![\varphi(\vec a)]\!]=I\setminus\{\,i : M_i\models\varphi(\vec a_i)\,\}=\{\,i : M_i\models\neg\varphi(\vec a_i)\,\}$$
If the theorem is true for $\varphi$ and $\psi$, then it will be true for $\varphi\wedge\psi$ as follows:
$$[\![\varphi\wedge\psi]\!]=[\![\varphi]\!]\wedge[\![\psi]\!]=\{\,i : M_i\models\varphi\,\}\cap\{\,i : M_i\models\psi\,\}=\{\,i : M_i\models\varphi\wedge\psi\,\}$$
For the quantifier case $\exists x\,\varphi(x,\vec a)$, choose at each coordinate some $b_i\in M_i$ for which $M_i\models\varphi(b_i,\vec a_i)$, if possible. It follows that $[\![\varphi(b,\vec a)]\!]=\{\,i : M_i\models\exists x\,\varphi(x,\vec a_i)\,\}$, and from this it follows that $[\![\exists x\,\varphi(x,\vec a)]\!]=[\![\varphi(b,\vec a)]\!]$, since whenever some name $c$ succeeds as a witness on a coordinate, then so also will $b$. Consequently, the model is full, and we see that $[\![\exists x\,\varphi(x,\vec a)]\!]=\{\,i : M_i\models\exists x\,\varphi(x,\vec a_i)\,\}$, as desired.
Boolean ultrapowers
Every Boolean-valued model can be transformed into a classical model, a $2$-valued model, by means of a simple quotient construction. Suppose that $M$ is a $\mathbb{B}$-valued model for some Boolean algebra $\mathbb{B}$ and that $U\subseteq\mathbb{B}$ is an ultrafilter on the Boolean algebra.
Define an equivalence relation on the set of names by
$$a=_U b\quad\text{if and only if}\quad[\![a=b]\!]\in U.$$
The quotient structure $M/U$ will consist of the equivalence classes $[a]_U$ of names by this equivalence relation. We define the atomic truths of the quotient structure similarly by reference to whether these truths hold with a value in $U$. Namely,
$$M/U\models R([\vec a]_U)\quad\text{if and only if}\quad[\![R(\vec a)]\!]\in U.$$
For function symbols $f$, we define
$$f([\vec a]_U)=[b]_U\quad\text{if and only if}\quad[\![f(\vec a)=b]\!]\in U.$$
These definitions are well-defined modulo $=_U$ precisely because of the equality axiom properties of the Boolean-valued model $M$. For example, if $a=_U a'$, then $[\![a=a']\!]\in U$, but $[\![a=a']\!]\wedge[\![R(a)]\!]\leq[\![R(a')]\!]$ by the equality axiom, and so if $[\![R(a)]\!]$ is in $U$, then so will be $[\![R(a')]\!]$.
We defined atomic truth in the quotient structure by reference to the truth value being $U$-large. In fact, this reduction will continue for all truth assertions in the quotient structure, which we prove as follows.
Theorem. (Łoś theorem for Boolean ultrapowers) Suppose that $M$ is a full $\mathbb{B}$-valued model for a Boolean algebra $\mathbb{B}$, and that $U\subseteq\mathbb{B}$ is an ultrafilter. Then a statement is true in the Boolean quotient structure $M/U$ if and only if the Boolean value of the statement is in $U$:
$$M/U\models\varphi([\vec a]_U)\quad\text{if and only if}\quad[\![\varphi(\vec a)]\!]\in U$$
Proof. We prove this by induction on $\varphi$. The simple atomic case holds by the definition of the quotient structure. Boolean connectives $\wedge$, $\neg$ follow easily, using that $U$ is a filter and that it is an ultrafilter. Consider the quantifier case $\exists x\,\varphi(x,\vec a)$, where by induction the theorem holds for $\varphi$ itself. If the quotient structure satisfies the existential $\exists x\,\varphi(x,[\vec a]_U)$, then there is a name $b$ for which it satisfies $\varphi([b]_U,[\vec a]_U)$, and so by induction $[\![\varphi(b,\vec a)]\!]\in U$, in which case also $[\![\exists x\,\varphi(x,\vec a)]\!]\in U$, since this latter Boolean value is at least as great as the former. Conversely, if $[\![\exists x\,\varphi(x,\vec a)]\!]\in U$, then by fullness this existential Boolean value is realized by some name $b$, and so $[\![\varphi(b,\vec a)]\!]\in U$, which by induction implies that the quotient satisfies $\varphi([b]_U,[\vec a]_U)$ and consequently also $\exists x\,\varphi(x,[\vec a]_U)$, as desired.
Note how fullness was used in the existential case of the inductive argument.
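It is worth seeing what the theorem gives in the simplest case, namely, the product construction of the previous section with a principal ultrafilter $U_j=\{\,X\subseteq I : j\in X\,\}$. Since Boolean values in that model are computed coordinatewise, we get
$$M/U_j\models\varphi([\vec a]_{U_j})\quad\iff\quad[\![\varphi(\vec a)]\!]\in U_j\quad\iff\quad M_j\models\varphi(\vec a_j),$$
and so the quotient is simply a copy of the factor $M_j$. With a nonprincipal ultrafilter on $I$, in contrast, the very same construction produces the classical ultraproduct of the models $M_i$, which is one precise sense in which the Boolean-valued method generalizes the ultrapower construction.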
This will be a talk for the Department Colloquium of the Philosophy Department of the University of Notre Dame, 26 March 2021, 12 pm EDT (4 pm GMT).
Abstract: Potentialism is the view, originating in the classical dispute between actual and potential infinity, that one’s mathematical universe is never fully completed, but rather unfolds gradually as new parts of it increasingly come into existence or become accessible or known to us. Recent work emphasizes the modal aspect of potentialism, while decoupling it from arithmetic and from infinity: the essence of potentialism is about approximating a larger universe by means of universe fragments, an idea that applies to set-theoretic as well as arithmetic foundations. The modal language and perspective allow one precisely to distinguish various natural potentialist conceptions in the foundations of mathematics, whose exact modal validities are now known. Ultimately, this analysis suggests a refocusing of potentialism on the issue of convergent inevitability in comparison with radical branching. I shall defend the theses, first, that convergent potentialism is implicitly actualist, and second, that we should understand ultrafinitism in modal terms as a form of potentialism, one with surprising parallels to the case of arithmetic potentialism.
This will be an event for the -Math Reading Group at the Institute for Logic, Language, and Computation (ILLC) at the University of Amsterdam, 19 March 2021 6pm CET (5pm GMT). Zoom access here.
I shall make a brief presentation of the overall contents of the book, including a discussion of my perspective on the subject, and then get into some of the detailed issues with which the book engages. After this, we shall open up for discussion and comments.
I was so glad to be involved with this project of Hannah Hoffman. She had inquired on Twitter whether mathematicians could provide a proof of the irrationality of root two that rhymes. I set off immediately, of course, to answer the challenge. My wife Barbara Gail Montero and our daughter Hypatia and I spent a day thinking, writing, revising, rewriting, rethinking, rewriting, and eventually we had a set of lyrics providing the proof, in rhyme and meter. We had wanted specifically to highlight not only the logic of the proof, but also to tell the fateful story of Hippasus, credited with the discovery.
Hannah proceeded to create the amazing musical version:
The diagonal of a square is incommensurable with its side
an astounding fact the Pythagoreans did hide
but Hippasus rebelled and spoke the truth
making his point with irrefutable proof
it’s absurd to suppose that the root of two
is rational, namely, p over q
square both sides and you will see
that twice q squared is the square of p
since p squared is even, then p is as well
now, if p as 2k you alternately spell
2q squared will to 4k squared equate
revealing, when halved, q’s even fate
thus, root two as fraction, p over q
must have numerator and denominator with factors of two
to lowest terms, therefore, it can’t be reduced
root two is irrational, Hippasus deduced
as retribution for revealing this irrationality
Hippasus, it is said, was drowned in the sea
but his proof lives on for the whole world to admire
a truth of elegance that will ever inspire.
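For anyone who would like to see the argument of the lyric laid out in plain symbols, here is the proof it versifies. Suppose toward contradiction that $\sqrt2=p/q$ for some integers $p$ and $q$. Squaring both sides and clearing the denominator yields
$$2q^2=p^2,$$
so $p^2$ is even and hence $p$ is even, say $p=2k$. Substituting gives $2q^2=4k^2$, and halving, $q^2=2k^2$, so $q$ also is even. Thus any such fraction $p/q$ has numerator and denominator both divisible by two and so can never be in lowest terms, which is impossible for a rational number. Hence $\sqrt2$ is irrational, just as Hippasus deduced.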
This was a talk for the Kobe Set Theory Workshop, held on the occasion of Sakaé Fuchino’s retirement, 9-11 March 2021.
Abstract. I shall discuss senses in which set-theoretic forcing can be seen as a computational process on the models of set theory. Given an oracle for the atomic or elementary diagram of a model of set theory $\langle M,\in^M\rangle$, for example, one may in various senses compute $M$-generic filters $G\subseteq\mathbb{P}\in M$ and the corresponding forcing extensions $M[G]$. Meanwhile, no such computational process is functorial, for there must always be isomorphic alternative presentations of the same model of set theory $M$ that lead by the computational process to non-isomorphic forcing extensions $M[G]\not\cong M[G']$. Indeed, there is no Borel function providing generic filters that is functorial in this sense.
This is joint work with Russell Miller and Kameryn Williams.
Abstract. The principle of open determinacy for class games — two-player games of perfect information with plays of length ω, where the moves are chosen from a possibly proper class, such as games on the ordinals — is not provable in Zermelo-Fraenkel set theory ZFC or Gödel-Bernays set theory GBC, if these theories are consistent, because provably in ZFC there is a definable open proper class game with no definable winning strategy. In fact, the principle of open determinacy and even merely clopen determinacy for class games implies Con(ZFC) and iterated instances Con(Con(ZFC)) and more, because it implies that there is a satisfaction class for first-order truth, and indeed a transfinite tower of truth predicates for iterated truth-about-truth, relative to any class parameter. This is perhaps explained, in light of the Tarskian recursive definition of truth, by the more general fact that the principle of clopen determinacy is exactly equivalent over GBC to the principle of elementary transfinite recursion ETR over well-founded class relations. Meanwhile, the principle of open determinacy for class games is strictly stronger, although it is provable in the stronger theory GBC + $\Pi^1_1$-comprehension, a proper fragment of Kelley-Morse set theory KM.
This is currently a draft version only of my article-in-progress on the topic of linearity in the hierarchy of consistency strength, especially with large cardinals. Comments are very welcome, since I am still writing the article. Please kindly send me comments by email or just post here.
This article will be the basis of the Weeks 7 & 8 discussion in the Graduate Philosophy of Logic seminar I am currently running with Volker Halbach at Oxford in Hilary term 2021.
I present instances of nonlinearity and illfoundedness in the hierarchy of large cardinal consistency strength—as natural or as nearly natural as I can make them—and consider philosophical aspects of the question of naturality with regard to this phenomenon.
It is a mystery often mentioned in the foundations of mathematics, a fundamental phenomenon to be explained, that our best and strongest mathematical theories seem to be linearly ordered and indeed well-ordered by consistency strength. Given any two of the familiar large cardinal hypotheses, for example, generally one of them will prove the consistency of the other.
Why should it be linear? Why should the large cardinal notions line up like this, when they often arise from completely different mathematical matters? Measurable cardinals arise from set-theoretic issues in measure theory; Ramsey cardinals generalize ideas in graph coloring combinatorics; compact cardinals arise with compactness properties of infinitary logic. Why should these disparate considerations lead to principles that are linearly related by direct implication and consistency strength?
The phenomenon is viewed by many in the philosophy of mathematics as significant in our quest for mathematical truth. In light of Gödel incompleteness, after all, we must eternally seek to strengthen even our best and strongest theories. Is the linear hierarchy of consistency strength directing us along the elusive path, the “one road upward” as John Steel describes it, toward the final, ultimate mathematical truth? That is the tantalizing possibility.
Meanwhile, we do know as a purely formal matter that the hierarchy of consistency strength is not actually well-ordered—it is ill-founded, densely ordered, and nonlinear. The statements usually used to illustrate these features, however, are weird self-referential assertions constructed in the Gödelian manner via the fixed-point lemma—logic-game trickery, often dismissed as unnatural.
Many set theorists claim that amongst the natural assertions, consistency strengths remain linearly ordered and indeed well ordered. H. Friedman refers to “the apparent comparability of naturally occurring logical strengths as one of the great mysteries of [the foundations of mathematics].” Andrés Caicedo says,
It is a remarkable empirical phenomenon that we indeed have comparability for natural theories. We expect this to always be the case, and a significant amount of work in inner model theory is guided by this belief.
Stephen G. Simpson writes:
It is striking that a great many foundational theories are linearly ordered by <. Of course it is possible to construct pairs of artificial theories which are incomparable under <. However, this is not the case for the “natural” or non-artificial theories which are usually regarded as significant in the foundations of mathematics. The problem of explaining this observed regularity is a challenge for future foundational research.
John Steel writes “The large cardinal hypotheses [the ones we know] are themselves wellordered by consistency strength,” and he formulates what he calls the “vague conjecture” asserting that
If $T$ is a natural extension of ZFC, then there is an extension $H$ axiomatized by large cardinal hypotheses such that $T\equiv_{\text{Con}}H$. Moreover, $\leq_{\text{Con}}$ is a prewellorder of the natural extensions of ZFC. In particular, if $T$ and $U$ are natural extensions of ZFC, then either $T\leq_{\text{Con}}U$ or $U\leq_{\text{Con}}T$.
Peter Koellner writes
Remarkably, it turns out that when one restricts to those theories that “arise in nature” the interpretability ordering is quite simple: There are no descending chains and there are no incomparable elements—the interpretability ordering on theories that “arise in nature” is a wellordering.
Let me refer to this position as the natural linearity position, the assertion that all natural assertions of mathematics are linearly ordered by consistency strength. The strong form of the position, asserted by some of those whom I have cited above, asserts that the natural assertions of mathematics are indeed well-ordered by consistency strength. By all accounts, this view appears to be widely held in large cardinal set theory and the philosophy of set theory.
Despite the popularity of this position, I should like in this article to explore the contrary view and directly to challenge the natural linearity position.
Main Question. Can we find natural instances of nonlinearity and illfoundedness in the hierarchy of consistency strength?
I shall try my best.
You have to download the article to see my candidates for natural instances of nonlinearity in the hierarchy of large cardinal consistency strength, but I can tease you a little by mentioning that there are various cautious enumerations of the ZFC axioms which actually succeed in enumerating all the ZFC axioms, but with a strictly weaker consistency strength than the usual (incautious) enumeration. And similarly there are various cautious versions of the large cardinal hypothesis, which are natural, but also incomparable in consistency strength.
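To give a rough sense of what such caution can look like, here is a schematic sketch of my own, offered only as an illustration of the general idea rather than as the precise constructions of the article; the helper functions `zfc_axiom` and `refutable_within` are hypothetical.

```python
# A schematic sketch only -- the article's actual cautious enumerations
# may differ in important details. Hypothetical helpers:
#   zfc_axiom(n): the n-th ZFC axiom in some standard enumeration.
#   refutable_within(axioms, n): whether a contradiction is provable
#     from `axioms` by a proof of length at most n (decidable, since
#     only finitely many proofs are that short).

def cautious_zfc(zfc_axiom, refutable_within):
    """Yield ZFC axioms, but cautiously: before emitting the n-th
    axiom, check that the axioms emitted so far admit no refutation
    of length at most n. If ZFC is consistent, the check never fails
    and every axiom eventually appears, yet the theory given by this
    enumeration can behave differently in consistency strength."""
    emitted = []
    n = 0
    while True:
        if refutable_within(emitted, n):
            return            # caution: stop enumerating
        axiom = zfc_axiom(n)
        emitted.append(axiom)
        yield axiom
        n += 1
```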
(Please note that it was Uri Andrews, rather than Uri Abraham, who settled question 16 with the result of theorem 17. I have corrected this from an earlier draft.)
This brief unpublished note (11 pages) contains an overview of the Gödel fixed-point lemma, along with several generalizations and applications, written for use in the Week 3 lecture of the Graduate Philosophy of Logic seminar that I co-taught with Volker Halbach at Oxford in Hilary term 2021. The theme of the seminar was self-reference, truth, and consistency strengths, and in this lecture we discussed the nature of Gödel’s fixed-point lemma and generalizations, with various applications in logic.
- Gödel’s fixed-point lemma
  - An application to the Gödel incompleteness theorem
- Finite self-referential schemes
  - An application to nonindependent disjunctions of independent sentences
- Gödel-Carnap fixed-point lemma
  - Deriving the double fixed-point lemma as a consequence
  - An application to the provability version of Yablo’s paradox
- Kleene recursion theorem
  - An application involving computable numbers
  - An application involving the universal algorithm
  - An application to Quine programs and Ouroborous chains
This is a graduate seminar in the Philosophy of Logic at the University of Oxford, run jointly by myself and Volker Halbach in Hilary Term 2021.
The theme will be self-reference, truth, and the hierarchy of consistency strength.
A detailed schedule, including the list of topics and readings, is available on Volker’s web site.
The seminar will be held Fridays 9-11 am during term, online via Zoom at 812 2300 3837.
The final two sessions of term will be specifically on the hierarchy of consistency strength, based on my current article in progress concerning the possibility of natural instances of incomparability and ill-foundedness in the hierarchy of large cardinal consistency strength.
This will be a talk for a new mathematical logic seminar at the University of Warsaw in the Department of Philosophy, entitled Epistemic and Semantic Commitments of Foundational Theories, devoted to formal truth theories and implicit commitments of foundational theories as well as their conceptual surroundings.
Abstract. According to the math tea argument, perhaps heard at a good afternoon tea, there must be some real numbers that we can neither describe nor define, since there are uncountably many real numbers, but only countably many definitions. Is it correct? In this talk, I shall discuss the phenomenon of pointwise definable structures in mathematics, structures in which every object has a property that only it exhibits. A mathematical structure is Leibnizian, in contrast, if any pair of distinct objects in it exhibit different properties. Is there a Leibnizian structure with no definable elements? We shall discuss many interesting elementary examples, eventually working up to the proof that every countable model of set theory has a pointwise definable extension, in which every mathematical object is definable.
The talk will be held online via Zoom ID: 998 6013 7362.
Abstract. It is a mystery often mentioned in the foundations of mathematics that our best and strongest mathematical theories seem to be linearly ordered and indeed well-ordered by consistency strength. Given any two of the familiar large cardinal hypotheses, for example, generally one of them proves the consistency of the other. Why should this be? The phenomenon is seen as significant for the philosophy of mathematics, perhaps pointing us toward the ultimately correct mathematical theories. And yet, we know as a purely formal matter that the hierarchy of consistency strength is not well-ordered. It is ill-founded, densely ordered, and nonlinear. The statements usually used to illustrate these features are often dismissed as unnatural or as Gödelian trickery. In this talk, I aim to overcome that criticism—as well as I am able to—by presenting a variety of natural hypotheses that reveal ill-foundedness in consistency strength, density in the hierarchy of consistency strength, and incomparability in consistency strength.
The talk should be generally accessible to university logic students, requiring little beyond familiarity with the incompleteness theorem and some elementary ideas from computability theory.