Joel David Hamkins, “How the continuum hypothesis could have been a fundamental axiom,” Journal for the Philosophy of Mathematics (2024), DOI:10.36253/jpm-2936, arXiv:2407.02463.
Abstract. I describe a simple historical thought experiment showing how we might have come to view the continuum hypothesis as a fundamental axiom, one necessary for mathematics, indispensable even for calculus.
See also this talk I gave on the topic at the University of Oslo:
I agree with Solovay that ~CH could also have been seen as fundamental, via an intuition for the existence of a countably additive real-valued measure.
But there’s an important difference. None of the alternative foundational schemes you treat in this paper buys us anything new in the arithmetical realm (or more generally in anything covered by Shoenfield absoluteness), unless you want to make it second-order and get a little bit further, to Grothendieck universes. But taking RVM as fundamental has far more in the way of concrete consequences, up to the consistency of measurables for arithmetical sentences, and it settles most of the well-known descriptive set theory questions left open by ZFC.
It’s also interesting to take this in the other direction and ask what alternative historical developments could have led us to adopt axioms inconsistent with ZFC. This is well-covered ground in the case of alternatives to AC, but those don’t get us anything concrete (Shoenfield absoluteness again). But what developments might have made ZF implausible?
Suppose the constructivists/intuitionists, like Brouwer, Weyl, and Heyting, win the philosophical war in the early twentieth century, and the law of excluded middle itself comes to be viewed with suspicion by mainstream mathematicians. People instead work in IZF plus dependent/countable choice or some similar set theory.
Thesis: no conceivably plausible alternative historical development of mathematics would contradict ZF about any *arithmetical* statements.
This seems reasonable, although it is an implicit commitment to Con(ZF), which some have doubted. For example, Silver conjectured that ZF is in fact inconsistent, and someone with that view would adopt, say, PA + not Con(ZF). We know that this theory is consistent relative to ZF, so one can’t really object to its basic coherence. Certainly it is plausible. And if one held a strong form of it, denying consistency at the level of $\Sigma_{1000}$-replacement, for example, then this would be an arithmetic statement directly contradicting ZF.
Why is it plausible? What is the best reason Silver and others could articulate for believing that ZF had an inconsistency?
More generally, what is the longest known interval between a theory being proved inconsistent and having first been seriously suspected of being inconsistent? There’s got to be some point where you just say “by now you should have been able to turn your intuitions into a proof”. ZF has been with us for a century now.
On the other hand, I am a fan of axioms that limit the set-theoretic universe. I don’t mind even assuming that there is no standard model, or, more specifically, that V=M (Shepherdson’s strongly constructible sets, built by a transfinite process that stops at Shepherdson’s minimal model if a standard model exists, but otherwise just yields L).
I think there’s a strong case to be made for “the only sets that exist are the ones that must exist”, for reasons of parsimony. But of course that principle doesn’t mean ZF is *inconsistent*.
Pingback: The continuum hypothesis could have been a fundamental axiom, CFORS Grad Conference, Oslo, June 2024 | Joel David Hamkins
Pingback: How the continuum hypothesis could have been a fundamental axiom, UC Irvine Logic & Philosophy of Science Colloquium, March 2024 | Joel David Hamkins
Can infinity have a measure or a degree of approximation? Or shall we leave it abstract, as mathematicians do?
Perhaps the consideration of demonstrability of the continuum hypothesis could have led mathematical sciences along a different path of development: https://doi.org/10.1007/s10699-022-09875-9
I agree with you. CH should have been taken as the most fundamental axiom in mathematics. For historical reasons (we arrived at CH through countable numbers), we have the present view. Countable numbers are arbitrary units. If we cut ‘one’ candy into a hundred pieces, we can say that we have a hundred candies; only the unit ‘one’ differs. The unit ‘one’ can be either extremely small or extremely large, always finite, but can never be infinitesimal or infinite. The unit is just an arbitrary finite piece of the continuum.
But in physics (my interest is in theoretical physics), the ‘quantum hypothesis’ is the most fundamental axiom (in my opinion). Matter is quantized, and so a universe made up of such units is finite. That is the fundamental difference between mathematics and physics.
Does ‘countably saturated’ mean that the set of hyperreals between any two real numbers has cardinality aleph-0?
No, it means that every countably defined gap is filled. That is, if $x_0\leq x_1\leq x_2\leq\cdots\leq y_2\leq y_1\leq y_0$ with $x_i<y_j$ for all $i$ and $j$, then there is some $z$ with $x_i<z<y_j$ for all $i$ and $j$.
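For instance (my example, not part of the comment above), taking the constant sequence $x_n=0$ below the sequence $y_n=1/n$ produces a countable gap that no real number fills, and countable saturation fills it with a positive infinitesimal:

```latex
% A countable gap in the ordered field: x_n = 0 and y_n = 1/n,
% so x_i < y_j for all i, j. Countable saturation yields
\[
\exists z \quad 0 < z < \tfrac{1}{n} \quad \text{for all } n \in \mathbb{N},
\]
% that is, a positive infinitesimal z filling this gap.
```

No real number satisfies these constraints, which is one way to see that a countably saturated ordered field must properly extend $\mathbb R$.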
Pingback: How we might have viewed the continuum hypothesis as a fundamental axiom necessary for mathematics, Oxford Phil Maths seminar, May 2025 | Joel David Hamkins
Hi Joel. Very interesting paper. I’d like to raise a possibly new objection.
Simply: What about the Levi-Civita field? https://en.wikipedia.org/wiki/Levi-Civita_field
This is defined as the field of formal power series $\sum_{q \in \mathbb{Q}} a_q \varepsilon^q$ with real coefficients $a_q$ and left-finite support. It is *not* countably saturated, and of course it is $\omega$-cofinal. However, it has several virtues, including being real-closed and (astonishingly, to me) useful in computational analysis (see the Wikipedia article and links inside).
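To make the arithmetic concrete, here is a minimal sketch (my illustration, not from the paper or the Wikipedia article) of Levi-Civita-style elements restricted to finitely many terms, represented as dictionaries from rational exponents of $\varepsilon$ to real coefficients; the helper names `lc_add` and `lc_mul` are my own:

```python
from fractions import Fraction

def lc_add(a, b):
    """Add two elements, each a dict {exponent: coefficient}."""
    out = dict(a)
    for q, c in b.items():
        out[q] = out.get(q, 0.0) + c
        if out[q] == 0.0:
            del out[q]  # drop cancelled terms
    return out

def lc_mul(a, b):
    """Multiply term by term: eps^p * eps^q = eps^(p+q)."""
    out = {}
    for p, c in a.items():
        for q, d in b.items():
            r = p + q
            out[r] = out.get(r, 0.0) + c * d
            if out[r] == 0.0:
                del out[r]
    return out

# eps is the infinitesimal (exponent 1); reals embed at exponent 0.
eps = {Fraction(1): 1.0}
x = {Fraction(0): 2.0, Fraction(1): 3.0}   # represents 2 + 3*eps
y = lc_mul(x, x)                           # (2 + 3*eps)^2 = 4 + 12*eps + 9*eps^2
```

The full field allows infinite left-finite support (needed, e.g., for inverses like $(1+\varepsilon)^{-1}$), which this finite-term sketch omits.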
Additionally, it is characterized by a second-order categorical theory. This just piggybacks on the categoricity of $\mathbb R$ and $\mathbb N$.
Now, I don’t know if it has all the advantages of a hyperreal field. But being real-closed, it at least is elementarily equivalent to $\mathbb R$. One can imagine that this relatively concrete field, if it were conceived by early analysts, might have sufficed for the infinitesimal applications they were after.
Levi-Civita is only one among many fields that extend the reals but are not given by an ultrapower. But it seems like a particularly elegant one.
Thanks for the comment, and I’m glad you like the paper. That field is concrete enough that one can imagine it coming very early, and perhaps giving substance to the infinitesimal ideas. For my argument, however, I don’t think I am obligated to show that the solution I describe is the only possible thing that could have happened (since it didn’t in fact happen that way in any case); rather, my task was only to argue that it *could* have happened in the way I describe in my thought experiment. And I think I have done that much, and so I have explained how it could have been that CH is viewed as a fundamental axiom, necessary for mathematics.
I see, thanks. I agree that you’ve successfully argued that the scenario you described *could have* played out that way. But my impression was that the claim was a bit stronger, that if the early analysts had been blessed with rigor about number systems, the hyperreals would have been the most natural thing to use, leading to CH as a way to secure categoricity. I think examples like Levi-Civita, given their simplicity (in hindsight, admittedly) mitigate the sense of inevitability of the alternate history.
I think it could be plausibly argued that the notion of countable saturation may have been too abstract for early analysts, and maybe the concreteness of Levi-Civita would have won more favor, if there had been a contest between them.
I agree that the real field is canonical, but not because it is unique up to isomorphism as a complete ordered field; rather, it is because it is unique up to unique isomorphism. One can characterize $\mathbb N$ with the successor function, and the field of rationals in a similar way, and also the complex field, if one takes the real field as a distinguished subfield and singles out $i$ as well (which is the basis of complex analysis).
$\text{No}$ and $\text{No}(\omega_1)$ are likewise canonical, provided they are construed as ordered fields with the extra binary relation of “simpler than,” which is at the root of constructing the surreals and defining functions on them. As to the qualifier “canonical,” it is heavily used by Bourbaki, usually (perhaps always) in connection with some universal property, where the universal objects constructed are unique up to unique isomorphism, not just unique up to isomorphism.
The usefulness of the field R* of hyperreals has nothing to do with R* being unique up to isomorphism (under CH), to my knowledge. And R* is certainly not unique up to unique isomorphism; in fact, R* properly contains an isomorphic copy of itself, which makes it hard to consider it ‘minimal’. Anyway, I do not view R* as canonical, even assuming CH. This is not to deny that R* and other nonstandard structures can be useful tools for studying more canonical structures.
Thank you, Lou, for your insightful comments, with which I mostly agree. Your comments make a lot of sense to me coming from someone with our current perspective on mathematics. However, I feel that you have not engaged with the historical-thought-experiment aspect of my argument. In the imaginary world I describe, the hyperreals R* were introduced early on as one of the fundamental number systems with which we become familiar, embedded into the core of mathematics and even calculus. In that world, the need for a categorical account of R* would be felt more keenly than you or I feel it in our actual mathematical world. So I find it unconvincing to object to the argument using only our current views on R*; rather, we must imagine how people would view R* when it is considered a fundamental structure, as in my imaginary world.
Meanwhile, let me also object a little to your insistence that categoricity doesn’t matter if the structure isn’t rigid (rigidity is equivalent to uniqueness of isomorphisms). There are numerous structures in mathematics that are considered canonical, but are not rigid and hence not unique up to unique isomorphisms, including: the complex field, the countable random graph, the Euclidean plane, the symmetric group of a given finite size, the integer order, equilateral triangles, and so on. Sometimes canonical structure comes with symmetry.
Thanks for the reaction, Joel. Let me address for now just the last part of it.
Categoricity in some infinite cardinal of a first-order theory (in a countable language) is clearly of interest, because of the strong combinatorial constraints it implies (finite boolean algebras of 0-definable sets in every arity in the case of omega-categoricity; omega-stability and much more for uncountable categoricity).
Anyway, I didn’t say that in the absence of rigidity categoricity doesn’t matter. In the case of omega-categoricity, these models typically also come with an interesting automorphism group, and the model together with this group acting on it certainly has a “canonical” feel to it. Anyway, the term “canonical” is ambiguous, I think, but I did point out one kind of use, by Bourbaki, where it does seem to come with rigidity. But mere categoricity under CH, under rather strong model-theoretic (not first-order) conditions and without rigidity, doesn’t strike me as “canonical.”
Another way for me to reply is that under CH, the unique smallest countably saturated real-closed field also admits your desired further structure of the simplicity relation, since in this case it is isomorphic to $\text{No}(\omega_1)$. That is, under CH the categorical structure I identify exhibits the extra structure you had wanted for canonicity, while also exhibiting full saturation (in the language of ordered fields). But without CH, the nature of $\text{No}(\omega_1)$ is much less satisfactory, since it will not be fully saturated (although it is still initial amongst the countably saturated RCFs), and there can be other fully saturated non-isomorphic structures of the same size.
This situation is something like the nature of the complex field, which has a huge automorphism group in the mere language of fields, but seen as the algebraic closure of the real field it has only complex conjugation, and seen as the complex plane (with coordinate structure) it is rigid.
Here are some comments on other parts of Hamkins’ essay. My view is that the scenario sketched in that paper is rather implausible. Also, if it had been realized, it might not have been a good development, in distracting from real applications by what seem to me irrelevant set-theoretic issues. One problem with nonstandard analysis is that there was too much hype around it in the beginning. So far the applications to mainstream math without significant other input have been rather modest. As a tool in the much more extensive toolkit of model theory and its applications, it certainly has its place. For example, the work of Hrushovski and Tao-Green-Breuillard on approximate groups is really significant, and uses in Hrushovski’s hands a lot more model theory than just nonstandard methods. Disagreeing with Gödel may be risky, but what he says about nonstandard analysis seems to me wrong. By the way, I am a great admirer of A. Robinson, but I think that his work around model completeness is at least as important as (probably more important than) his nonstandard analysis.