# A brief history of meta-mathematics

Page last updated 14 Mar 2023

Meta-mathematics can be considered the subject whose fundamental characteristic is the analysis of a mathematical system from *outside* of the system - by making a logical argument outside of the system, as opposed to making a logical argument *within* the system itself according to that system’s own grammar and rules. The language outside of the system is called the “*meta-language*”, and the language of the system that the meta-language refers to is called the “*sub-language*” or the “*object-language*”. As such, it should be obvious that when practicing meta-mathematics, one must be scrupulously careful to distinguish between statements made *outside* the system and statements made *within* the system. But, remarkably, this simple admonition is honored as often in the breach as in the observance.

Meta-mathematics only became a subject in its own right towards the end of the 19^{th} century and in the first part of the 20^{th} century, initially promoted in large part by David Hilbert. In 1898-1899 Hilbert gave a series of lectures on geometry (Footnote:
The lectures were subsequently published as:

David Hilbert, “Grundlagen der Geometrie”, Leipzig, Teubner, 1899.

There is an English translation by E. J. Townsend, *The Foundations of Geometry*, Open Court, 1902, available online as a PDF.)
which emphasized the need for geometry to have a logical axiomatic basis that would eliminate any hidden reliance on intuition, since it was evident that intuition can sometimes be wrong. These lectures were the launching pad from which the subject area of meta-mathematics developed, where the fundamental characteristic is talking about a given mathematical system from outside of the system rather than making statements within it. By reducing the essential content of any branch of mathematics to a well-defined axiomatic system consisting of a few well-defined axioms and rules, any proof *within* such a system then becomes a purely mechanical process where human intuition plays no part. This aspect of axiomatic systems was rather disturbing to most mathematicians of the time, who wanted to believe that humans were inherently superior to machines, and that human intuition and insight could never be replicated by any machine. (Footnote:
It is interesting to note the similarity to the case of Garry Kasparov and Deep Blue in 1997; prior to his defeat by Deep Blue in a six-game chess match, Kasparov was convinced that humans would always be superior to machines. After his defeat, he was forced to change his viewpoint, and he admitted that he saw in Deep Blue’s playing aspects that were at least comparable to human creativity and intelligence in the field of chess - demonstrating that his views prior to his defeat were based on emotional hunches rather than any logical basis.)

## König and enumerations of real numbers

We now digress for a moment, to 1905, when Julius König, who was thinking along a quite different tack, reasoned that if you consider all well-formed mathematical definitions, then it should be possible to enumerate all such definitions. (Footnote:
König raised this matter in a paper entitled *Über die Grundlagen der Mengenlehre und das Kontinuumproblem*, Mathematische Annalen 61 (1905) 156-160. For an online English translation, see On the foundations of set theory and the continuum problem.) And in fact, there is no difficulty in principle in creating a dictionary-style lexicographical enumeration of all possible sequences of a given well-defined collection of symbols. Such an enumeration is a function *f* (*n*) with one free variable *n* whose domain is the natural numbers.
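Such a dictionary-style (shortlex) enumeration is easy to sketch concretely. The following Python fragment is merely illustrative (the two-symbol alphabet and the name `f` are assumptions for the sketch, not anything from König’s paper): it defines a function *f* (*n*) that returns the *n*-th finite sequence of symbols, ordered first by length and then dictionary-style within each length.

```python
from itertools import count, product

# A hypothetical two-symbol alphabet, listed in its dictionary order;
# any finite, well-defined alphabet works the same way.
ALPHABET = ["0", "1"]

def f(n):
    """Return the n-th sequence (n >= 1) in shortlex order:
    sequences ordered first by length, then dictionary-style within a length."""
    for length in count(1):
        block = len(ALPHABET) ** length   # number of sequences of this length
        if n <= block:
            # itertools.product yields tuples in dictionary order
            return "".join(list(product(ALPHABET, repeat=length))[n - 1])
        n -= block

print([f(n) for n in range(1, 7)])  # → ['0', '1', '00', '01', '10', '11']
```

Every finite sequence over the alphabet appears at exactly one position, which is precisely what makes such an enumeration a function of one natural-number variable.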

König then reasoned that since Cantor’s diagonal proof shows that there is no enumeration function that can enumerate all real numbers, while it would seem that all definable numbers can be enumerated, there must “exist” real numbers that cannot be defined - what are now called “indefinable numbers”. König, like most mathematicians of the time, was a Platonist, and he had no difficulty with the notion that there could be numbers that are “indefinable” but somehow “exist” - and, indeed, according to the Platonist viewpoint, must always have “existed” independently for all time, including prior to any human definition.
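The diagonal construction that König relied on can likewise be sketched in a few lines of Python (the names `diagonal` and `listed`, and the digit-tweaking rule, are illustrative assumptions): given any purported enumeration of infinite digit sequences, it produces a sequence that differs from the *i*-th listed sequence at its *i*-th digit, so the new sequence cannot appear anywhere in the list.

```python
def diagonal(enumeration, k):
    """Return the first k digits of a sequence that differs from the i-th
    enumerated sequence at its i-th digit (each digit is changed mod 10)."""
    return [(enumeration(i)(i) + 1) % 10 for i in range(k)]

# A hypothetical enumeration: the i-th listed sequence has every digit i % 10.
listed = lambda i: (lambda pos: i % 10)

d = diagonal(listed, 5)
print(d)  # → [1, 2, 3, 4, 5]
# The constructed sequence differs from every listed sequence at one digit:
assert all(d[i] != listed(i)(i) for i in range(5))
```

The sketch only shows the mechanics of the construction; the questions about which language system the enumeration and the diagonal function are defined in are exactly the questions discussed in the text.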

Because of this mindset, König did not consider it necessary to analyze in depth the implications of his argument and its implicit assumptions. One of these assumptions was that there was no need to consider whether different levels of language were a crucial aspect of the argument. But in fact, König’s lexicographical enumeration function actually *introduces* the notion of a meta-language, since such an enumeration can **only** be defined in a language that is a meta-language to the language whose alphabetical sequences are being enumerated. (Footnote: This is easily shown by considering that any lexicographical enumeration function requires at least two variables that cannot be part of the alphabet of the language system being enumerated: a variable whose domain is the symbols of the alphabet, and a variable whose domain is sequences of symbols of the alphabet. Neither of these variables can be a symbol or a sequence of symbols of the alphabet of the system being enumerated, since such symbols/sequences of symbols would then be at the same time both variables and non-variables. This is covered in full detail at Non-Diagonal Proofs: Enumerations in different language systems.)

As such, König’s Platonist beliefs served to deflect him from any realization that his arguments involved his hidden assumption that different levels of language were irrelevant. And since most mathematicians of the time were also Platonists, his argument became generally accepted as it was, and with the passing of years the result of that assumption - that “indefinable” real numbers must “exist” - has now become unchallengeable dogma.

Indeed, the history of meta-mathematics contains at its core a remarkable irony: from the outset it failed to apply any rigorous requirements to the field itself, and it is even more remarkable that this situation continues to this day. For quite some time, every field of science has rejected the notion that one might base a theory on an assumption that certain things, although they do not exist in any physical way, might “exist” in some non-physical realm even though there is no evidential or logical basis for such “existence”. But mathematics remains the odd man out, still clinging to its Platonist past and the belief in the independent non-physical “existence” of things referred to by mathematical statements. See also Platonism, The Myths of Platonism, Platonism’s Logical Blunder, Moderate Platonism, Descartes’ Platonism and Numbers, chairs and unicorns.

## The early years of the 20^{th} century

At this point, let us consider the overall development of meta-mathematics in the first 30 years or so of the 20^{th} century. During that time, one of David Hilbert’s concerns was how one might prove that a given set of axioms is consistent, that is, how one could prove that a given system of axioms could never result in a contradiction. Clearly, if a system could actually produce a proof that it itself is consistent, that would raise the question as to whether some axiom of the system is generating that result erroneously; that is, one would have to assume from the outset that the system is consistent in order to prove that it is consistent - which would render any such proof worthless. On the other hand, the suggestion that one might generate a proof in a meta-language that a given system is consistent simply pushes the problem to the meta-language, since one then has to assume that the meta-language itself is consistent.

However, despite the obvious impossibility of a non-circular consistency proof - one that involves no assumption which might unwittingly entail a hidden inconsistency - Hilbert still clung to the notion that the problem could be overcome, and the futile attempts by himself and others to do so continued.

### The fateful year of 1931

However, in 1931, all such attempts were abruptly terminated by a sudden blow from a completely unexpected direction. In the late 1920s, Kurt Gödel had been thinking about definitions of enumerations of all the sequences of the symbols of the alphabet of a specific formal mathematical system. (Footnote: This was an essential part of Gödel’s proof of incompleteness, which you can see online at English translation of Gödel’s original proof. The original paper was in German: Kurt Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I”, Monatshefte für Mathematik und Physik 38, no. 1 (1931): 173-198.)

Unlike König’s earlier lexicographical ideas, which were conceptual and only vaguely formulated, Gödel wanted to actually create a fully detailed definition of such an enumeration. Gödel knew that any such enumeration could not be expressed in that formal mathematical system itself, as explained above. Any such definition would be a function *f* (*n*) with one free variable *n* whose domain is the natural numbers. If this function could be an expression within the formal mathematical system itself, then the function would have to be one of the items in its own enumeration, that is, we would have *f* (*n*) = *f* (N) for some specific natural number N, implying that the variable *n* is the same as the non-variable N - an illogical absurdity. (Footnote:
See also Non-Diagonal Proofs: Enumerations in different language systems for a detailed analysis of enumerations both within and outside of a given language system.)

There was no obvious way in which one could devise a formal mathematical system to make a statement that makes an assertion about that statement itself. But Gödel thought that he could devise a way around this difficulty and trick the formal mathematical system into making a statement that makes an assertion about itself. He reasoned that it is easy in principle to define a function outside of the system that assigns a unique number to every statement of the formal mathematical system (as König had suggested), and in fact he defined such a function. His numbering system is now known as the Gödel numbering system, and the number it assigns to any given statement of the formal system is called its Gödel number. (Footnote:
Although Gödel’s numbering function does not enumerate the expressions of a formal system as a one-to-one correspondence where every natural number has a corresponding expression, there is no difficulty in principle in defining such an enumeration function from Gödel’s definitions that gives a one-to-one correspondence.)
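The flavor of such a numbering can be conveyed by a small Python sketch. This is not Gödel’s actual coding (his assignment of numbers to the symbols of his formal system was specific; the symbol codes below are hypothetical): a sequence of symbol codes c1, c2, c3, … is encoded as 2^c1 · 3^c2 · 5^c3 · …, and by the uniqueness of prime factorization the original sequence can be recovered from the resulting number.

```python
def primes():
    """Generate the primes 2, 3, 5, ... by simple trial division (a sketch)."""
    found, n = [], 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(codes):
    """Encode a sequence of symbol codes as a product of prime powers,
    in the style of Gödel's numbering: [c1, c2, ...] -> 2**c1 * 3**c2 * ..."""
    result, gen = 1, primes()
    for c in codes:
        result *= next(gen) ** c
    return result

# Hypothetical symbol coding: suppose '0' -> 1 and '=' -> 5, so the formula
# "0=0" corresponds to the code sequence [1, 5, 1]:
print(godel_number([1, 5, 1]))  # → 2430, i.e. 2**1 * 3**5 * 5**1
```

Note that this encoding function is defined *about* the formal system’s statements, not within them - which is exactly the meta-language distinction the text is concerned with.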
Gödel then figured that, given a relationship **A** between two statements of a formal system, one could make another relationship **B** between the Gödel numbers so that the two relationships **A** and **B** would correspond exactly, and one could always derive one from the other. And he also figured that once such a correspondence is established, then surely it must be possible to create a statement within the system that refers to its own Gödel number.

Now, there is no restriction on what numbers may occur in arithmetical assertions; that is to say, a number might appear in an arithmetical statement and simply by chance happen to be the value of a certain Gödel number, but where no correspondence to a statement of the formal system is to be implied by its occurrence. Gödel’s problem was therefore how to create an assertion about numbers so that the assertion was *specifically* making a statement about itself, rather than having a number simply happening to be in the assertion and happening to be the Gödel number of the assertion itself. (Footnote: For example, with natural numbers, given any number *a* in an expression, one can always manipulate that expression to refer to a number *b*: if *b* is greater than *a*, then there is another number *c* such that *a* = *c* - *b*, so you can substitute *c* - *b* for *a* in the expression without changing the meaning of the expression, but now *b* does appear in the expression. Similarly, if *b* is less than *a*, then there is a number *c* such that *a* = *c* + *b*.)
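The arithmetic in that footnote is easy to check mechanically. A small Python sketch (the function name `rewrite` is an illustrative assumption) performs the substitution described, producing a term equal in value to *a* in which the numeral *b* appears:

```python
def rewrite(a, b):
    """Return a term (as a string) equal in value to a, but in which the
    numeral b appears, per the substitution described in the footnote."""
    if b > a:
        c = a + b            # then a = c - b
        return f"({c} - {b})"
    else:
        c = a - b            # then a = c + b (also covers b == a, with c = 0)
        return f"({c} + {b})"

print(rewrite(3, 7))   # → (10 - 7), and indeed 3 = 10 - 7
print(rewrite(9, 4))   # → (5 + 4), and indeed 9 = 5 + 4
```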

One way would be to have the assertion include the Gödel numbering function within itself, so that it is specifically stating in effect: “this number is a Gödel number”, but that presents the problem that the Gödel numbering function cannot be expressed in the formal mathematical system itself (as explained earlier). But Gödel thought he could get around the difficulty by defining *another* function, which he called Z(*x*), where the variable *x* has a domain only of natural numbers, so that as long as only natural numbers are used to substitute for the free variable in both the Z function and the Gödel numbering function, he could claim that the Gödel numbering function and the Z function were equivalent in that context.

But at this point Gödel demonstrated a complete failure to understand the intricacies of meta-mathematics, since he completely ignored *any* consideration of what mathematical system the assumed equivalence of the two functions might exist in - he simply assumed that the “equivalence” must “exist”, perhaps in some Platonist sense completely independent of language. Gödel was so confident of his intuition that he considered that he did not need to provide any proof of equivalence, relying instead on an intuitive assumption of equivalence.

In fact, it’s easy to demonstrate that there can be no such equivalence, since the formats of any two independent mathematical systems are completely independent. Among other considerations, this means that the symbols used for numbers in any two mathematical systems are completely independent - they do not have to be identical, and if they happen to be identical, that does not indicate that the distinction between the different language systems has somehow vanished. But Gödel’s assumption of equivalence demands that the symbols used for numbers in the Gödel numbering function and the Z function *must* be identical - which is a logical absurdity. (Footnote:
For details, see Gödel’s flawed assumption of equivalence, and The Flaw in Gödel’s proof of his Incompleteness theorem.)

### Uncritical Acceptance

However, when Gödel’s 1931 paper suddenly appeared on the scene, it included two very unexpected results:

- A proof that a system could not prove itself consistent. (Footnote: This is an instance of how a result can be correct, even if the proof method itself is flawed.)
- A proof that every formal system includes assertions that, although they are true, cannot be proved true within that system. (Footnote: Here Gödel’s result is contradictory in itself, see Gödel’s contradiction.)

And although these results were completely unexpected, they were lapped up with uncritical acclaim for one very simple reason: they appeared to show that human intuitive reasoning could never be replaced by formal systems - that human intuitive reasoning was forever to be the king of mathematics. The mathematicians and logicians of the time were so enamored with this notion that they failed to subject Gödel’s paper to a strictly rigorous logical analysis, and ignored the fact that a crucial step of the proof isn’t actually proved - instead it relies on an intuitive assumption. Even more remarkable is that this assumption was not hidden, to be discovered at some later date; Gödel actually stated in the paper that he wasn’t going to bother proving it, since the assumed step seemed obvious and on that account wasn’t worth the rather lengthy proof it would require, as he remarked:

*“We are content here with indicating the proof of this proposition in outline, since it offers no difficulties in principle and is somewhat involved.”* (Footnote:
For the actual text in context, see Gödel’s Proposition V.)

In the years immediately following Gödel’s paper, its uncritical acceptance opened the floodgates: critical logical analysis was pushed aside in favor of producing a morass of exciting new results that were enabled by the uncritical inclusion of Gödel’s results. Over the next few years, a huge collection of unfounded assumptions was accepted without challenge, and meta-mathematics became a subject area where the lines of distinction between different levels of language became increasingly blurred instead of remaining as unbreachable bulwarks.

In the new field of meta-mathematics, although its primary objective was purportedly to make statements in one language system about statements in another language system, the need to maintain a clear delineation between different languages was ignored. Instead, assumptions were made that had no logical or evidential basis, and any difficulties were glossed over rather than confronted.

One of the worst offenders in this respect in the 1930s and 1940s was Kleene, who smoothed the path for the uncritical acceptance of illogical assumptions. An insight into Kleene’s approach can be seen from his book *“Introduction to metamathematics”*, which, despite its title, demonstrates no deep insight into the principles involved in meta-mathematics, instead making cursory assumptions to achieve various ends, where the desired ends override any detailed logical consideration of the delineation of different levels of language. Here is an extract which sums up his nonchalant attitude to rigor (my emphasis):

*“The meta-theory belongs to intuitive and informal mathematics (unless the meta-theory is itself formalized from a meta-meta-theory, which here we leave out of account). The meta-theory will be expressed in ordinary language, with mathematical symbols, such as metamathematical variables, introduced according to need. The assertions of the meta-theory must be understood. The deductions must carry conviction. They must proceed by intuitive inferences, and not, as the deductions in the formal theory, by applications of stated rules… We shall understand this to mean that the ultimate appeal to justify a metamathematical inference must be to the meaning and evidence rather than to any set of conventional rules. It will not prevent us in practice from systematizing our metamathematical results in theorems or rules, which can then be applied quasi-formally to abbreviate the intuitive reasoning. This is a familiar procedure in informal mathematics…”* (Footnote:
From Chapter 3, A Critique of Mathematical Reasoning, in the book “Introduction to metamathematics” by Stephen Cole Kleene, North-Holland, 1952.)

But the challenges of analyzing a formal system from outside of the system can never absolve one from a strict application of the fundamental rules of logic. Kleene, by failing to observe this admonition, shows how blurring the boundaries between intuition and strict logic leads to a shocking disregard for the fundamental principles of logic; see an analysis of two of his papers at PDF A Fundamental Flaw in Incompleteness Proofs by S. C. Kleene, and an analysis at Errors in incompleteness proofs by Kleene and Rogers of a proof in a textbook by Kleene. (Footnote: For a similar example from the same period, see Church’s “An Unsolvable Problem”.)

Since then, mathematicians and logicians have followed blindly down the same path. The result is that today, the notion that all formal systems that include a certain level of arithmetic include statements that can reference themselves has become a dogma that is not permitted to be questioned; this dogma is exemplified by the so-called “Diagonal Lemma”, which is used as the basis of many incompleteness proofs, yet it includes a very obvious conflation of levels of language - see the page The Diagonal Lemma.

## A Continuing Failure of Rigor

It is astonishing that uncovering the flaws in various academic articles and books that conflate different language systems is not at all difficult, yet mathematicians and logicians appear to be not only completely oblivious to the existence of these errors and untenable assumptions, but aggressively hostile to any suggestion that they exist. This may in part be due to the fact that so many paradoxes and contradictions were found within mathematics in the early years of the 20^{th} century that mathematicians, logicians and philosophers began to believe that such paradoxes and contradictions were part and parcel of mathematics. The result has been that over time the belief has become ingrained that these problems are inevitable, integral aspects of mathematics rather than what they actually are: indicators that something is inherently wrong with its foundations. The initial drive of the first part of the 20^{th} century to discover the root of the paradoxes and contradictions gradually faded out, and today the default mathematical mindset is that the myriad paradoxes, conundrums and contradictions that arise in certain parts of modern mathematics are an unavoidable intrinsic characteristic of it and must simply be accepted.

But in fact, as shown on this site, almost all of the conundrums, paradoxes and contradictions of today’s mathematics can be eliminated by a strict observance of the delineation between different language systems. (Footnote: Note that this also obliterates the notion that “indefinable” mathematical entities somehow independently “exist” in some non-physical sense.)

It may not happen within 10 years, or even 100, but at some point the errors and untenable assumptions must surely be seen for what they are. Someday a new generation of mathematicians and logicians will arrive who want to create a clean, consistent and rigorous mathematics free of the tortuous misunderstandings that arise from the untenable assumptions of previous generations, and who refuse to accept unquestioningly the proclamations of the past that were inspired more by mysticism and religion than by logic. And future generations of mathematicians and logicians will look back on the machinations of the current era with astonishment and horror, in the same way as we now look back aghast at the actions of humans in the past that were prompted by nonsensical superstitious beliefs.
