Chaitin’s Constant Error
Chaitin’s ‘constant’, often referred to as Chaitin’s Omega ‘number’, is a notion described by Gregory Chaitin, who works tirelessly at self-promotion (see the list of books and articles below). According to Chaitin, his Omega definition defines an irrational number in terms of an infinite sequence of the digits 0 and 1, where each digit is determined by whether a theoretical computer will stop or not stop when set to run a given program of a given computer language. (The Halting problem is the problem of determining, for any given computer program, whether a theoretical computer with unlimited memory will terminate, or never terminate, when it runs the program; see also The Halting Problem.)
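For readers who want to see it, the usual form of the definition is the ‘halting probability’ of a prefix-free universal machine U (a sketch of the standard formulation; the exact value depends on the choice of machine):

```latex
\Omega_U = \sum_{p \,:\, U(p)\ \mathrm{halts}} 2^{-|p|}
```

where |p| is the length of the program p in bits, and the sum ranges over all programs on which the machine U halts.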
The details of the Omega definition are not important here. It is sufficient to know that the definition is such that there is no method (even in principle) that can continue to determine, on the basis of the Omega definition, the digits of the binary expansion of the ‘number’ that is claimed to be given by the Omega definition (note that different computer languages will result in different sequences of 0s and 1s, but this does not affect the overall argument). Chaitin claims that his Omega number is “algorithmically irreducible”, (Footnote:
Definition of Algorithmically Irreducible as given in the book Information, Randomness & Incompleteness: Papers on Algorithmic Information by Gregory Chaitin, Vol. 8. World Scientific, 1990:
The main concept of algorithmic information theory is that of the program-size complexity or algorithmic information content of an object (usually just called its “complexity”). This is defined to be the size in bits of the shortest computer program that calculates the object, i.e., the size of its minimal algorithmic description. Note that we consider computer programs to be bit strings and we measure their size in bits. If the object being calculated is itself a finite string of bits, and its minimal description is no smaller than the string itself, then the bit string is said to be algorithmically incompressible, algorithmically irreducible, or algorithmically random. Such strings have the statistical properties that one would expect. For example, 0’s and 1’s must occur with nearly equal relative frequency; otherwise the bit string could be compressed. An infinite bit string is said to be algorithmically incompressible, algorithmically irreducible, or algorithmically random if all its initial segments are algorithmically random finite bit strings. ) and that because of that, this Omega is a tremendously important concept. He has continued to proclaim the importance and significance of this notion over the past 40 years (besides proclaiming his own importance for discovering this Omega number).
But, amazingly, despite his banging on and on relentlessly about this Omega number, (Footnote: Chaitin has published at least 20 articles and 5 books based on this Omega number (see the lists on this page) and has given numerous talks on it, many of them now on YouTube. He has also published many other articles, and given talks, that refer to the importance of his Omega number. ) Chaitin has never actually produced a valid proof that this Omega number is, in fact, algorithmically irreducible - as will be demonstrated below.
We can analyze Chaitin’s Omega number in two ways:
- We can assert that for any given computer program, the proposition “A theoretical computer with limitless memory will stop when set to run the specified program” has a definite truth value, even if we can never determine that value. This is the Platonist stance, and it is the stance that Chaitin adopts.
- We do not assume that the proposition “A theoretical computer with limitless memory will stop when set to run the specified program” necessarily has a truth value. This is the non-Platonist stance.
As will be shown below, it is actually immaterial which stance one takes when considering the question as to whether Chaitin has actually provided a valid proof that his Omega number is algorithmically irreducible. To show this, we will consider both stances, taking Chaitin’s Platonist stance first.
Case 1: The Platonist stance (Chaitin’s stance)
In taking the Platonist stance, we assume that for every computer program there is a “true” value as to whether it will stop or not stop, and so we assume that there is a definitive sequence of digits that corresponds to the Omega definition, even if we can never determine that sequence. However, the Omega definition does not actually prove that there cannot be a different definition of that precise sequence, one that makes no mention of computers or of whether they stop on certain computer programs. Given the Platonist stance, the Omega definition merely tells us that there is a number that cannot be determined from the Omega definition; it does not tell us that there cannot be any alternative definition of that number. Hence the Omega definition does not prove that there is no definition, as a formula of some mathematical system, that defines an algorithm generating the sequence of the digits of the number.
Of course, Chaitin claims that he has a proof that his Omega number is what he calls algorithmically irreducible. You can see his proof in the paper An Invitation to Algorithmic Information Theory, in the book Gödel’s Way, and in numerous other places in Chaitin’s output, since a lot of his output is repetitive (see the references below). The basis of his proof is that if it were possible to have some algorithm that could actually calculate the digits of the Omega number precisely, then we would have a method of solving the Halting problem; but the Halting problem is unsolvable, and therefore no such algorithm can exist. The obvious flaw in this proof is that it does not prove that there cannot be an algorithm that generates the digits of the Omega number but whose definition makes no reference to computer programs or to the halting of computer programs. In that case, there could be an algorithm that generates the digits of the Omega number, but we would not know that it was such an algorithm, and so we would not have a method of solving the Halting problem.
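The structure of Chaitin’s argument can be illustrated with a toy model. Here the ‘programs’ are a hand-picked prefix-free set of bit strings with hypothetical halting behavior (in the real argument one would dovetail simulations of all programs on a universal machine); knowing the exact Omega value lets the sketch decide halting:

```python
from fractions import Fraction

# Toy universe: finitely many "programs" (prefix-free bit strings) with
# a halting behavior we pretend not to know in advance. This hand-picked
# table stands in for dovetailed simulation on a universal machine.
HALTS = {"0": True, "10": False, "110": True, "111": False}  # hypothetical

# The "true" Omega for this toy universe: sum of 2^-|p| over halting p.
omega = sum(Fraction(1, 2 ** len(p)) for p, h in HALTS.items() if h)

def decides_halting(program, omega_value):
    """Given the exact value of Omega, decide whether `program` halts by
    enumerating halting programs until their total weight reaches Omega."""
    seen = Fraction(0)
    for p, h in HALTS.items():  # stands in for dovetailed simulation
        if h:
            seen += Fraction(1, 2 ** len(p))
            if p == program:
                return True     # program showed up among the halters
        if seen == omega_value:
            break               # all halting programs accounted for
    return False                # program can never halt

assert decides_halting("0", omega) is True
assert decides_halting("10", omega) is False
```

The flaw identified in the paragraph above is precisely that even if some algorithm produced the digits of `omega`, nothing in Chaitin’s proof shows that we could recognise it as doing so, and without that recognition this halting test cannot be carried out.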
Hence Chaitin, contrary to his grandiose claims, has not established that Omega is a random, algorithmically irreducible sequence (given the assumption that Omega ‘exists’ as a definitive sequence of digits) - in the sense that there is no possible finite algorithm that will generate its expansion - since his proof fails to prove that there can be no such algorithm.
Case 2: The non-Platonist stance
In taking the non-Platonist stance, we do not assume that there must be a truth value for the question as to whether a particular computer program will stop or not stop. In this case, Chaitin’s Omega definition does not in fact define a single specific sequence of digits, but merely a constrained set of such sequences that satisfy certain properties. For example, it can be the case that the sequences of such a set have their first n digits in common, so that the definition simply defines the set of all sequences that begin with those same n digits. In binary terms, this means that we can determine a certain number of the 0s and 1s of a sequence, but only up to a certain digit - if this known initial sequence is, for example, 0.1010010110, then the definition simply defines a set of values between 0.1010010110 and 0.1010010111.
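The arithmetic of that example can be checked directly: ten known binary digits pin down only an interval of width 2⁻¹⁰, not a single number (a sketch using exact rational arithmetic):

```python
from fractions import Fraction

bits = "1010010110"  # the ten known binary digits from the example above

# Value of the known prefix: sum of b_i * 2^-(i+1) over the digits.
lower = sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(bits))
# Any infinite sequence extending this prefix lies below lower + 2^-10,
# i.e. below 0.1010010111 in binary.
upper = lower + Fraction(1, 2 ** len(bits))

assert lower == Fraction(331, 512)    # 0.1010010110 in binary
assert upper == Fraction(663, 1024)   # 0.1010010111 in binary
```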
So if we do not assume that there must be a truth value for the question as to whether a particular computer program will stop or not stop, then Chaitin’s Omega definition does not define a single number and Chaitin’s implicit claim that it defines a single specific number is incorrect.
At first glance, it might appear that there must be a ‘true’ or ‘false’ value, since the usual assumption is that every computer program either stops or continues to run forever. But computer programs don’t stop or continue to run forever; it is the computers that run the programs that stop or continue to run. One should not fall into the trap of extending from a real physical computer to a hypothetical abstract notion while also assuming that all properties of the real computer carry over to that abstract notion. Real physical computers will always stop at some point in time – they will eventually fail – and when they might fail is unpredictable.
It is no coincidence that the programs for which it might not be straightforward to determine whether the computer they are set to run on will ‘stop’ or not ‘stop’ are those that involve a recursive loop in which at least one variable increases in size at each iteration; in such a program, the amount of memory required grows every time the loop iterates. Since there are infinitely many such programs, with loops whose variables grow without any fixed bound, no actual physical computer could run every such program, since doing so would require more than any fixed amount of memory. It follows that the notion that a real physical computer set to run a program either ‘stops’ or ‘does not stop’ cannot apply to every such program, since we can only ever have a computer with a limited amount of memory.
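The point about memory can be made concrete with a sketch of such a loop: a variable that grows at each iteration needs ever more bits to store, so no fixed-memory machine can run the unbounded version of the loop indefinitely (the function name and iteration counts here are illustrative):

```python
def bit_length_growth(iterations):
    """Simulate a loop in which a variable grows at every pass; return
    how many bits the variable needs after `iterations` steps."""
    n = 1
    for _ in range(iterations):
        n *= 2  # n doubles, so its representation grows by one bit
    return n.bit_length()

# The memory needed grows without bound as the loop runs on: the
# unbounded version of this loop exceeds any fixed amount of memory.
assert bit_length_growth(10) == 11
assert bit_length_growth(1000) == 1001
```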
It follows that when an actual physical computer is set to run a program, it is not the case that the computer either ‘stops’ or ‘does not stop’; it is essentially the case that it:
- runs the program correctly and stops, or
- runs out of memory when it is set to run the program, or
- runs the program correctly, does not run out of memory, and does not stop (in this case it is stuck in a repetitive loop that does not require increasing amounts of memory).
Of course, it is obvious that Chaitin’s description does not depend on real physical computers; the important point is that his description assumes a hypothetical computer that has a limitless amount of memory and is 100% reliable for all time. Such things do not actually exist. One can conceive of an abstraction in which there are abstract concepts that correspond precisely to the physical notions of ‘stop’ and ‘continue to run indefinitely’; this entails the assumption that one can apply either a ‘true’ or a ‘false’ value to those purely abstract concepts. One can make such assumptions - provided that one doesn’t assume that they are axiomatically true.
In summary, regardless of whether one applies the Platonist or non-Platonist interpretation of the Omega ‘number’, nothing that Chaitin has written in 40 years of relentless promotion of his Omega concept has provided a proof that he has actually defined an infinite sequence of digits for which there can be no finite algorithm that can generate that sequence.
Panu Raatikainen writes in a review of two of Chaitin’s books, Exploring Randomness and The Unknowable:
“Chaitin’s claims are nearly megalomaniacal. What else can one think of statements such as the following?: ‘AIT will lead to the major breakthrough of 21st century mathematics, which will be information-theoretic and complexity based characterizations and analyses of what is life, what is mind, what is intelligence, what is consciousness, of why life has to appear spontaneously and then to evolve.’ (Exploring Randomness, p. 163).”
He also says in the same review:
“It has been shown conclusively (see [the references below]) that Chaitin’s philosophical interpretations of his work are unfounded and false; they are based on various fatal confusions.”
There are a number of published papers that are critical of Chaitin’s incompleteness proofs based on information-theoretic complexity (see also the paper A Fundamental Flaw in Incompleteness Proofs by Gregory Chaitin, which shows that Chaitin’s proof depends on an unproven assumption):
- Germano D’Abramo, “On the information-theoretic approach to Gödel’s incompleteness theorem”, 2002, arXiv 0211078.
- Torkel Franzén, Gödel’s Theorem: An Incomplete Guide to its Use and Abuse, A K Peters, 2005, ISBN 1568812388.
- Michiel van Lambalgen, “Algorithmic information theory”, Journal of Symbolic Logic 54 (1989), pp. 1389–1400.
- Panu Raatikainen, “On interpreting Chaitin’s incompleteness theorem”, Journal of Philosophical Logic 27 (1998), pp. 569–586.
- Panu Raatikainen, “Algorithmic information theory and undecidability”, Synthese 123 (2000), pp. 217–225.
Some books by Chaitin based on his Omega number
Note: much of the content of these books is rehashed from previously published articles
- “Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory”, Vol. 8, World Scientific, 1990.
- “The Limits of Mathematics”, Springer-Verlag Singapore, 1998.
- “Meta Math! The Quest for Omega”, Vintage Books USA, 2006, ISBN‑13: 978‑1400077977.
- “Thinking about Gödel and Turing: Essays on Complexity, 1970-2007”, World Scientific, 2007.
- “Algorithmic Information Theory”, Cambridge University Press, 2008, ISBN‑13: 978‑0521616041.
- “Gödel’s Way: Exploits into an Undecidable World”, (with Francisco A. Doria and Newton C. A. da Costa), CRC Press, 2011.
Some articles by Chaitin based on his Omega number
- “Information-theoretic computation complexity” IEEE Transactions on Information Theory, 20.1 (1974): pp. 10-15.
- “A theory of program size formally identical to information theory” Journal of the ACM (JACM) 22.3 (1975): pp. 329-340.
- “Algorithmic information theory” IBM journal of research and development 21.4 (1977): pp. 350-359.
- “Gödel’s theorem and information” International Journal of Theoretical Physics 21.12 (1982): pp. 941-954.
- “Randomness and Gödel’s Theorem”, Mondes en Developpement, No. 54-55 (1986), pp. 125-128
- “An algebraic equation for the halting probability”, Information, randomness & incompleteness: papers on algorithmic information theory 8 (1987): 70.
- “Randomness in Arithmetic”, Scientific American 259, No. 1 (July 1988), pp. 80-85
- “Undecidability and randomness in pure mathematics”, Information, Randomness & Incompleteness, 2nd Edition, World Scientific, 1990, pp. 307-313.
- “A Random Walk in Arithmetic” New Scientist 125, No. 1709 (Mar 1990), pp. 44-46
- “Algorithmic Information & Evolution”, Perspectives on Biological Complexity, IUBS Press, 1991, pp. 51-60.
- “Information-theoretic incompleteness”, Applied Mathematics and Computation 52.1 (1992): pp. 83-101.
- “Randomness in Arithmetic and the Decline & Fall of Reductionism in Pure Mathematics”, EATCS Bulletin, No. 50 (June 1993), pp. 314-328.
- “The Berry Paradox” Complexity 1.1 (1995), pp. 26-30.
- “An invitation to algorithmic information theory” DMTCS 1996 Proceedings, Springer-Verlag Singapore, 1997, pp. 1-23
- “How to run algorithmic information theory on a computer: Studying the limits of mathematical reasoning”, Complexity 2.1 (1996): pp. 15-21.
- “The limits of mathematics” Journal of Universal Computer Science 2.5 (1996): pp. 270-305.
- “A Century of Controversy over the Foundations of Mathematics” Finite vs. Infinite, Springer-Verlag, 2000, pp. 75-100.
- “Mathematics: Randomness everywhere”, (with Cristian S. Calude) Nature 400.6742 (1999): pp. 319-320.
- “Paradoxes of randomness” Complexity 7.5 (2002): pp. 14-21.
- “Omega and why maths has no TOEs” Plus online magazine, December 2005
- “The limits of reason”, Scientific American 294.3 (2006): pp. 74-81.
- “How Much Information Can There Be in a Real Number?” International Journal of Bifurcation and Chaos 17 (2007), pp. 1933-1935.
- “An Algebraic Characterization of the Halting Probability” Fundamenta Informaticae 79 (2007), pp. 17-23.
- “The Halting Probability Omega: Irreducible Complexity in Pure Mathematics” Milan Journal of Mathematics 75 (2007) pp. 291-304.
- “What is a halting probability?” (with Cristian S. Calude) AMS Notices 57 (2010), pp. 236-237.