The Halting Problem and Incompleteness Proofs
A reader wrote in wondering about claims of proofs of Incompleteness using arguments based around the ‘Halting Problem’, (Footnote: The halting problem is the problem of determining, for any given computer program running on a theoretical computer with unlimited memory, whether that program will eventually terminate or will run forever. See The Halting Problem.) stating that he had not found any articles that gave a good explanation of such proofs. I have seen many claims of such ‘proofs’, but in any that I have examined, the ‘proofs’ included very obvious unproven assumptions that immediately invalidate the claims.
The reader in question mentioned one source - Scott Aaronson, an Associate Professor in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. So I decided that it might be useful to give a brief analysis of Aaronson’s claims, which are fairly typical of this genre of ‘proof’. Interestingly, Aaronson states that he has a “conviction that computer programs are ‘really’ at the core of Gödel’s Theorem, whatever any tradition-minded philosopher or logician might claim to the contrary.”
Scott Aaronson’s Incompleteness ‘Proof’ No.1
Aaronson has a webpage entitled Gödel, Turing, and Friends where he claims that he can provide a very short proof of the Incompleteness Theorem based on the unsolvability of the ‘Halting Problem’.
On this page, Aaronson says:
“Once you have Turing’s results, Gödel’s results fall out for free as a bonus. Why? Well, suppose the Incompleteness Theorem was false -- that is, there existed a consistent, computable proof system F from which any statement about integers could be either proved or disproved. Then given a computer program, we could simply search through every possible proof in F, until we found either a proof that the program halts or a proof that it doesn’t halt. (This is possible because the statement that a particular computer program halts is ultimately just a statement about integers.) But this would give us an algorithm to solve the halting problem, which we already know is impossible. Therefore F can’t exist.”
Let’s set out clearly and methodically the essence of what Aaronson is saying here.
- Assume that there exists a complete, consistent, computable proof system F from which any statement about integers can be proved or its negation can be proved.
- Take any computer program P.
- Search through all proofs of the system F.
- Eventually either:
  - a proof of the system F will be found for which the corresponding statement is that the computer program P halts, or
  - a proof of the system F will be found for which the corresponding statement is that the computer program P does not halt.
- Since the above process is an algorithm, there can be a computer program Q that can execute this process (a sketch of such a program is given after this list).
- Therefore, there can be a computer program Q that can take as its input any computer program and tell whether that program halts or does not halt.
- This, says Aaronson, is a contradiction, since the halting problem is unsolvable.
- Therefore, says Aaronson, there cannot be any such system F.
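To make the structure of this argument explicit, here is a minimal sketch (in Python-style pseudocode) of the purported program Q. Every helper function in it is a hypothetical placeholder of mine - enumerate_proofs, proves, statement_that_halts and statement_that_does_not_halt are not real functions; their availability is exactly what the argument assumes.

```python
# A minimal sketch of the purported program Q. Every helper below is a hypothetical
# placeholder; its existence is precisely what Aaronson's argument assumes.

def enumerate_proofs(F):
    """Hypothetical: yield every proof of the formal system F, one after another."""
    raise NotImplementedError

def statement_that_halts(P):
    """Hypothetical: the statement about integers said to correspond to 'the program P halts'."""
    raise NotImplementedError

def statement_that_does_not_halt(P):
    """Hypothetical: the statement about integers said to correspond to 'the program P never halts'."""
    raise NotImplementedError

def proves(proof, statement):
    """Hypothetical: check whether a given proof of F proves a given statement."""
    raise NotImplementedError

def Q(P, F):
    """The purported halting decider: search the proofs of F until one settles whether P halts."""
    for proof in enumerate_proofs(F):
        if proves(proof, statement_that_halts(P)):
            return "halts"
        if proves(proof, statement_that_does_not_halt(P)):
            return "does not halt"
    # If F really were complete, this loop would (purportedly) always return an answer.
```

Note that the mapping from a program P to ‘the corresponding statement about integers’ (statement_that_halts and statement_that_does_not_halt above) has simply been assumed to be available to the program Q; this is precisely the point taken up below.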
But in any proof by contradiction, the presence of a contradiction only tells us that at least one of the prior assumptions is untenable. It does not tell us if there is more than one untenable assumption, or which one might be untenable.
Aaronson claims that “a statement that a particular computer program halts is ultimately just a statement about integers”. One presumes he means here that whether a computer stops on a particular program depends on a conditional, and all conditionals can be reduced to conditionals about numbers. So that means that for every conditional in a computer program, there is a corresponding statement about integers - and vice-versa.
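As a simple illustration of that correspondence (my example, not Aaronson’s):

```python
# An illustrative example (not taken from Aaronson's text): whether this function halts
# is determined by a conditional, and that conditional is a statement about integers,
# namely "there exists an integer k with 0 <= k <= n such that k*k == n".

def halts_only_if_square(n):
    if any(k * k == n for k in range(n + 1)):
        return        # halts
    while True:
        pass          # runs forever

# So the claim "halts_only_if_square(10) halts" corresponds to the arithmetical statement
# "there is an integer k with k*k == 10", which is false.
```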
What Aaronson fails to address is the question of how this correspondence is included within the purported algorithm Q. It must be included, because otherwise, given any proof of the system F (which is a proof of a statement about numbers), there would be no information as to what the corresponding statement regarding a computer program might be.
This means that Aaronson is assuming that the computer program Q can include the entire information of this mapping from every proof of the system F to every program of the computer language of any program P, including the program Q itself. It therefore comes as no surprise that Aaronson can then pull out of the hat the desired result - because Aaronson has simply assumed - without providing any logical proof - that the program Q is a valid program that can reference itself in the desired manner.
As noted above, when more than one assumption leads to a contradiction, the contradiction only tells us that at least one of those assumptions is invalid; it does not show which one. Aaronson’s argument fails to show which of the prior assumptions was incorrect. Aaronson’s ‘proof’ is just one more claim of an incompleteness proof that is based on an unfounded assumption of self-reference; such claimed ‘proofs’ are regrettably common and are completely worthless, for once the self-reference is assumed, the remainder of the so-called ‘proof’ is a triviality.
Scott Aaronson’s Incompleteness ‘Proof’ No.2
On another webpage entitled Rosser’s Theorem via Turing machines, Aaronson claims to provide another Incompleteness proof. (Footnote: ‘Rosser’s Theorem’ is a term commonly used to refer to a different theorem than that which Aaronson refers to. Aaronson is here referring to John Rosser’s paper Extensions of some theorems of Gödel and Church, published in the Journal of Symbolic Logic vol. 1, 1936, reprinted in the book The Undecidable: Basic papers on undecidable propositions, unsolvable problems and computable functions, ed. Martin Davis, Raven Press, 1965.)
He bases his argument on what he calls “The Consistent Guessing Problem”. The full text is readily accessible on the aforementioned webpage. Here we rewrite the text in order to make the structure of Aaronson’s argument clearer.
Note that Aaronson uses the terms ‘accept’ and ‘reject’. Since these can be replaced by any two different computer actions (other than halting), here we simply replace ‘accept’ and ‘reject’ by ‘Does X’ and ‘Does Y’ respectively - this makes no difference to the actual argument, but I think it makes it easier to read.
Assumption: There exists a Turing machine P such that, given as input a description ⟨M⟩ of a Turing machine M:
- If M does X on a blank tape, then the machine P does X and halts.
- If M does Y on a blank tape, then the machine P does Y and halts.
- If M runs forever on a blank tape, then the machine P does either X or Y and halts.
If the machine P exists, then there exists a Turing machine Q that, given as input a description ⟨M⟩ of another Turing machine M:
- If M does X on a blank tape, then the machine Q does Y and halts.
- If M does Y on a blank tape, then the machine Q does X and halts.
- If M runs forever on a blank tape, then the machine Q does either X or Y and halts.
Now run Q on the input of its own description ⟨Q⟩.
If Q(⟨Q⟩) does X and halts, then it does Y and halts; if Q(⟨Q⟩) does Y and halts, then it does X and halts. In either case there is a contradiction, so neither case 1 nor case 2 can apply.
That leaves the remaining case 3, that Q(⟨Q⟩) runs forever. But since the machine P halts on every input, the machine Q built from it also halts on every input, so again this results in a contradiction.
Hence one or more of the preceding assumptions is invalid.
Conclusion: Therefore there can be no such machine P.
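To make the diagonal step of this argument explicit, here is a minimal sketch of how the machine Q would be obtained from the assumed machine P. The function P below is purely a hypothetical placeholder for the assumed machine, and the strings ‘X’ and ‘Y’ stand for the two actions.

```python
# Sketch of the construction of Q from the assumed machine P. Nothing here is a real
# Turing machine; P is a hypothetical placeholder for the machine assumed to exist.

def P(description_of_M):
    """Hypothetical: does X if M does X on a blank tape, does Y if M does Y on a blank
    tape, and does either X or Y (but always halts) if M runs forever on a blank tape."""
    raise NotImplementedError

def Q(description_of_M):
    """Q asks P about the machine M and then performs the opposite action."""
    return "Y" if P(description_of_M) == "X" else "X"

# The contradiction is claimed to arise by running Q on its own description <Q>: whatever
# answer P gives about Q, the machine Q then does the opposite; and since P always halts,
# Q always halts too, so Q(<Q>) cannot run forever.
```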
At this point, Aaronson introduces another assumption (the assumption which the proof is intended to prove to be invalid):
Assume: there exists a complete and consistent formal system F, which is powerful enough to talk about Turing machines.
Then there exists an algorithm that operates as follows:
Enumerate all proofs and disproofs of the system F.
Examine each proof/disproof in turn.
If a proof of the statement “The machine M does X and halts on a blank tape.” is found, then do Y and halt.
If a disproof of the statement “The machine M does X and halts on a blank tape.” is found, then do X and halt.
Because the system F is complete, there exists either a proof or a disproof of the statement, “The machine M does X and halts on a blank tape.”
Because F is consistent, if M does Y then there cannot be a proof in F that M does X, while if M does X then there cannot be a proof in F that M does Y.
Therefore if M does X and halts, the algorithm does Y and halts; if M does Y, the algorithm does X and halts. (If M does not halt, then there must be a disproof of the statement “The machine M does X and halts on a blank tape.”, so the algorithm does X and halts).
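Purely to show the structure of this part of the argument, here is a minimal sketch of that algorithm. As before, every helper in it (enumerate_proofs_and_disproofs, statement_does_X, proves, disproves) is a hypothetical placeholder of mine; in particular, the encoding of statements about the machine M into the system F is simply assumed.

```python
# Sketch of the algorithm described above. All helpers are hypothetical placeholders.

def enumerate_proofs_and_disproofs(F):
    """Hypothetical: yield every proof and every disproof of the system F."""
    raise NotImplementedError

def statement_does_X(description_of_M):
    """Hypothetical: the statement of F saying 'The machine M does X and halts on a blank tape'."""
    raise NotImplementedError

def proves(item, statement):
    """Hypothetical: check whether 'item' is a proof of 'statement' in F."""
    raise NotImplementedError

def disproves(item, statement):
    """Hypothetical: check whether 'item' is a disproof of 'statement' in F."""
    raise NotImplementedError

def guessing_algorithm(description_of_M, F):
    statement = statement_does_X(description_of_M)
    for item in enumerate_proofs_and_disproofs(F):
        if proves(item, statement):
            return "Y"    # do Y and halt
        if disproves(item, statement):
            return "X"    # do X and halt
    # If F really were complete, one or the other would (purportedly) always be found.
```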
But if there is an algorithm to do the above, then there is a Turing machine that can do the above. But such a Turing machine would be the Turing machine Q as described above; and it has been shown that there can be no such machine Q.
This is a contradiction.
Hence there cannot be any such system F.
So, if one accepts Aaronson’s argument, it only proves that there cannot be a formal system that is complete and consistent and is “powerful enough to talk about Turing machines”. It says nothing about formal systems that cannot do so.
But Rosser’s result (as in Gödel’s proof) applies to any formal system that includes a certain amount of number theory.
So how can it be that Aaronson claims that his result is the same as Rosser’s result? The only explanation is that Aaronson simply assumes that every formal system that contains a certain amount of arithmetic is inherently “powerful enough to talk about Turing machines”. But of course that is complete nonsense. And with incompleteness ‘proofs’ you will find, time and time again, bizarre assumptions that a basic arithmetical system can by itself make meaningful statements about all sorts of things other than numbers.
Calling the result of such unfounded assertions a ‘proof’ is absurd - it is a mockery of the principles of logical proof. It is a case of assuming whatever is needed in order to achieve the desired result, where the end is more important than the means. No-one should be deceived by sloppy arguments such as these.
Consider what a system that is “powerful enough to talk about Turing machines” entails: such a system is able to make statements about any conditional statement that can occur in any Turing machine, and a conditional statement can be based on any proposition that can be stated in that system. So the system that Aaronson refers to is one that is powerful enough to make statements about any proposition in that system; which means it is a system in which there can be a proposition that makes a statement about itself - in other words, a self-referential system. So the essence of Aaronson’s argument is that there can be no complete and consistent formal system that can also reference itself in a certain manner. But it says nothing about formal systems that do not reference themselves in this way.
So, ultimately, both of Aaronson’s ‘proofs’ rely on the assumption of self-reference. As I have noted elsewhere on this site, this is a very common way of sidestepping the difficulties of actually trying to prove such self-reference. Once this overwhelmingly inconvenient impediment is assumed away, the rest of an incompleteness ‘proof’ is a triviality.
See also A Flawed Incompleteness Proof by Sebastian Oberhoff which also makes invalid assumptions about Turing machines.
Rationale: Every logical argument must be defined in some language, and every language has limitations. Attempting to construct a logical argument while ignoring how the limitations of language might affect that argument is a bizarre approach. The correct acknowledgment of the interactions of logic and language explains almost all of the paradoxes, and resolves almost all of the contradictions, conundrums, and contentious issues in modern philosophy and mathematics.