Logic and Language


Copyright © James R Meyer 2012 - 2017 www.jamesrmeyer.com


A reader wrote in wondering about claims of proofs of incompleteness based on arguments around the ‘Halting Problem’ (Footnote: The halting problem is the problem of determining whether any given computer program will eventually terminate or will run forever, on a theoretical computer with unlimited memory.), stating that he had not found any articles that gave a good explanation of such proofs. I have seen many claims of such ‘proofs’, but every one that I have examined included obvious unproven assumptions that immediately invalidate the claims.

The reader in question mentioned one source - Scott Aaronson, an Associate Professor in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. So I decided that it might be useful to give a brief analysis of Aaronson’s claims, which are fairly typical of this genre of ‘proof’. Interestingly, Aaronson states that he has a *“conviction that computer programs are ‘really’ at the core of Gödel’s Theorem, whatever any tradition-minded philosopher or logician might claim to the contrary.”*

Aaronson has a webpage entitled Gödel, Turing, and Friends where he claims that he can provide a very short proof of the Incompleteness Theorem based on the result of the ‘Halting Problem’.

On this page, Aaronson says:

“*Once you have Turing’s results, Gödel’s results fall out for free as a bonus. Why? Well, suppose the Incompleteness Theorem was false -- that is, there existed a consistent, computable proof system F from which any statement about integers could be either proved or disproved. Then given a computer program, we could simply search through every possible proof in F, until we found either a proof that the program halts or a proof that it doesn’t halt. (This is possible because the statement that a particular computer program halts is ultimately just a statement about integers.) But this would give us an algorithm to solve the halting problem, which we already know is impossible. Therefore F can’t exist.*”

Let’s set out clearly and methodically the essence of what Aaronson is saying here:

• Assume that there exists a consistent, computable proof system **F** from which any statement about integers can be proved or its negation can be proved.

• Then, given any computer program **P**, we can enumerate every proof of the system **F**; either a proof of the system **F** will be found for which there is a corresponding statement that the computer program **P** halts, or a proof of the system **F** will be found for which there is a corresponding statement that the computer program **P** does not halt.

• Assume that there is an algorithm **Q** that can execute this process.

• Then **Q** is an algorithm that can take as an input any computer program and which can tell if any computer program halts or does not halt.

• But this contradicts the already established result that no such algorithm can exist; hence, Aaronson concludes, there can be no such system **F**.
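The enumeration step that Aaronson appeals to can be sketched in ordinary code. This is purely illustrative: no such complete system exists, so a toy list of ‘theorems’ stands in for the enumerable proofs of such a system, and all the names here (`THEOREMS_OF_F`, `decide_halting`, `prog_A`, `prog_B`) are hypothetical.

```python
# Toy stand-in for the assumed complete system F: in reality F's proofs
# would be enumerated one by one, forever; a finite list fakes that here.
THEOREMS_OF_F = [
    "prog_A halts",
    "prog_B does not halt",
]

def decide_halting(program_name):
    """Search F's proofs until one settles whether the program halts."""
    for theorem in THEOREMS_OF_F:
        if theorem == f"{program_name} halts":
            return True          # found a proof that the program halts
        if theorem == f"{program_name} does not halt":
            return False         # found a proof that it never halts
    # If F really were as assumed, this line would be unreachable.
    raise RuntimeError("no proof found either way")
```

If such a `decide_halting` routine existed, it would solve the halting problem; that is the contradiction on which Aaronson’s argument turns.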

But in any proof by contradiction, the presence of a contradiction tells us only that at least one of the prior assumptions is untenable. It does not tell us which one, or whether more than one is untenable.

Aaronson claims that “*a statement that a particular computer program halts is ultimately just a statement about integers*”. One presumes he means here that whether a computer halts on a particular program depends on conditionals, and that every conditional can be reduced to a conditional about numbers. So for every conditional in a computer program there is a corresponding statement about integers - and vice versa.

What Aaronson fails to address is the question of how this correspondence is included within the purported algorithm *Q*. It must be included, because otherwise, given any proof of the system *F*, the algorithm would have no way of determining whether that proof corresponds to a statement that the computer program *P* halts or to a statement that it does not halt.

This means that Aaronson is assuming that the computer program *Q* can include the entire information of this mapping from every proof of the system *F* to the corresponding statements about whether computer programs halt - an assumption that is nowhere proved.

In any proof by contradiction, if more than one assumption precedes the contradiction, the contradiction tells us only that at least one of those assumptions is invalid; it does not show which one. Aaronson’s argument fails to show which of the prior assumptions is incorrect. His ‘proof’ is just one more claimed incompleteness proof that is based on an unfounded assumption of self-reference; such claimed ‘proofs’ are regrettably common and completely worthless, for once the self-reference is assumed, the remainder of the so-called ‘proof’ is a triviality.

On another webpage entitled Rosser’s Theorem via Turing machines, Aaronson claims to provide another Incompleteness proof. (Footnote: ‘Rosser’s Theorem’ is a term commonly used to refer to a different theorem than that which Aaronson refers to. Aaronson is here referring to John Rosser’s paper *“Extensions of some theorems of Gödel and Church”*, published in the Journal of Symbolic Logic vol. 1, 1936, reprinted in the book *The Undecidable: Basic papers on undecidable propositions, unsolvable problems and computable functions*, ed Martin Davis, Raven Press, 1965.)

He bases his argument on what he calls *“The Consistent Guessing Problem”*. The full text is readily accessible on the aforementioned webpage. Here we rewrite the text to make the structure of Aaronson’s argument clearer.

Note that Aaronson uses the terms *‘accept’* and *‘reject’*. Since the terms *‘accept’* and *‘reject’* can be replaced by any two different computer actions (other than halting), here we simply replace these terms *‘accept’* and *‘reject’* by *‘Does X’* and *‘Does Y’* respectively - this makes no difference to the actual argument, but I think it makes it easier to read.

Assumption: There exists a Turing machine **P** such that, given as input a description **⟨M⟩** of a Turing machine **M**:

1. If **M** does X on a blank tape, then the machine **P** does X and halts.

2. If **M** does Y on a blank tape, then the machine **P** does Y and halts.

3. If **M** runs forever on a blank tape, then the machine **P** does either X or Y and halts.

If the machine **P** exists, then there exists a Turing machine **Q** that, given as input a description **⟨M⟩** of another Turing machine **M**:

1. If **M** does X on a blank tape, then the machine **Q** does Y and halts.

2. If **M** does Y on a blank tape, then the machine **Q** does X and halts.

3. If **M** runs forever on a blank tape, then the machine **Q** does either X or Y and halts.

Now run **Q** on the input of its own description **⟨Q⟩**.

If **Q**(**⟨Q⟩**) does X and halts, then it does Y and halts; if **Q**(**⟨Q⟩**) does Y and halts, then it does X and halts. In either case there is a contradiction, so neither case 1 nor case 2 can apply.

That leaves the remaining case 3, that **Q**(**⟨Q⟩**) runs forever. But **Q** is required to do either X or Y and halt on every input, so again this results in a contradiction.

Hence one or more of the preceding assumptions is invalid.

Conclusion: Therefore there can be no such machine **P**.
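The diagonal step in the argument above can be mimicked in ordinary code. This is a sketch under loose assumptions: machine ‘descriptions’ are simply Python callables, the names `make_Q` and `refute` are introduced here for illustration, and the identity function stands in for the encoding **⟨·⟩**. Whatever candidate guesser **P** we supply, the **Q** built from it behaves, on its own description, in a way that contradicts **P**’s verdict about it:

```python
def make_Q(P):
    """Build Q from a candidate guesser P: Q does the opposite of P's verdict."""
    def Q(description):
        return "Y" if P(description) == "X" else "X"
    return Q

def refute(P, encode=lambda machine: machine):
    """Run Q on its own description and compare with P's verdict about Q."""
    Q = make_Q(P)
    verdict = P(encode(Q))   # what P says Q does on its own description
    actual = Q(encode(Q))    # what Q actually does
    return verdict, actual   # by construction these always differ

# Whatever P we try is wrong about Q - e.g. a P that always answers "X":
verdict, actual = refute(lambda description: "X")
assert verdict != actual
```

The point of the sketch is only the shape of the diagonal argument: `Q` is defined to invert `P`’s answer, so `P` can never be right about `Q`’s behaviour on its own description.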

At this point, Aaronson introduces another assumption (the assumption which the proof is intended to prove to be invalid):

Assume: there exists a complete and consistent formal system **F**, which is powerful enough to talk about Turing machines.

Then there exists an algorithm that operates as follows:

Enumerate all proofs and disproofs of the system **F**.

Examine each proof/disproof in order of the enumeration.

If a proof of the statement *“The machine M does X and halts on a blank tape.”* is found, then do Y and halt.

If a disproof of the statement *“The machine M does X and halts on a blank tape.”* is found, then do X and halt.

Because the system **F** is complete, there exists either a proof or a disproof of the statement, “The machine **M** does X and halts on a blank tape.”

Because **F** is consistent, if **M** does Y then there cannot be a proof in **F** that **M** does X, while if **M** does X then there cannot be a proof in **F** that **M** does Y.

Therefore if **M** does X and halts, the algorithm does Y and halts; if **M** does Y, the algorithm does X and halts. (If **M** does not halt, then there must be a disproof of the statement *“The machine M does X and halts on a blank tape.”*, so the algorithm does X and halts).
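The proof-search routine described above can be sketched along the same lines as before. Again this is a toy illustration: a finite list of hypothetical ‘proofs and disproofs’ stands in for the enumeration of **F**, and the machine names (`M1`, `M2`) are made up.

```python
# Toy stand-in for the enumeration of F's proofs and disproofs.
ENUMERATION_OF_F = [
    ("proof",    "The machine M1 does X and halts on a blank tape."),
    ("disproof", "The machine M2 does X and halts on a blank tape."),
]

def guess(machine_name):
    """Aaronson's routine: scan F's enumeration and act on the first hit."""
    target = f"The machine {machine_name} does X and halts on a blank tape."
    for kind, statement in ENUMERATION_OF_F:
        if statement == target:
            # proof that M does X -> do Y and halt; disproof -> do X and halt
            return "Y" if kind == "proof" else "X"
    return None  # unreachable if F really were complete
```

This is exactly the behaviour specified for the machine **Q** above, which is why the assumed existence of **F** is held to contradict the earlier result.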

But if there is an algorithm to do the above, then there is a Turing machine that can do the above. But such a Turing machine would be the Turing machine **Q** described above; and it has been shown that there can be no such machine **Q**.

This is a contradiction.

Hence there cannot be any such system **F**.

So, if one accepts Aaronson’s argument, it only proves that there cannot be a formal system that is complete and consistent and is *“powerful enough to talk about Turing machines”*. It says nothing about formal systems that cannot do so.

But Rosser’s result (as in Gödel’s proof) applies to **any** formal system that includes a certain amount of number theory.

So how can it be that Aaronson claims that his result is the same as Rosser’s result? The only explanation is that Aaronson simply assumes that there cannot be a consistent and complete formal system that includes such number theory but which is not *“powerful enough to talk about Turing machines”*. Calling the result of such unfounded assertions a ‘proof’ is absurd - it is a mockery of the principles of logical proof. It is a case of assuming whatever is needed in order to achieve the desired result, where the end is more important than the means. No-one should be deceived by sloppy arguments such as these.

Consider what a system that is *“powerful enough to talk about Turing machines”* entails: such a system is able to make statements about **any** conditional statement that can occur in any computer program.

So, ultimately, both of Aaronson’s ‘proofs’ rely on the assumption of self-reference. As I have noted elsewhere on this site, this is a very common way of sidestepping the difficulties of actually trying to prove such self-reference. Once this overwhelmingly inconvenient impediment is assumed away, the rest of an incompleteness ‘proof’ is a triviality.

