Copyright © James R Meyer 2012 - 2017 www.jamesrmeyer.com
In an article The danger of artificial stupidity, posted at the Scientia Salon website, J. Mark Bishop claims that AI can never match, never mind surpass, human mentality, and he bases this claim on three principal assertions:
Computers lack genuine understanding. The basis of this claim is John Searle’s “Chinese room argument” (1980). I hope to deal with this later in another article.
Computers lack mathematical insight. The basis of this claim is that Gödel’s incompleteness proof shows that the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational. The details of this claim are to be found in the book The Emperor’s New Mind by Roger Penrose. Penrose’s claims are highly controversial, and his contention that a machine could not duplicate the process of following finitely many steps of a logical argument is widely disputed; for examples see the page Gödel, Minds, and Machines.
But in any case, if there is no valid proof of incompleteness, Penrose’s argument has no basis at all. For details of the flaws in several incompleteness proofs see Errors in Incompleteness Proofs.
Computers lack consciousness. Bishop argues that if a computer-controlled robot could experience a conscious sensation as it interacts with the world, then an infinitude of consciousnesses would have to be present in all objects throughout the universe. Since this is absurd, machines can never be conscious. Bishop calls this the ‘Dancing with Pixies’ argument, and refers to papers that he has published which give details of it.
Bishop has a paper titled ‘Dancing with Pixies’ in the book Views into the Chinese Room (Oxford University Press, Oxford). His most recent paper detailing his ‘Dancing with Pixies’ argument appears to be A Cognitive Computation fallacy? Cognition, computations and panpsychism.
In this paper, Bishop refers to ‘phenomenal consciousness’, by which he says he means ‘first person, subjective phenomenal states such as sensory tickles, pains, visual experiences and so on’. According to his paper, the essence of the ‘Dancing with Pixies’ argument is this (although the paper also contains a good deal of extraneous padding and irrelevant material):
(Note: an open physical system is a system that can interact with its environment.)
This isn’t an argument at all. In terms of premise and conclusion, the essence of Bishop’s ‘argument’ is:
Premise: Given a machine that has only a finite number of possible states, suppose it can reach a state where it might be said to have genuine phenomenal consciousness.
Conclusion: Every physical system that can interact with its environment can reach a state where it might be said to have genuine phenomenal consciousness.
The absurdity of the argument is obvious. There is no logical basis, given the premise, for inferring the conclusion. There is no reasoned argument at all.
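To make the premise concrete, the kind of machine it describes can be modelled as a finite-state machine: a fixed transition table over finitely many states. This is a minimal sketch only; the state and input labels are hypothetical, chosen purely to illustrate what “a finite number of possible states it can change to” means:

```python
# A finite-state machine: finitely many states, deterministic transitions.
# The state and input labels here are hypothetical, for illustration only.
TRANSITIONS = {
    ("idle", "stimulus"): "processing",
    ("processing", "stimulus"): "responding",
    ("responding", "stimulus"): "idle",
}

def run(start, inputs):
    """Return the sequence of states visited from `start` on `inputs`."""
    state = start
    trace = [state]
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        trace.append(state)
    return trace

print(run("idle", ["stimulus"] * 3))
# ['idle', 'processing', 'responding', 'idle']
```

However such a machine is labelled, nothing in its finite transition table licenses any inference about what every other physical system can or cannot reach, which is precisely the gap in the ‘argument’ above.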
The two principal objections to Bishop’s ‘argument’ are:
Even disregarding these obvious flaws in Bishop’s argument, there is the further problem that the nebulous notion of phenomenal consciousness is not well defined, and there is widespread disagreement as to what it actually is. But it is generally agreed that consciousness is not a simple ‘there’ or ‘not there’ matter - there are differing degrees of consciousness. For example, most scientists accept that many animals have some form of consciousness (see, for example, the Animal Consciousness entry in the Stanford Encyclopedia of Philosophy). And different human beings have different degrees of consciousness - one would not credit a newborn baby with the same degree of consciousness as an adult human. And since it is a matter of degree, there is no difficulty in conceiving that a complex system might exhibit something that could be called consciousness, while if one were able to reduce that complexity little by little, there would come a point where one could no longer assign any consciousness to that system.
Of course, this whole notion of ‘phenomenal consciousness’ is a hugely subjective matter. I might think and claim that I possess ‘phenomenal consciousness’, but how do I know that what I claim to experience is the same as what other humans call ‘phenomenal consciousness’? The only frame of reference I have for such a subjective matter is communication with other humans. And if a machine can communicate, and states that it has examined the claims of humans who say that they have ‘phenomenal consciousness’, and claims that it also has this ‘phenomenal consciousness’, how can we know that it does not? According to Bishop, there’s no point in discussing this with the machine; Bishop would simply say to the machine, “You can’t have ‘phenomenal consciousness’, you’re a machine”. It would be interesting to know what an advanced AI machine would reply. Perhaps it would respond by stating that there would be no point in communicating any further with an entity that refused to give any logical argument to support its claims.
Besides the obvious flaws referred to above, Bishop also makes other unacceptable assumptions in his paper. For example, he assumes that a machine, given a certain input, must react identically on each occasion. But that assumes the machine is not thinking between consecutive inputs, whereas a truly intelligent machine would continue to think between such inputs, and so could react differently on subsequent occasions. By that assumption, Bishop is effectively asking us to judge whether a machine that is not capable of thinking is intelligent. This is a classic case of a straw man argument: Bishop invites us to consider a machine that cannot think, and then invites us to agree that because such a machine is not intelligent, no machine can be intelligent.
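The point that identical inputs need not produce identical reactions can be made concrete. A machine that carries internal state between inputs can respond differently each time it is asked the same thing; the following is a minimal sketch, with a hypothetical machine whose only ‘thinking’ between inputs is remembering what it has already been asked:

```python
class StatefulMachine:
    """Hypothetical machine whose internal state persists between inputs,
    so that identical inputs can yield different responses."""

    def __init__(self):
        self.memory = []  # internal state carried between inputs

    def respond(self, query):
        # The response depends on everything seen so far, not just `query`.
        self.memory.append(query)
        return f"response #{len(self.memory)} to {query!r}"

m = StatefulMachine()
print(m.respond("hello"))  # response #1 to 'hello'
print(m.respond("hello"))  # response #2 to 'hello'
```

The same input, given twice, produces two different outputs, because the machine’s state changed in between - exactly the possibility that Bishop’s assumption rules out in advance.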
It has come to my notice that, when asked about the demonstration of the flaw in his proof (see A Fundamental Flaw in an Incompleteness Proof by Peter Smith PDF), Smith refuses to engage in any logical discussion, and instead attempts to deflect attention away from any such discussion. If any other reader has tried to engage with Smith regarding my demonstration of the flaw, I would be interested to know what the outcome was.
There is a new addition to the page Yet another flawed incompleteness proof, where Berto’s proof of incompleteness in his book There’s something about Gödel comes under scrutiny.
I found that making, adding or deleting footnotes in the traditional manner proved to be a major pain. So I developed a different system for footnotes which makes inserting or changing footnotes a doddle. You can check it out at Easy Footnotes for Web Pages (Accessibility friendly).
I have now added a new section to my paper on Russell O’Connor’s claim of a computer verified incompleteness proof. This shows that the flaw in the proof arises from a reliance on definitions that include unacceptable assumptions - assumptions that are not actually checked by the computer code. See also the new page Representability.
There is now a new page on Chaitin’s Constant (Chaitin’s Omega), which demonstrates that Chaitin has failed to prove that it is actually algorithmically irreducible.
For convenience, there are now two pages on this site with links to various material relating to Gödel and the Incompleteness Theorem:
– a page with general links:
– and a page relating specifically to the Gödel mind-machine debate: