Copyright © James R Meyer 2012 - 2017 www.jamesrmeyer.com
In 1980, the philosopher John Searle published a paper that claimed to show that artificial intelligence machines could never have ‘understanding’, regardless of their reasoning abilities. The fundamental idea of John Searle’s argument (commonly called the Chinese Room argument), taken down to its bare bones, is as follows:
Consider a thing with the following properties:
- it has an input by which information in some language can be passed to it,
- it has an output by which it can pass out information in that language,
- it has a set of instructions that determine, for any given input, what the output is to be,
- it has a processing part that generates the output from the input simply by following those instructions.
From this Searle claims:
Premise: the processing part of the thing does not require any understanding of the language in order to process the input and generate an output according to the instructions,
Conclusion: the thing does not understand the language.
The absurdity of the argument is obvious. There is no logical basis, given the premise, for inferring the conclusion. There is no reasoned argument at all.
It is quite astonishing that there has been ongoing controversy over Searle’s Chinese Room argument for so long, when a logical analysis readily demonstrates its absurdity.
Perhaps the reason is that the argument which Searle presents dresses up the above fundamental argument by adding in quite extraneous details which do not affect the basic premises. In Searle’s account, the thing is a closed room with no windows, the inputs and outputs are slits in the wall through which paper can be pushed, the language is Chinese, the set of instructions are written on paper, and the processing part of the thing is a human who does not understand Chinese. If the instructions are good enough, the responses of the room will be indistinguishable from the responses of a fluent Chinese human.
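The mechanics of the room can be sketched in a few lines of code. The rule table below is a hypothetical stand-in for Searle’s book of instructions (the phrases and replies are merely illustrative); the point is that the processing step is pure symbol manipulation, with no representation of meaning anywhere in the program:

```python
# A minimal sketch of the room as pure rule-following.
# RULES is a hypothetical stand-in for Searle's instructions: it pairs
# input symbol strings with output symbol strings, and nothing in the
# program represents what any of the symbols mean.

RULES = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What is your name?" -> "My name is Xiaoming."
}

def room(symbols: str) -> str:
    """Follow the instructions: look up the input, emit the output.

    The operation is the same whether the symbols are Chinese characters
    or arbitrary tokens - the lookup never consults any meaning.
    """
    return RULES.get(symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(room("你好吗？"))  # 我很好，谢谢。
```

With a large enough table (or, more realistically, a far more sophisticated set of instructions), the outputs could be indistinguishable from those of a fluent speaker, which is exactly the situation Searle stipulates.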
Searle concludes that, although the responses are indistinguishable from the responses of a fluent Chinese human, since the human in the room does not understand Chinese, then the entire room does not understand Chinese.
But when the extra baggage is removed from the argument, we see that Searle’s conclusion is simply the conclusion that he wants, reached by an appeal to intuition. Searle’s Chinese Room does not provide any logical basis for his conclusion; instead it serves as a smoke screen that has obscured the fact that no such basis exists.
Searle admits that there might be objections to his arguments on various grounds, and gives counter-arguments to those objections. None of the counter-arguments that he refers to address the issue of the lack of any logical inference of Searle’s conclusion from his premise.
It is unfortunate that responses to Searle seem to have concentrated on every aspect of his argument other than his failure to provide a logical reason for drawing his conclusion from the premises given. Such responses have unfortunately created the impression that there might be some substance in his argument, whereas a logical analysis shows his argument to be hopelessly subjective and without any logical foundation.
The failure of Searle and others to perceive that the argument does not logically derive its conclusion from the premise appears to result from the absence of any definition of ‘understanding a language’, or of any way of measuring it; an intuitive notion of ‘understanding’ is applied instead. Nowhere does Searle make any attempt to move towards an objective definition of what he intends ‘to understand a language’ to mean, nor does he give any consideration to formulating an objective method of measuring ‘understanding’ of a language. Instead Searle boldly asserts that he knows what ‘to understand a language’ means, but refuses to define it, with the result that no-one can be sure what he is actually talking about. Searle asserts that his intuition and subjective judgment are to be the criteria by which it is to be known that no machine could ever be made that could understand a language, where ‘to understand a language’ means only what Searle wants it to mean in any given context, and is not to be subjected to any objective measurement.
While Searle’s Chinese Room scenario cannot be used to prove that no machine can ever ‘understand’ a human language, it nevertheless raises interesting questions about what we mean by ‘understanding’. Searle gives no objective measure of understanding of Chinese that we can apply either to the processing part or to the entire thing. He blithely states that the entire thing understands nothing of Chinese, but does not consider how it might respond to an objective test of understanding of Chinese.
What Searle calls the ‘other minds reply’ is perhaps the closest he comes to considering the definition of understanding. Searle states that an objection might be:
“Searle has not given any information as to how one might determine whether another mind understands Chinese; one might expect that such a determination would have to be made from the behaviour of that mind; and Searle has not demonstrated that a machine can never pass such behavioural tests as well as a human.”
Searle dismisses this in a few sentences, without addressing the issue of measurement of understanding, and confuses the issue by referring to ‘cognitive states’ rather than ‘understanding’ (continually changing the terms of reference is a common method of deflecting an inconvenient objection to an argument).
Searle responds with several arguments (I have reworded them here, since his original arguments are abstruse and poorly worded).
Here Searle deflects the question of an objective measurement of understanding, and dismisses it as unimportant. He considers his subjective judgment to be sufficient for determining what understanding is.
Here Searle argues that since computational processes and their output can exist without understanding, no combination of computational processes and their outputs can ever be considered to have ‘understanding’. This argument is absurd. One might as well say that since parts of the brain can exist without having understanding, the brain as a complete entity cannot have understanding. Or that since muscles cannot walk, legs which use muscles cannot walk.
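The part-whole point can be made concrete with a hedged example of my own (nothing here is taken from Searle’s text). In the sketch below, no individual logic gate can add numbers, yet a circuit composed of such gates does add; a property of the whole need not be a property of any part:

```python
# Sketch of the part-whole point: no single boolean gate adds numbers,
# but a circuit composed of gates does. The "gates" are ordinary
# bitwise operations on single bits.

def full_adder(a: int, b: int, carry_in: int):
    """Add three bits using only AND, OR and XOR gates."""
    s1 = a ^ b
    total = s1 ^ carry_in                   # sum bit
    carry_out = (a & b) | (s1 & carry_in)   # carry bit
    return total, carry_out

def add(x: int, y: int, bits: int = 8) -> int:
    """Ripple-carry addition: chain full adders, one per bit position."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(19, 23))  # 42
```

No call to `full_adder` “knows how to add” two integers, just as no muscle can walk; the capability belongs to the arrangement of the parts, not to any part in isolation.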
Here Searle simply argues that one must assume that there is a ‘mental state’ that is real. But a ‘mental state’ is not itself a physical object; rather, it is a property of a physical object, so the ‘mental state’ is dependent for its existence on the physical attributes of that object. Searle does not explain how this might refute the objection, unless he is assuming that no machine, regardless of its physical attributes, can have a ‘mental state’. But that simply introduces another ill-defined term, ‘mental state’, which adds nothing to Searle’s argument and serves only to confuse. Other attempts by Searle to deal with objections to his argument show similarly flawed logic and similar appeals to intuition.
Another interesting viewpoint on the Chinese Room argument can also be seen at Conscious Entities - Against the Chinese Room.