

30 Apr 2015    The Chinese Room

In 1980, the philosopher John Searle published a paper claiming to show that artificial intelligence machines could never have ‘understanding’, regardless of their reasoning abilities. The fundamental idea of Searle’s argument (commonly called the Chinese Room argument), reduced to its bare bones, is as follows:

Consider a thing with the following properties:

  1. The thing can take, as an input, sentences of a given language.
  2. The thing includes the ability to process the input and generate output sentences of that same language according to a set of instructions.
  3. The set of instructions is an integral part of the thing. The instructions determine what the output will be for any valid input, and are such that, for every valid input, the output would be an appropriate response from a human fluent in that language.
  4. The thing always follows the instructions.
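Properties 1–4 describe nothing more than a mechanical rule-follower. As a minimal sketch (the rule table and phrases below are hypothetical toy placeholders, not anything from Searle’s paper; in his scenario the instructions would cover every valid sentence of the language):

```python
# A minimal, hypothetical sketch of Searle's 'thing': a purely
# rule-driven responder. The rule table is a toy stand-in for
# Searle's vastly larger set of instructions.

RULES = {
    "ni hao": "ni hao, ni hao ma?",   # hypothetical input/output pairs
    "xie xie": "bu ke qi",
}

def respond(sentence: str) -> str:
    # Follow the instructions mechanically: look up the input and
    # return the prescribed output. No step in this process requires
    # any 'understanding' of what the sentences mean.
    return RULES.get(sentence, "dui bu qi, wo bu dong")
```

The processor here does exactly what properties 1–4 demand, and nothing in its operation depends on the meaning of the sentences it handles.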

From this Searle claims:

Premise: the processing part of the thing does not require any understanding of the language in order to process the input and generate an output according to the instructions.


Conclusion: the thing does not understand the language.


The absurdity of the argument is obvious. There is no logical basis, given the premise, for inferring the conclusion. There is no reasoned argument at all.


It is quite astonishing that controversy over Searle’s Chinese Room argument has persisted for so long, when a logical analysis of the argument readily demonstrates its absurdity.


Perhaps the reason is that Searle’s presentation dresses up the above fundamental argument with extraneous details that do not affect the basic premises. In Searle’s account, the thing is a closed room with no windows, the inputs and outputs are slips of paper pushed through slits in the wall, the language is Chinese, the set of instructions is written on paper, and the processing part of the thing is a human who does not understand Chinese. If the instructions are good enough, the responses of the room will be indistinguishable from those of a fluent Chinese speaker.


Searle concludes that, since the human in the room does not understand Chinese, the entire room does not understand Chinese, even though its responses are indistinguishable from those of a fluent Chinese speaker.


But when the extra baggage is removed from the argument, we see that Searle’s conclusion is simply the conclusion that he wants, reached by an appeal to intuition. The Chinese Room scenario provides no logical basis for that conclusion; instead it serves as a smoke screen, obscuring the fact that Searle has no logical support whatsoever for it.


Searle admits that there might be objections to his argument on various grounds, and gives counter-arguments to those objections. None of those counter-arguments addresses the lack of any logical inference from his premise to his conclusion.


It is unfortunate that responses to Searle seem to have concentrated on every aspect of his argument other than his failure to provide a logical reason for drawing his conclusion from the premise given. Such responses have created the impression that there might be some substance in his argument, whereas a logical analysis shows it to be hopelessly subjective and without any logical foundation.


The failure of Searle and others to perceive that the argument does not logically derive its conclusion from its premise appears to result from the absence of any definition of ‘understanding a language’, or of any way of measuring such understanding; an intuitive notion of ‘understanding’ is applied instead. Nowhere does Searle attempt to move towards an objective definition of what he intends ‘to understand a language’ to mean, nor does he give any consideration to the question of formulating an objective method of measuring ‘understanding’ of a language. Instead, Searle boldly asserts that he knows what ‘to understand a language’ means, but refuses to define it, with the result that no one can be sure what he is actually talking about. Searle asserts that his intuition and subjective judgment are to be the criteria by which it is known that no machine could ever be made that could understand a language, where ‘to understand a language’ means only what Searle wants it to mean in any given context, and is not to be subjected to any objective measurement.



While Searle’s Chinese Room scenario cannot be used to prove that no machine can ever ‘understand’ a human language, it nevertheless raises interesting issues regarding what we mean by ‘understanding’. Searle gives no objective measure of understanding of Chinese that we could apply either to the processor or to the entire thing. He blithely states that the entire thing understands nothing of Chinese, but does not consider how it might respond to an objective test of such understanding.


What Searle calls the ‘other minds reply’ is perhaps the closest he comes to considering the definition of understanding. Searle states that an objection might be:

“Searle has not given any information as to how one might determine whether another mind understands Chinese; one might expect that such a determination would have to be made from the behaviour of that mind; and Searle has not demonstrated that a machine could never pass such behavioural tests as well as a human.”

Searle dismisses this in a few sentences, without addressing the question of how understanding might be measured, and confuses matters by referring to ‘cognitive states’ rather than ‘understanding’ (continually changing the terms of reference is a common method of deflecting an inconvenient objection to an argument).


Searle’s counter-arguments are as follows (I have reworded them here, since the originals are abstruse and poorly worded):

  1. One needs to consider not how I know that people/machines have understanding, but rather what it is that I am attributing to them when I attribute understanding to them.

Here Searle deflects the question of an objective measurement of understanding, and dismisses it as unimportant. He considers his subjective judgment to be sufficient for determining what understanding is.

  2. When one states that a human has ‘understanding’, the attributes that underpin ‘understanding’ cannot be merely computational processes and their output, because the computational processes and their output can exist without understanding.

Here Searle argues that, since computational processes and their output can exist without understanding, no combination of computational processes and their outputs can ever be considered to have ‘understanding’. This argument is absurd. One might as well say that since parts of the brain can exist without having understanding, the brain as a complete entity cannot have understanding. Or that since muscles cannot walk, legs which use muscles cannot walk.
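This flaw (a fallacy of composition) can be made concrete with a hypothetical sketch: no single NAND gate can add numbers, yet a circuit composed solely of NAND gates can. The property of the whole is not a property of any part.

```python
# Sketch of the composition point: each NAND gate, taken alone,
# cannot add; a circuit built only from NAND gates can.

def nand(a: int, b: int) -> int:
    # The sole primitive: NOT(a AND b) on bits.
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # Standard 4-gate NAND construction of XOR.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a: int, b: int) -> tuple[int, int]:
    # (sum, carry) of two bits, built entirely from NAND gates:
    # sum = a XOR b, carry = a AND b = NOT(NAND(a, b)).
    return xor(a, b), nand(nand(a, b), nand(a, b))

# No individual gate 'adds', but the composite circuit does:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert s + 2 * c == a + b
```

By Searle’s style of reasoning, one would have to conclude that no circuit made of NAND gates can add, since no NAND gate can add; the half-adder shows that inference to be invalid.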

  3. In ‘understanding’ one must assume that there is a reality and knowability of the mental state in the same way that in physical sciences one has to assume the reality and knowability of physical objects.

Here Searle simply argues that one must assume that there is a ‘mental state’ that is real. But a ‘mental state’ is not itself a physical object; rather, it is a property of a physical object, and so it is dependent for its existence on the physical attributes of that object. Searle does not explain how this might refute the objection, unless he is assuming that no machine, regardless of its physical attributes, can have a ‘mental state’. But that simply introduces another ill-defined term, ‘mental state’, which adds nothing to Searle’s argument and serves only to confuse. Other attempts by Searle to deal with objections to his argument show similarly flawed logic and similar appeals to intuition.


Another interesting viewpoint on the Chinese Room argument can be seen at Conscious Entities - Against the Chinese Room.



Copyright © James R Meyer 2012 - 2018