Ned Block’s Blockhead Machine argument
13 Jan 2017
In his paper “Psychologism and Behaviorism” (Footnote: Psychologism and Behaviorism, Ned Block, 1981, Philosophical Review, 90, 5-43.) Ned Block conceives of a theoretical computer system (now commonly referred to as Blockhead) as part of a thought experiment. Block argues that the internal mechanism of a system is important in determining whether that system is intelligent, and claims that he can show that a non-intelligent system could pass the Turing test.
Block asks us to imagine a conversation lasting any given amount of time. He argues that there are only a finite number of syntactically and grammatically correct sentences that can be used to start a conversation, and that from this point on there is a limit to how many valid responses can be made to this first sentence, then to the second sentence, and so on until the conversation ends.
Block then asks us to imagine a computer programmed with every one of these possible conversations. Although it has been claimed that the number of sentences required for an hour-long conversation would be greater than the number of particles in the universe, Block argues that such a machine could hypothetically exist, so that his argument remains valid as a theoretical argument rather than one that can be applied in practice.
Given this hypothetical machine, Block invites us to agree that it could continue a conversation with a person on any topic, because the computer would be programmed not only with every sentence but with every sequence of sentences that might be input to it. On this basis, Block claims that the hypothetical machine would be able to pass the Turing test despite having no attributes that we would regard as indicative of intelligence.
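Block’s machine is, in effect, a giant lookup table keyed on the conversation so far. A minimal sketch in Python may make this concrete (the four-entry table here is purely hypothetical; any real table would be astronomically large, as discussed below):

```python
# Toy illustration of Block's lookup-table machine: every conversation
# prefix is a key, and the value is the canned reply. No reasoning
# happens at reply time -- only a lookup.
blockhead_table = {
    (): "Hello.",
    ("Hello.",): "How are you?",
    ("Hello.", "Fine, thanks."): "Glad to hear it.",
    ("Hello.", "Terrible, actually."): "Sorry to hear that.",
}

def blockhead_reply(conversation_so_far):
    """Return the pre-programmed response for this exact conversation
    history, or None if the prefix was never programmed in."""
    return blockhead_table.get(tuple(conversation_so_far))

print(blockhead_reply([]))                           # "Hello."
print(blockhead_reply(["Hello.", "Fine, thanks."]))  # "Glad to hear it."
```

The point of the sketch is that the machine’s “conversational competence” lives entirely in the table’s construction, not in any process that could be called thinking.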
Block claims that his argument shows that the internal composition of the machine must be considered in any assessment of whether that machine can be considered to be intelligent, and that the Turing test on its own cannot suffice. That is to say, his claim is:
Premise: the Blockhead machine does not use any intelligence to produce its response, yet it can pass the Turing test
Conclusion: the Turing test is not a sufficient condition for intelligence.
There are two principal flaws in Block’s argument:
1. Ignoring the time taken to react to a question
The principal assumption in Block’s argument is that the Blockhead machine, although impossibly large, is not infinitely large, and can contain all of what are considered to be intelligent responses that a human might make. Now, the Turing test is a test of whether a machine can emulate a human, and the longer the test lasts, the more difficult it is for a machine to pass. Block claims that the Blockhead machine he describes, as long as it is large enough, suffices for a Turing test of any duration.
However, Block makes no mention of the time taken for his finite but impossibly huge Blockhead machine to produce a response. He admits that his machine may be larger than the observable universe, but insists that it is a valid theoretical concept since it is nevertheless finite. But if the hypothesis results in an imaginary machine that is larger than the observable universe, it follows that this imaginary machine would be subject to physical limitations. For example, the response time to at least some questions would be greater than 24 hours, so there will always be a possibility that the response time to at least one question will exceed the time allocated for the test. (Block does suggest that new discoveries in sub-atomic physics might enable his Blockhead machine to be made on a human scale, but this is mere speculation, completely at odds with scientific opinion at the present time.)
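An order-of-magnitude check makes the point vivid. Assuming the lookup signal must physically traverse a machine the size of the observable universe (diameter commonly estimated at about 8.8 × 10²⁶ metres) at the speed of light, the minimum response time is not hours but billions of years:

```python
# Rough lower bound on response time for a machine spanning the
# observable universe, assuming signals travel at the speed of light.
SPEED_OF_LIGHT = 299_792_458      # metres per second
UNIVERSE_DIAMETER = 8.8e26        # metres, common estimate
SECONDS_PER_YEAR = 365.25 * 24 * 3600

crossing_time_years = UNIVERSE_DIAMETER / SPEED_OF_LIGHT / SECONDS_PER_YEAR
print(f"{crossing_time_years:.1e} years")  # roughly 9e10 years
```

Even this generous bound ignores the time needed to search the table itself; either way, the delay dwarfs any conceivable test duration.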
As is well known, the Turing test is, above all, a behavioural test: an assessment of the interaction of the entity being measured with a specific environment. A human judge will decide that a response time over a certain interval (which will be measured in seconds or minutes, depending on the question, rather than millions of years) is too long to be a normal human response time, and on that basis can decide that the Blockhead machine is not a human (at least not a normal one), nor a human-like intelligence. After all, we do not find it surprising that IQ tests are time-limited; we would class someone who takes thirty minutes to solve a puzzle as less intelligent than someone who solves the same puzzle in one minute.
2. Ignoring the possibility of a human introducing new words or symbols
In any conversation, a human can introduce new words, and use such new words in subsequent conversation to refer to things that he might otherwise use existing natural language to refer to. A human could easily invent words of 60 characters or more; using the English alphabet of 26 letters, the number of possible words of up to 60 characters is greater than the estimated number of atoms in the universe. Hence no machine of finite size could cope with every possible new word of up to 60 characters. In other words, such a machine could never exist, even hypothetically, in our universe.
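The arithmetic behind this claim is easy to verify: summing the counts of all letter strings of length 1 to 60 over a 26-letter alphabet already exceeds the commonly cited estimate of roughly 10⁸⁰ atoms in the observable universe.

```python
# Count all distinct letter strings of length 1..60 over a 26-letter alphabet.
possible_words = sum(26 ** k for k in range(1, 61))

ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

print(f"possible words: about 10^{len(str(possible_words)) - 1}")
print(possible_words > ATOMS_IN_UNIVERSE)  # True
```

Since the total comes to roughly 10⁸⁴, even a table with one entry per atom in the universe could not index every single-word reply, let alone every sentence built from such words.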
Block’s description of his machine is that it will only deal with “sensible strings”, and that the machine will be programmed using “imagination and judgment about what is to count as a sensible string”. A sensible string would, of course, be any string that a human might use to test the respondent in a Turing test! That is, it would include strings containing words of 60 characters or more! The size of the machine is now ridiculously enormous.
But even beyond that, Block’s argument looks still more preposterous when one considers that a human can also introduce new symbols and use them in subsequent conversation. The only limitations on new symbols are their overall size and the requirement that different symbols be easily distinguishable. There are thousands of Chinese characters, and a human could easily invent new ones. After all, humans must have invented all our language symbols at some point in time, so there is no reason to suppose that a human could not introduce a new symbol in a conversation.
Block’s machine is now becoming more and more ludicrously massive.
Block responds to this type of criticism by pleading (in his reply to objection 6 in his paper) that:
“My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible.”
This, of course, is absurdity masquerading as meaningful philosophy. The reality is that humans are physical entities that are subject to the limitations of the physical world. The Turing test is a test envisaged to be applied to physical entities that are also subject to physical limitations.
When Block claims that a hypothetical, non-physically-realizable machine could pass a test that is designed to be applied in the real physical world to real physical entities, he is simply imagining a magic machine that happens to possess some physical attributes (such as the ability to manipulate symbols) but also possesses magical properties that have no possible physical realization. So what is the conclusion?
A magic machine can do magic things that no physically realizable thing can do.
Eh - didn’t we already know that?