Man versus Machine

Page last updated 13 June 2022

 

These days there is more and more discussion of artificial intelligence and the changes it will bring to our lives in the future. We see that there are more and more tasks that AI can perform better than any human can, and AI systems can now themselves design new and improved AI systems, see Google’s New AI Designs AI Better Than Humans Could. Elon Musk has spoken about the potential dangers of AI on multiple occasions. Ray Kurzweil, Google’s Director of Engineering, has claimed that by 2029 AI will have human-level intelligence and comfortably pass any valid Turing test, with responses at least on a par with those of any human, regardless of the questions posed to it. He has also claimed that by 2045 a ‘Singularity’ will be reached, where AI is significantly smarter than any human, see Kurzweil Claims That the Singularity Will Happen by 2045. And Masayoshi Son, CEO of SoftBank, has predicted that by 2047 a single computer chip will have an IQ equivalent to 10,000, see Softbank CEO: The Singularity Will Happen by 2047.

 

But, despite the evidence that AI is progressing by leaps and bounds, there are nevertheless many Luddites who prefer to believe that human minds/brains are somehow different in principle from any machine, and that they will always have some sort of superiority over any AI machine. The two principal claims that these Luddites commonly bring to the table in support of their viewpoint are that it is impossible for any AI to fully understand “Gödel’s incompleteness theorem”, and that it is impossible for any AI to achieve consciousness. Often the two claims are intertwined.

 

However, despite how frequently these claims are made, it is difficult to find attempts to provide a logically reasoned argument to support them. Perhaps the best in-depth attempts to give some sort of supporting argument in clear, unambiguous language have been made by Roger Penrose, in his books The Emperor’s New Mind (Footnote: Roger Penrose, The Emperor’s New Mind, Oxford University Press, (1989) ISBN: 0198519737 The Emperor’s New Mind: Details.) and Shadows of the Mind. (Footnote: Roger Penrose, Shadows of the Mind, Oxford University Press, (1994) ISBN: 0198539789 Shadows of the Mind: Details.) In these books, Penrose claims that no AI can ever surpass human brains in finding mathematical “truths”. He bases his claims on Gödel’s proof of incompleteness, since, according to Penrose, a human can “see” the “truth” of Gödel’s proof of incompleteness, but no formal system can “see” that “truth”. Therefore, according to Penrose, mathematical insight cannot be mechanized, and there can be no AI that can replicate what a human does when “seeing” the “truth” of Gödel’s proof of incompleteness. Hence, claims Penrose, human brains must be inherently superior to any AI system. Penrose totally ignores the possibility that what he thinks he might be “seeing” in Gödel’s proof of incompleteness isn’t any profound truth, but a logical error, see Gödel’s theorem. But of course, he wouldn’t want to contemplate that possibility, as it would utterly destroy his claim that the human mind is superior to any possible AI.

 

Anyway, let’s consider Penrose’s claims. If a person can understand a mathematical proof, where the proof proceeds without making any non-rule-based assumptions other than the fundamental axioms, then for each individual step of the proof, that person must be able to deduce the result of that step from the information prior to that step. Every such deduction is given by a set of clearly defined rules, and so any such deduction can be replicated by an algorithm. This applies to every step of a proof. The totality of all such deductions/algorithms for a given proof can be replicated by one single algorithm that goes through all the steps of the individual algorithms for that proof.
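To make this concrete, here is a minimal sketch of such a mechanical check, written in Python. It is purely my own illustration - the formula encoding and the single rule used (modus ponens) are invented for this example and are not taken from Penrose or Gödel - but it shows the principle: a step is accepted only if it is a premise, has already been established, or follows from earlier lines by a clearly defined rule, which is exactly the kind of thing an algorithm can do.

    # Toy proof checker: a step is accepted only if it is a premise, has already
    # been established, or follows from earlier lines by modus ponens
    # ("from P and P -> Q, conclude Q").  Plain formulas are strings; an
    # implication P -> Q is encoded as the tuple (P, "->", Q).

    def follows_by_modus_ponens(statement, earlier):
        # True if some earlier line is an implication whose consequent is
        # 'statement' and whose antecedent is also among the earlier lines.
        for line in earlier:
            if isinstance(line, tuple) and len(line) == 3 and line[1] == "->":
                antecedent, _, consequent = line
                if consequent == statement and antecedent in earlier:
                    return True
        return False

    def check_proof(premises, steps):
        accepted = list(premises)
        for step in steps:
            if step in accepted or follows_by_modus_ponens(step, accepted):
                accepted.append(step)
            else:
                return False      # a step with no rule-based justification
        return True

    # Premises: A, A -> B, B -> C.  A claimed proof of C: first derive B, then C.
    premises = ["A", ("A", "->", "B"), ("B", "->", "C")]
    print(check_proof(premises, ["B", "C"]))   # True: every step is mechanical
    print(check_proof(premises, ["C"]))        # False: C does not yet follow

The particular rule does not matter; what matters is that every step of a genuine proof can be verified by this kind of purely mechanical procedure.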

 

So, if Gödel’s “truth” result that Penrose is able to “see”, but which no AI system can ever “see”, is not given by some such algorithm, it must be the case that what Penrose calls “seeing” the “truth” of a proposition cannot be obtained by any logical deduction - which means it must be obtained by some leap of faith, where an assumption is used instead of a deduction. Penrose tries to camouflage what he is claiming by referring to mathematicians having insights, or intuition. But the crucial point that Penrose ignores is that if a mathematician has some sort of intuition or insight or hunch, his next step should always be to confirm that his hunch was correct, by subsequently proving rigorously what he had previously suspected. Otherwise, mathematics is simply a childish game where no-one knows or cares whether mathematical claims are correct.

 

As Daniel Dennett says, “Whenever we say we solved some problem ‘by intuition’ we don’t know how we solved it”. (Footnote: Daniel C Dennett, Darwin’s dangerous idea: Evolution and the meanings of life, first ed 1995, pub Penguin (New ed Sept. 1996) ISBN: 9780140167344.) When Penrose says that he cannot conceive of any algorithm that could enable one to “see” the “truth” of a certain mathematical statement, he is really saying that he doesn’t know how he is able to “see” the “truth” of that statement. What he does is suggest that there might be some quantum process occurring in the brain that accounts for this “seeing” of the “truth”, but that doesn’t even begin to explain how that might bypass the need for a logical deduction of all the steps that lead to that “seeing” of the “truth”. The argument thus boils down to:

“We don’t understand how we “see” the “truth” of some mathematical statements - and at the quantum level things occur in a different way to things that occur at large scales - so maybe something about quantum processes enables us to “see” the “truth” of some mathematical statements?”

It’s a desperate argument, invoking a claim that there has to be some magical property that we can’t understand but which can enable us to “see” the “truth”. Penrose’s argument is so absurd that it beggars belief that a competent mathematician could make it; it is an appeal to rely on faith rather than logic and evidence. Furthermore, it doesn’t even begin to explain why, even if such a quantum process were involved in the human brain, no machine could ever use such a process. And it can be noted that in the thirty or so years since Penrose mooted his idea, no experimental evidence has emerged to support it despite various efforts, and a recent paper pours cold water on one of the suggested possible quantum mechanisms, see At the crossroad of the search for spontaneous radiation and the Orch OR consciousness theory.

 

Penrose also uses a fallacious argument that involves creating a straw man; he argues that there can be no universal infallible method for creating a proof of a mathematical proposition. But no-one is claiming that there is! What Penrose does is conflate the notion of having a universal infallible method for proving a mathematical proposition with the simple fact that most proofs are obtained by the use of a fallible trial and error method - a method that is in principle reducible to an algorithm. The fact that this method may not be able to provide a proof of every possible mathematical statement is completely and utterly irrelevant. After all, there is no reason to suppose that human insight or intuition can provide a proof of every possible mathematical statement either. Daniel Dennett has written a perspicacious article (Footnote: Daniel C. Dennett, Murmurs in the cathedral: Review of R Penrose, The Emperor’s New Mind, Times Literary Supplement, 29 September 1989.) on Penrose’s book The Emperor’s New Mind, and is not fooled by Penrose’s fallacious posturing. He points out that if we compare Penrose’s argument regarding the discovery of correct mathematical proofs to the notion of achieving checkmate in a chess game, then we end up with this argument:

  • X is superbly capable of achieving checkmate.
  • There is no (practical) algorithm guaranteed to achieve checkmate,
    therefore
  • X does not owe its power to achieve checkmate to an algorithm.

 

The point Dennett makes is that Penrose makes an elementary error in logical deduction - simply because mathematicians sometimes produce correct mathematical proofs, that does not prove that they are not using some sort of algorithm to achieve those results. The evidence that does exist indicates that mathematicians actually use a trial and error method. Many mathematicians will attest that they pursued several false trails before finding the correct one that leads to a proof. Furthermore, there have been many cases of proofs that were published, but later found to be incorrect. (Footnote: For example, a proof of the four color map theorem was published by the prominent mathematician Alfred Kempe in 1879, and it received widespread acclaim. Eleven years later, an error was found in the proof. Another proof of the four color map theorem was published by Peter Guthrie Tait in 1880. Again, it was eleven years before anyone discovered that there was an error in the proof. See also the web-page Mathoverflow: Widely accepted mathematical results that were later shown to be wrong, which details numerous instances of proofs later discovered to be erroneous.)
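It is worth noting that a fallible trial and error method is itself perfectly algorithmic. The sketch below (again my own toy illustration, using the same formula encoding as the earlier sketch) simply keeps applying a rule to everything it knows so far, up to a fixed effort limit: it is not guaranteed to reach a given goal, but whenever it does, the result is correct - just as a chess player can achieve checkmate without possessing a guaranteed method for doing so.

    # Trial-and-error proof search: keep applying modus ponens to everything
    # known so far, up to a fixed effort limit.  Nothing guarantees that the
    # goal will be reached within the limit, but anything derived is correct.

    def search_for_proof(premises, goal, max_rounds=10):
        known = set(premises)
        for _ in range(max_rounds):
            new_facts = set()
            for line in known:
                if isinstance(line, tuple) and len(line) == 3 and line[1] == "->":
                    antecedent, _, consequent = line
                    if antecedent in known and consequent not in known:
                        new_facts.add(consequent)
            if goal in known or goal in new_facts:
                return True       # stumbled on a derivation of the goal
            if not new_facts:
                return False      # nothing new to try: give up
            known |= new_facts
        return False              # effort limit reached: no guarantee given

    premises = {"A", ("A", "->", "B"), ("B", "->", "C")}
    print(search_for_proof(premises, "C"))   # True  - found by blind search
    print(search_for_proof(premises, "D"))   # False - the search is fallible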

 

Penrose also invokes consciousness in an attempt to shore up his arguments, stating:

I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must “see” the truth of a mathematical argument to be convinced of its validity. This “seeing” is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only “see” it, but by so doing we reveal the very non-algorithmic nature of the “seeing” process itself. (Footnote: From the book, The Emperor’s New Mind, Oxford University Press, (1989) ISBN: 0198519737 The Emperor’s New Mind: Details )

 

But what is consciousness? Part of the problem when trying to counter claims that AI can never have consciousness is that the claimants never define clearly and precisely what they actually mean by the term consciousness. But it seems to me that consciousness is simply the result of a person having in the brain a model of himself and his environment. Having such a model enables the person to make predictions of what is likely to happen in different scenarios. This applies also to other entities in the environment, such as other humans, so that the person can make predictions as to what another person is likely to do in a certain set of circumstances. It’s easy to see why such a model system might have evolved: the better a person can predict what might happen in various different scenarios, the better he can choose which action to take next.
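To make that idea slightly more concrete - and this is purely an illustration of the principle, with states, actions and scoring invented for the sketch, not a claim about how brains are implemented - here is a minimal Python agent that carries a crude model of itself and its environment, uses that model to predict the outcome of each candidate action, and chooses the action whose predicted outcome it values most.

    # A minimal "self model" agent: it holds a crude model of its own state and
    # its environment, simulates each candidate action inside that model, and
    # picks the action whose predicted outcome it values most.  The states,
    # actions and scoring here are invented purely for illustration.

    def predict(state, action):
        # The agent's internal model: a guess at the next state, not reality.
        new_state = dict(state)
        if action == "eat":
            new_state["hunger"] -= 5
            new_state["food"] -= 1
        elif action == "forage":
            new_state["food"] += 3
            new_state["hunger"] += 1
        elif action == "rest":
            new_state["hunger"] += 1
        return new_state

    def desirability(state):
        # How much the agent likes a predicted state: low hunger, food in reserve.
        return -state["hunger"] + 2 * state["food"]

    def choose_action(state, actions):
        # Run each candidate action through the internal model, keep the best.
        return max(actions, key=lambda a: desirability(predict(state, a)))

    current = {"hunger": 6, "food": 1}
    print(choose_action(current, ["eat", "forage", "rest"]))   # "forage"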

 

At the moment, we do not have AI systems that have complex models of themselves and their environment. But in principle there is no reason to suppose that such AI systems are impossible. It is easy to imagine AI systems that have very simple models of themselves and their environment. Given that is the case, why should we then imagine that there is some limitation on how complex such models might become? Such imagined limitations are mere wishful thinking, and they go against all the lessons of the past.


 


 

 


 
