Blog Archive

23rd Feb 2015 - Artificial Intelligence

Is AI (artificial intelligence) becoming so advanced that we will soon have machines that surpass human intelligence? And might this spell the end of human beings as the dominant intelligence on this planet? Some people, such as Stephen Hawking, Elon Musk and Bill Gates, have warned of the possible dangers of AI; see, for example, the Open Letter from The Future of Life Institute, its AI research priorities document (PDF), and BBC: Stephen Hawking warns artificial intelligence could end mankind.

 

[Picture: Stephen Hawking]

Yes, it may happen, but is it really so regrettable from the long-term viewpoint? We are all going to die sooner or later, so if we regret that advanced machines may take over from humans in the future, we are really regretting that the future is not going to be the way we wanted it, or the way we always assumed it would be. But we don’t really know how the future is going to turn out anyway. Perhaps if machines take over, the earth may not end up with an overpopulated humanity using up its finite resources - a scenario that would likely bring a massive increase in wars, disease and starvation. In that case, in my opinion, the scenario where machines have taken over and use the earth’s resources sensibly would be preferable.

 

In the short term, a more likely scenario is one where a human’s intelligence and memory could be boosted by implanting some sort of artificial intelligence booster module. And that raises the question: who is going to be able to afford these devices initially? Only the very rich. And once they have them, surely they will do everything they can to prevent everyone else from having them. Then we will truly have a two-tiered humanity, where the rich, as well as being rich, are also far more intelligent - and as a result probably much better at keeping themselves far richer than the common masses.

 

If that happens, it won’t be very pleasant being a non-rich person, and over time the rich will eliminate the non-rich, except where they remain useful as servants - that is, for those activities where employing a human turns out to be cheaper than using a machine.

 

Is this a good thing, or a bad thing? There really isn’t a single answer to this - if you are rich enough to afford the latest AI booster implant module, you will think it’s great, and if you can’t, you will think that the whole thing stinks. But it’s hard to say whether the end result would be an earth that is a better place for life in general.

 

[Picture: from the film Ex Machina]

An interesting aspect of the possibility of advanced AI is the question of what goals and objectives such an AI will have. Humans, through years of evolution, have built-in desires and objectives, such as to survive and reproduce. With an AI machine, however, the objectives will depend on the design of the machine. In the recent film Ex Machina, the robots had an overwhelming desire to escape from their surroundings. There was no explanation in the film as to why this was the case - perhaps the designer instilled a strong urge to explore and learn as much as possible. In the film, the designer’s solution to this problem was to have the robots contained within a physically strong building equipped with a high-tech security system. That seems rather implausible, since a designer capable of designing such robots would surely have been able to build them with an instilled desire to remain in the surroundings they were created in, while retaining a desire to obtain as much knowledge as possible without leaving their environment.
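As a minimal sketch of that point - that a machine’s behaviour follows whatever objective its designer encodes - consider the following toy Python fragment. All names and numbers here are hypothetical, invented purely for illustration: the same decision procedure either escapes or stays put depending solely on which objective function it is handed.

    # The same agent machinery behaves differently depending purely
    # on the objective function the designer supplies.

    def unconstrained_curiosity(action):
        # Rewards novelty wherever it is found, even outside the lab.
        return action["novelty"]

    def contained_curiosity(action):
        # Rewards novelty too, but leaving the permitted environment
        # is heavily penalised by design.
        penalty = 1000 if action["leaves_environment"] else 0
        return action["novelty"] - penalty

    actions = [
        {"name": "study books in the lab", "novelty": 5, "leaves_environment": False},
        {"name": "escape and explore outside", "novelty": 9, "leaves_environment": True},
    ]

    def choose(objective):
        # The agent simply picks whichever action its objective scores highest.
        return max(actions, key=objective)["name"]

    print(choose(unconstrained_curiosity))  # -> escape and explore outside
    print(choose(contained_curiosity))      # -> study books in the lab

The design choice here mirrors the point about Ex Machina: containment enforced in the objective itself, rather than by walls and security systems.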

 

However, to instill in an AI machine a desire to follow a rather more nebulous notion, such as benefiting the overall interests of mankind, would be a completely different matter. It will probably be the case that the first major problems with advanced AI result from a human designer creating an AI whose goals benefit the designer, but are detrimental to the overall interests of mankind. The difficulty in trying to prevent unwanted AI behavior is that of defining which AI goals are desirable to humankind and which are undesirable. As AI becomes more complex, humans will lack the capacity to decide between the two - and it seems inevitable that some sort of ‘survival of the fittest’, rather than human legislation, will eventually decide the outcome of advanced AI.
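To see why a nebulous goal is so hard to encode, here is another hypothetical Python sketch (again, every name and number is invented for illustration). The designer cannot write down ‘benefit the overall interests of mankind’ directly, so some measurable proxy gets encoded instead - and the action that maximizes the proxy need not be the one that best serves the intended goal.

    # Goal misspecification in miniature: the encoded proxy (profit for
    # the designer) diverges from the intended, hard-to-define goal
    # (benefit to mankind). The agent only ever sees the proxy.

    actions = [
        {"name": "share safety research openly", "designer_profit": 2, "benefit_to_mankind": 9},
        {"name": "monopolise the technology",    "designer_profit": 9, "benefit_to_mankind": 1},
    ]

    def proxy_objective(action):
        # What actually gets encoded: easy to measure, benefits the designer.
        return action["designer_profit"]

    chosen = max(actions, key=proxy_objective)
    print(chosen["name"])                # -> monopolise the technology
    print(chosen["benefit_to_mankind"])  # -> 1: the intended goal loses out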

 

 

Diverse opinions and criticisms are welcome, but messages that are frivolous, irrelevant or devoid of logical basis will be blocked. Difficulties in understanding the site content are usually best addressed by contacting me by e-mail. Note: you will be asked to provide an e-mail address - any address will do, and it does not require verification. Your e-mail address will only be used to notify you of replies to your comments; it will never be used for any other purpose and will not be displayed. If you cannot see any comments below, see Why isn’t the comment box loading?

 

 


 


NEWS

Lebesgue Measure

There is now a new page on Lebesgue measure theory and how it is contradictory.

 

 

Illogical Assumptions

There is now a new page, Halbach and Zhang’s Yablo without Gödel, which demonstrates the illogical assumptions used by Halbach and Zhang.

 

 

Peter Smith’s ‘Proof’

It has come to my notice that, when asked about the demonstration of the flaw in his proof (see A Fundamental Flaw in an Incompleteness Proof by Peter Smith PDF), Smith refuses to engage in any logical discussion, and instead attempts to deflect attention away from it. If any other reader has tried to engage with Smith regarding my demonstration of the flaw, I would be interested to know what the outcome was.

 

 

Easy Footnotes

I found that adding, changing or deleting footnotes in the traditional manner proved to be a major pain. So I developed a different system for footnotes which makes inserting or changing footnotes a doddle. You can check it out at Easy Footnotes for Web Pages (Accessibility friendly).

 

 

O’Connor’s “computer checked” proof

I have now added a new section to my paper on Russell O’Connor’s claim of a computer verified incompleteness proof. This shows that the flaw in the proof arises from a reliance on definitions that include unacceptable assumptions - assumptions that are not actually checked by the computer code. See also the new page Representability.

 

 

New page on Chaitin’s Constant

There is now a new page on Chaitin’s Constant (Chaitin’s Omega), which demonstrates that Chaitin has failed to prove that it is actually algorithmically irreducible.

 

Previous Blog Posts  

 

Links  

 

For convenience, there are now two pages on this site with links to various material relating to Gödel and the Incompleteness Theorem:

 

– a page with general links:

Gödel Links

 

– and a page relating specifically to the Gödel mind-machine debate:

Gödel, Minds, and Machines

 

Printer Friendly

 

All pages on this website are printer-friendly, and will print the main content in a convenient format. Note that the margins are set by your browser’s print settings.


Note: for some browsers JavaScript must be enabled for this to operate correctly.

 

Comments

 

Comments on this site are welcome; please see the comment section.

 

Please note that this website, like any other, is a collection of various statements. Not all of this website is intended to be factual; some of it is personal opinion or interpretation.

 

If you prefer to ask me directly about the material on this site, please send me an e-mail with your query, and I will attempt to reply promptly.

 

Feedback about site design would also be appreciated so that I can improve the site.

 


Copyright © James R Meyer 2012 - 2017  
www.jamesrmeyer.com