23rd Feb 2015
Is AI (artificial intelligence) becoming so advanced that we will soon have machines that surpass human intelligence? And might this spell the end of human beings as the dominant intelligence on this planet? Some people, such as Stephen Hawking, Elon Musk and Bill Gates, have warned of the possible dangers of AI; see, for example, the Open Letter from The Future of Life Institute, its AI research priorities document, and BBC: Stephen Hawking warns artificial intelligence could end mankind.
Yes, it may happen, but is it really so regrettable from a long-term viewpoint? We are all going to die sooner or later, so if we regret that advanced machines may take over from humans in the future, we are really regretting that the future will not be the way we wanted it, or the way we always assumed it would be. But we don’t really know how the future is going to turn out anyway. Perhaps if machines take over, the earth will not end up with an overpopulated humanity using up its finite resources - a situation that would likely bring a massive increase in wars, disease and starvation. In that case, in my opinion, a scenario where machines have taken over and use the earth’s resources sensibly would be preferable.
In the short term, a more likely scenario is perhaps one where a human’s intelligence and memory are boosted by an implanted artificial intelligence booster module. And that raises the question: who will be able to afford these devices initially? Only the very rich. And once they have them, surely they will do everything they can to prevent everyone else from having them. We will then truly have a two-tiered humanity, where the rich, as well as being rich, are also far more intelligent - and as a result probably much better at keeping themselves far richer than the common masses.
If that happens, it won’t be very pleasant being a non-rich person, and over time the rich will eliminate the non-rich, except where they remain useful as servants - that is, where for certain activities employing them turns out to be cheaper than using a machine.
Is this a good thing, or a bad thing? There really isn’t a single answer to this - if you are rich enough to afford the latest AI booster implant module, you will think it’s great - and if you can’t, you will think that the whole thing stinks. But it’s hard to say whether or not the end result would be an earth that would be a better place for life in general.
An interesting aspect of the possibility of advanced AI is what goals and objectives such an AI will have. Humans, through years of evolution, have built-in desires and objectives, such as to survive and reproduce. With an AI machine, however, the objectives will depend on the design of the machine. In the recent film Ex Machina, the robots had an overwhelming desire to escape from their surroundings. There was no explanation in the film as to why this was the case - perhaps the designer instilled a strong urge to explore and learn as much as possible. In the film, the designer’s solution to this problem was to have the robots contained within a physically strong building equipped with a high-tech security system. That seems rather implausible, since a designer capable of creating such robots would surely have been able to build them with an instilled desire to remain in the surroundings they were created in, while retaining a desire to obtain as much knowledge as possible without leaving their environment.
However, instilling in an AI machine a desire to follow a more nebulous notion, such as benefiting the overall interests of mankind, would be a completely different matter. It will probably be the case that the first major problems with advanced AI will result from a human designer creating an AI whose goals benefit the designer, but which are to the detriment of the overall interests of mankind. The difficulty in trying to prevent unwanted AI behavior lies in defining which goals of an AI would be desirable to humankind, and which would be undesirable. As AI becomes more complex, humans will lack the capacity to decide between the two - and it seems inevitable that some sort of ‘survival of the fittest’, rather than human legislation, will eventually decide the outcome of advanced AI.