We’ve all seen the movies where the machines take over: Terminator, The Matrix, 2001: A Space Odyssey. There’s even the 1998 B-movie classic ‘Dream House’, where a smart home goes rogue with (unintentionally) hilarious consequences.
But of course we all know that AI is really here to help humanity. As our twenty-first-century machine learning systems munch away on all that data from our IoT sensors, they will soon anticipate our next thought, serving up answers and actions before it has even occurred to us.
The machines taking over is all pure fantasy. Right?
Well, in the last few months, two distinguished and highly respected men have spoken out about the potential threat of AI to the human race.
Stephen Hawking told the BBC:
“The development of full artificial intelligence could spell the end of the human race… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
While Elon Musk warned:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.”
When you look around at the mildly terrifying military robots starting to emerge, the sudden proliferation of drones and the impending self-driving car revolution, it’s perhaps no longer a huge stretch of the imagination to fast-forward to a world where humans are no longer at the top of the food chain.
Both Hawking and Musk have signed this open letter highlighting their belief that AI’s impact on society is likely to grow, and with it the need to ensure it serves humanity, not the machines.
So is it time to introduce regulatory oversight of AI systems to safeguard us all? Could Skynet really be a thing, or are you just happy that your lights come on automatically when you get home? Let us know what you think in the comments below.
More Reading: Research priorities for robust and beneficial artificial intelligence
This is also worth a read: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
This concern is nothing new. Consider Isaac Asimov’s “Three Laws of Robotics”:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Even Asimov’s robots eventually found the flaws and loopholes in this logic.
Asimov himself eventually added a Zeroth Law that precedes the above:
0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Asimov’s work even includes examples where the laws themselves are the cause of harm to humans.
Jon – thanks for posting that link … an interesting attempt at bringing together what’s going on … I suspect maths should have been brought in too, because maybe new ways of doing the sums could mean much bigger gains in capability than just faster computers etc … rather as when fractals were discovered not so long ago, only more so … Chris