"Charlie" is an ape-like robotic system that walks on four limbs, demonstrated here in March 2014 in Hanover, Germany. The robot could conceivably be used in the kind of rough terrain found on the moon, or it could be a stepping stone toward humanity's destruction.
We’re decades away from being able to develop a sociopathic supercomputer that could enslave mankind, but artificial intelligence experts are already working to stave off the worst when — not if — machines become smarter than people.
The letter comes after experts have issued warnings about the dangers of super-intelligent machines. Ethicists, for example, worry how a self-driving car might weigh the lives of cyclists versus passengers as it swerves to avoid a collision. Two years ago, a United Nations representative called for a moratorium on the testing, production and use of so-called autonomous weapons that can select targets and begin attacks without human intervention.
Famed physicist Stephen Hawking and Tesla Motors CEO Elon Musk have also voiced their concerns about allowing artificial intelligence to run amok. "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking said in an article he co-wrote in May for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
In August, Musk tweeted: "We need to be super careful with AI. Potentially more dangerous than nukes."
"I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish," he told an audience at the Massachusetts Institute of Technology in October.
The Future of Life Institute is a volunteer-run research organization whose primary goal is mitigating the potential risks of human-level artificial intelligence, which could then advance exponentially beyond human control. It was founded by mathematicians and computer science experts from around the world, chiefly Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
The long-term plan is to stop treating fictional dystopias as pure fantasy and to start seriously addressing the possibility that an intelligence greater than our own could one day act against its programming.