
Sunday, 25 January 2015

Terminator, Westworld, Ex Machina: The evils of A.I.

The "Skynet scenario" was the background story to the popular sci-fi Terminator film series in which a fully autonomous artificial intelligence (AI) system used by the US military ended up launching a global thermonuclear war to eliminate humanity and take over the world. 
"It saw all humans as a threat; not just the ones on the other side," a character from the series helpfully explained. "It decided our fate in a microsecond: extermination."
Early this month, a group of prominent scientists, entrepreneurs and investors in the field of artificial intelligence, including physicist Stephen Hawking, billionaire businessman Elon Musk and Frank Wilczek, a Nobel laureate in physics, published an open letter warning of the potential dangers of AI while acknowledging its benefits.
Given the calibre of the people involved, the letter has generated extensive media coverage, and even lively debate among the cognoscenti. Much of the debate naturally focuses on the more sensational warnings contained in the letter and in select passages from the position paper that accompanied it.
Can AI systems become an existential threat to the human race? Responses range over a whole spectrum. Some pundits follow the position of the distinguished American philosopher John Searle, who has questioned whether the strong version of AI - that is, machines that can learn from experience and model behaviour like a human brain, except with incomparably greater computing powers and therefore cognitive abilities - is even possible at all.
Others, like the science writer Andrew McAfee, argue that strong AI is possible, but that we are still too far from it for it to be a serious worry.
But their arguments are rather abstract and remote.
So it's the first half of the position paper that is most relevant to people today, and also the most interesting. It asks pertinent questions and discusses plausible but troubling scenarios that already confront us.
It's here that the group raises moral, legal and technical questions related to current and soon-to-be-available "smart" systems. These are semi-autonomous and quasi-intelligent machines or systems that are already around us: driverless cars, drones, computerised trading systems for stocks, bonds and currencies, voice and face recognition software, automatic language translators, surgical robots and automated medical diagnoses. The list is already endless.
The position paper provides useful programming rules and guidelines for AI or semi-AI systems:
  1. Verification: how to prove that a system satisfies the desired formal properties. ("Did I build the system right?")
  2. Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviour and consequences. ("Did I build the right system?")
  3. Security: how to prevent intentional manipulation by unauthorised parties.
  4. Control: how to enable meaningful human control over an AI system after it begins to operate. ("OK, I built the system wrong, can I fix it?")
These nifty criteria are not just good for AI developers, but more importantly, for their users, consumers, regulators and informed citizens, to determine the robustness and benefits of semi-AI systems that are already with us.
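The contrast between the first two criteria is easiest to see in code. Below is a minimal sketch of how the four criteria might map onto a toy speed controller for a driverless car; every name here (SpeedController, MAX_SPEED, emergency_stop, the 50 km/h limit) is invented for illustration and is not taken from the position paper.

```python
# A toy sketch of how the four criteria might map onto code for a
# semi-autonomous system. All names are hypothetical.

class SpeedController:
    """Hypothetical controller for a driverless car's speed."""
    MAX_SPEED = 50.0  # formal property: speed must never exceed this

    def __init__(self, operator_token):
        self._token = operator_token  # 3. Security: only authorised parties may command it
        self.speed = 0.0
        self.halted = False           # 4. Control: human override flag

    def command(self, token, target):
        if token != self._token:      # 3. Security: reject unauthorised manipulation
            raise PermissionError("unauthorised command")
        if self.halted:               # 4. Control: a human stop overrides everything
            return self.speed
        # 1. Verification: clamp so the formal property always holds
        self.speed = max(0.0, min(target, self.MAX_SPEED))
        assert self.speed <= self.MAX_SPEED  # "Did I build the system right?"
        return self.speed

    def emergency_stop(self):
        # 4. Control: meaningful human control after the system begins to operate
        self.halted = True
        self.speed = 0.0

# 2. Validity: a system can satisfy its formal spec and still do the wrong
# thing. This controller never exceeds MAX_SPEED, but whether 50 is the
# *right* limit in a school zone is a validity question that no internal
# assertion can answer. ("Did I build the right system?")
ctrl = SpeedController(operator_token="secret")
ctrl.command("secret", 120.0)   # clamped to 50.0: the spec is satisfied
ctrl.emergency_stop()
ctrl.command("secret", 30.0)    # ignored: the human stop stands
```

The point of separating the four concerns is that each fails differently: the clamp can be proved correct, the validity question cannot, and the override must keep working even when the rest does not.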
Tired of labour disputes and unrest on the mainland, for example, Taiwan-based electronics giant Foxconn is investing billions to build fully automated factories. Procter & Gamble, former owner of the Pringles potato chip brand, got rid of its human workforce years ago, using supercomputers to control the insertion of the chips into their container tubes to make sure they stack up and don't crack during production.
"When and in what order should we expect various jobs to become automated?" the position paper asks, warning full automation will mean high wages for those versed in the technical complexity and unemployment for most. "There is a difference between leisure and unemployment," it says.
The US has a temporary ban on research on autonomous weapon systems that require minimal or no human supervision. But it could be argued that smart weapons, free of human biases and emotions, could be more "moral" in their ability to make clean strikes that minimise civilian casualties.
Four states in the US are ready to authorise driverless cars on the road. But if one hurts or kills a person, who should be responsible, and how would their insurance policies work?
Sensors are being developed to help cars avoid hitting other cars and people. But what if a driverless car is caught in a situation where, to avoid hitting a young family of four, it has to run over an elderly couple? This calls not just for artificial intelligence, but for artificial moral intelligence.

Debate over artificial intelligence raises moral, legal and technical questions Alex Lo 25 January, 2015

One of the more interesting television shows coming later in 2015 is the HBO series remake of Jurassic Park author Michael Crichton’s 1973 sci-fi movie Westworld. The series is being developed by Jonathan Nolan (Person of Interest) and his wife Lisa Joy (Pushing Daisies) and produced by J.J. Abrams and Bryan Burk.
Ostensibly about a theme park where the attractions turn on the tourists, the original Westworld was a place where adults could act out their weirdest fantasies with the help of artificial-intelligence-powered robots. Nolan and Joy’s take on the material will be far more in-depth, something they promise will be the most ambitious, subversive, f***ed-up television series.
The original film fell into a fairly straightforward three-act structure. When asked how they intended to expand the premise and universe into a weekly series, Nolan expanded on the idea of a massive world behind Crichton’s original narrative.
"He knew so much about the technologies that were about to emerge, spent so much time thinking about how they would actually work. Consider the fact that the original film was written prior to the existence of even the first video game. Think about massive multiplayer role-playing games, and the complexity and richness of video game storytelling. When he wrote 'Westworld,' none of that existed! So it's a film that anticipated so many advances in technology. The film has a structure that barrels forward – there's this unstoppable android hellbent on vengeance – and it preceded 'The Terminator' by 10 years."
Nolan: A.I. [Artificial Intelligence] is a topic that Lisa and I are both fascinated by. And the thing about science fiction is that it's past the golden age. The great [talents] have already taken a crack at a lot of this. But it's still very pleasurable to take a swing at some of the bigger ideas.
Joy: I think the other thing that’s fascinating about doing this now is, in a short amount of time since ‘Blade Runner’ came out, the kind of science that we’re talking about has become closer to “science” than it is to the “fiction” part of “science-fiction.” I think we’re standing at an interesting precipice from which to both view the future and to hypothesize about the future. I think that all of that new information will help add new dimensions to this world.

‘Westworld’ TV Show Details: The Evils of A.I. and Brilliance of Michael Crichton 26 Jan 2015


In his 1950 paper "Computing Machinery and Intelligence", Alan Turing devised the thought experiment which he called "the imitation game" but which has come to be called the Turing test – a simple but surprisingly robust way to compare human intelligence with that of machines. Essentially, if a person cannot tell whether he is communicating with another human or with a machine, then the machine has passed the threshold for what we now call artificial intelligence.
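The mechanics of the imitation game are simple enough to sketch in a few lines. In the toy below (all function names are invented for illustration), the judge sees only text, so the comparison is between transcripts, not minds: a machine whose replies are indistinguishable from a person's passes, whatever is going on inside it.

```python
# Toy illustration of Turing's imitation game: the judge reads only
# transcripts, so a machine passes if its replies cannot be told apart
# from a human's. All names here are hypothetical.

def human_respondent(prompt):
    return "I'd say " + prompt.lower() + ", I suppose."

def machine_respondent(prompt):
    # The machine need not think; it only needs replies the judge
    # cannot distinguish from a person's.
    return "I'd say " + prompt.lower() + ", I suppose."

def transcript(respondent, prompts):
    """What the judge actually sees: a list of textual replies."""
    return [respondent(p) for p in prompts]

prompts = ["Do you enjoy poetry?", "What is it like to be you?"]
# Identical transcripts: no judge reading only the text can reliably
# tell which respondent is the machine.
assert transcript(human_respondent, prompts) == transcript(machine_respondent, prompts)
```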
Note that the machine does not actually have to have human intelligence; it only has to be able to deceive the tester into thinking it does. This deception is at the heart of Alex Garland's directorial debut: a tricksy sci-fi chamber piece in which the alpha-genius creator of the world's most powerful internet search engine (Oscar Isaac) lures a diffident young computer coder to his underground research facility and introduces him to his latest creation: Ava, a super-smart, sexy android played with other-worldly grace by Alicia Vikander. The coder, Caleb (Domhnall Gleeson), is supposedly there to perform the Turing test on Ava. But who is trying to fool whom, and which is the more dangerous: Frankenstein or his monster?
Garland's script is as genre-literate as it is scientifically literate. It is especially deft in its handling of what Blade Runner fans might call "the Deckard problem": the doubts about his own humanity that a film's protagonist might feel after spending so much time in the company of ultra-realistic androids. Garland, who previously wrote, among other things, Danny Boyle's films 28 Days Later and Sunshine, has clearly learnt a thing or two from Boyle about how to give a low- to medium-budget film a high-gloss finish. Ex Machina is set almost wholly within the clean lines of the sort of hi-tech compound you'd get if Frank Lloyd Wright designed The Andromeda Strain.
In other words, it looks as smart as it sounds. So it might only be a pulpy sci-fi thriller, or it might be a rich, deeply considered meditation on technology, sexuality and human nature. See if you can tell which.

Ex Machina, film review: Intelligent life found in this tricksy, modern sci-fi LAURENCE PHELAN 24 January 2015

With Ex Machina, we get a more complex picture of our android future packaged in a wickedly smart and funny film that sends up our technology gods at the same time.
Oscar Isaac is gleefully arrogant as Nathan, the nightmare interpretation of a Steve Jobs/Elon Musk/Eric Schmidt character with more than a touch of a superiority complex. He mixes proclamations of his own godhood with heartfelt frat-boy utterances of “Dude ...” to the utterly bewildered and starstruck employee Caleb, played by Domhnall Gleeson, who believes he’s won a once-in-a-lifetime company competition to hang out with the reclusive founder for a week.
Nathan created the top search engine in the world, Bluebook, but lives in an isolated, idyllic and highly secure mountain home, where every door has to be accessed with a keycard and half the facility is underground. When Caleb shows up, Nathan is out the back beating a punch-bag. His request that Caleb skip the part of the visit where he’s nervous and awestruck is rather undermined by the fact that he didn’t come to the door to meet him or guide him through the odd security process.
It quickly becomes apparent that Caleb isn’t there just to hang out and drink beer – although there’ll be plenty of that going on with the almost perpetually drinking Nathan – he’s there to conduct the ultimate Turing Test on an android artificial intelligence dubbed Ava. And to keep track of it all, Nathan will be filming their sessions – along with pretty much everything else that goes on in the house.
The underground lair, doors that don’t yield to Caleb’s card and the constant monitoring plunge the whole experience into claustrophobia and paranoia from the start. Caleb is no fool and quickly begins to suspect the motives and methods of Nathan, who gets steadily stranger as the experiment progresses. And, of course, there is Ava, the pretty and perplexing robot that seems to need Caleb in ways he wasn’t expecting.

Ex Machina – a smart, suspenseful satire of our technology gods 25 Jan 2015
