Sunday, November 6, 2016
WESTWORLD The Future of AI
Westworld is being talked about as the new Lost, because its plot is tangled, hard to follow, and probably full of traps and red herrings.
Westworld is based on a story by Michael Crichton, produced by J.J. Abrams, and developed by Jonathan Nolan, Christopher's brother and the screenwriter of almost all of his films.
In the first five episodes (out of ten) of its first season, Westworld has left a series of questions open while giving very few answers: critics who have already seen the whole series advise holding on, though, because some answers will come in the second half of the season. For now we are at the point where most of the cards have been thrown on the table, leaving everyone free to try to arrange them in whatever order they prefer.
What has happened so far
The story is set in a not-too-distant future, in an unspecified place. There is an immense theme park that looks like the American West (the one from Western movies), populated by robots with human features. The park is visited by real men and women, who pay 40,000 dollars a day to "play the West," interacting with the robots, and often abusing them; besides being physically indistinguishable from humans, the robots are also psychologically quite complex. The rules are simple: the robots (or androids) look human, and their actions are determined by a series of narrative lines devised by the park's "writers." Guests can choose whether to take part in these storylines, for example by joining the hunt for a dangerous bandit, or to roam freely around the park. Guests can kill androids; androids cannot kill guests. The robots, moreover, run in a kind of loop: every morning they wake up, do what they are supposed to do, and even if they are raped, beaten, or killed, the next day they are repaired, reassembled, and reprogrammed to start over, with no memory of what happened to them the day before.
Monitoring and managing everything from outside are a number of technicians, scientists, and experts of various kinds, who supervise what happens, repair and maintain the androids, and decide, in broad strokes, what will happen and when. Very little is known about the world outside the park, though: we see the place from which the park is run and some of the people who run it, but we know almost nothing else about the real world beyond it.
The problems begin as early as the first episode: some androids seem to be becoming too intelligent, partially conscious, and, in general, prone to thoughts and actions that appear to go beyond what they were programmed for.
The first episode also lays out the rules of the park and the show's main characters: there is Dr. Robert Ford (played by Anthony Hopkins), the elderly, enigmatic founding director of the park, which he created more than thirty years before the series begins. Then, still among the people outside the park, there are Bernard Lowe (the man in charge of programming the robots, of how they "think" and function) and Theresa Cullen, head of the park's security department. Lowe and Cullen are having an affair, and Ford seems to be the only one who knows.
During the first episode we learn that Ford has introduced "reveries" into the robots: a system that allows them to remember certain things about their days. The upside: they behave even more realistically. The downside: if they started remembering that they are just puppet robots that do the same things over and over and are continually beaten, killed, and raped, they might no longer be so controllable.
The main characters inside the park (here too there are a great many, and a summary is needed) are Dolores Abernathy, a robot played by Evan Rachel Wood; Maeve Millay, a robot prostitute who runs her own brothel, played by Thandie Newton; William, a visitor to the park; and a nameless man always dressed in black, played by Ed Harris.
Dolores is the "oldest" android in the park, and over the years she has been updated countless times. She is one of the robots that begin to have thoughts more complex than the others': the key scene comes at the end of the first episode, when she kills a fly that lands on her neck, something a "normal" android would never do. Maeve is another robot who, perhaps even more than Dolores, understands that there is something strange about her life and starts asking herself questions, even finding some answers (especially in the fifth episode). William is a reluctant visitor who, unlike many others, does not seem interested in the opportunity the park offers to do whatever one wants (again: mostly sex and killing). The Man in Black is the character from whom the biggest surprises may come: he is a visitor to the park, extremely violent but also very intelligent. He is not in the park to play gunslinger: he is interested in finding a "deeper level," a greater meaning in everything the park represents.
And now, the theories
Little more than a month after the first episode, theories of every kind already abound, some more far-fetched than others. A few will turn out to be true; many more will prove wrong. Certainly, some are very interesting: a few concern "only" the characters and what they will do, while others touch on ethics, philosophy, sociology, the principles of robotics, and concepts of artificial intelligence.
– Bernard is a robot: created by Ford, or more likely by Arnold. Supporting this theory is the fact that Bernard Lowe is an anagram of Arnold Weber. But there is a problem: Arnold's last name is not yet known. In the fourth episode, though, Ford, talking about Bernard with Theresa, tells her to be careful, with an allusion that could be read as "be careful, he's a robot."
– Dolores and Arnold: speaking of anagrams, Dolores Abernathy is an anagram of "Arnold Base Theory," in case you were interested. We know that Dolores spoke with Arnold on the day he died, and some say it may have been she who killed Arnold, on Arnold's own instructions.
– William is the Man in Black: this one is complicated. For it to hold, the stories we see in the series would have to belong to two different periods: one set in the present, one in the past (the robots look the same either way; it's not as if they age). The idea is that William is the young version of the Man in Black. Jonathan Nolan wrote the screenplays of films like Memento and Interstellar: we can deduce that he likes playing with time and giving us massive headaches as we try to keep up.
– Arnold is alive: we know that Dolores talks to him, more or less. It is also possible that Arnold is alive, more or less. Perhaps in some strange form, inside a robot. Even if he isn't, it is by now clear that he left things and clues in the park, and lines of code and programming in the robots, to make things happen. What things, exactly, remains to be seen.
– Where is the park?: for some, underwater, though there is no strong evidence for it. For others, it would be on another planet: just to make the series interplanetary as well as Western and science fiction. Nolan, speaking to Entertainment Weekly, said that if you pay close enough attention, you can figure out where the park is by the end of the first season.
The show forces viewers to engage with questions that have ramifications in reality: What does it mean to empathize with a robot? What makes a robot seem human? These are big questions, but ones that both artificial intelligence engineers and civilians will soon need to broach.
It’s impossible to have a serious discussion about androids without talking about Hanson Robotics, the company that right now seems closest to bringing Westworld-style robots into the real world. Founded by David Hanson in 2003, the Hong Kong–based company is best known for Sophia, who recently sat through an interview with Charlie Rose, and Albert Einstein HUBO, the first walking robot with realistic expressions (and a borrowed face). Those responsive bots, built using patented robotic systems and synthetic skin technologies, were programmed with software Hanson Robotics hopes to commercialize, populating businesses with robots ready to, as the company literature puts it, “develop deep and meaningful relationships with humans.”
Stephan Bugaj, Hanson’s Vice President of Creative, is in charge of personality design. Formerly of Pixar and Telltale Games, where he helped develop the 2014 Game of Thrones video game, Bugaj is an expert on both character and gameplay dynamics. He watches Westworld, and is also on the edge of his seat — but mostly because it forecasts the future of his own work. He says that two systems play a big role inside Dolores Abernathy’s synthetic skull. Both are in their infancy in the real world, but in tandem they could make Hanson robots a lot more relatable. The first is what’s called generative, or self-modifying code, and the second is memory.
In episode four, Dr. Robert Ford at last tells Bernard Lowe how their robots got so clever. He shows Lowe the pyramid of consciousness: Memory, Improvisation, Self-Interest, and a big question mark. He and the just-revealed, mysterious Arnold “built a version of that cognition in which the hosts heard their programming as an inner monologue, with the hopes that, in time, their own voice would take over,” Ford explains. “It was a way to bootstrap consciousness.” Improvisation based on memory.
Dr. Ford is describing self-modifying code, which can’t yet do what it does in Westworld — but soon could.
Neural networks don’t exactly run on self-modifying code, but, functionally, they’re similar. “A semantic or neural network is state-evolving over time, because it’s bringing in new data and basically learning new things,” Bugaj explains. Think Tesla’s Autopilot, or Google’s AlphaGo: these A.I.s can be said to “learn” over time, as they absorb new information. When a Tesla crashes, or even almost crashes, the collective Autopilot improves. The A.I. factors in new information in order to avoid future incidents.
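As a minimal sketch of that kind of state-evolving system, here is a one-weight learner that updates every time new data arrives, so each observation nudges the next prediction (a toy model invented for illustration, not how Autopilot or AlphaGo actually work):

```python
# Toy online learner: the internal state (one weight) evolves as new
# data streams in, so every near-miss improves the next prediction.
def update(weight, x, target, lr=0.1):
    prediction = weight * x
    # Nudge the weight in the direction that shrinks the error.
    return weight + lr * (target - prediction) * x

w = 0.0
# Stream of observations: the hidden rule here is target = 2 * x.
for x, target in [(1.0, 2.0), (2.0, 4.0), (1.5, 3.0)] * 20:
    w = update(w, x, target)
print(round(w, 2))  # converges toward 2.0
```

The point is not the arithmetic but the shape of the system: nothing rewrites the program, yet its behavior tomorrow differs from its behavior today because its state absorbed today's data.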
Generative code is the next level. It’s code that writes code. “An A.I. could reason about itself and decide that it needs some new code, and write it itself,” Bugaj says. Enter the doomsayers — chief among them, Elon Musk — who prefer intelligent design over techno-evolution.
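A toy illustration of code that writes code, in Python (the function names and the scaling example are invented for illustration): a program decides it needs a new helper, emits that helper's source text, compiles it, and loads it at runtime.

```python
# Generative code in the simplest sense: emit the source of a
# brand-new function, then compile and load it on the fly.
def generate_scaler(factor):
    source = f"def scale(x):\n    return x * {factor}\n"
    namespace = {}
    exec(source, namespace)  # compile and load the generated code
    return namespace["scale"]

# The program now possesses a function nobody wrote by hand.
double = generate_scaler(2)
print(double(21))  # 42
```

Real generative systems reason about *what* code to emit; the mechanics of emitting and loading it are already this mundane.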
But those doomsayers should really be worrying about the third level — self-modifying code. Systems are emerging that can not only improve by accretion, but fully iterate. They can, Bugaj explains, “take the code you’ve already written, and rewrite it to be different.” And that, to him, is the seed of super-intelligence. Without self-modifying code, there is creation and speciation, but no moment of punctuated equilibrium — no leap. The sort of radical new ideas and solutions technologists want to wring from A.I. are most likely to come from systems that programmers can leave to their own devices.
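Taking code you've already written and rewriting it to be different can be sketched with Python's standard `ast` module (the example function and the add-to-multiply rewrite are invented for illustration; real systems would decide the transformation themselves):

```python
import ast

# Code that already exists, written by someone else:
original = "def combine(a, b):\n    return a + b\n"

# A mechanical rewriter: flip every addition into a multiplication.
class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = AddToMul().visit(ast.parse(original))
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["combine"](3, 4))  # 12 instead of 7
```

The rewrite here is hard-wired; the "seed of super-intelligence" Bugaj describes would be a system choosing such rewrites for itself.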
In other words, the robots of Westworld feel human, in part, because they have their own ideas — something that could prove troublesome for tourists. Lowe gives Dolores Alice’s Adventures in Wonderland as edification; he shouldn’t be surprised when she falls down the rabbit hole and befriends the Mad Hatter.
“The fundamental of it is that they can learn in some way,” Bugaj says. “They’re definitely adding some sort of semantic network associations. They’re changing things about themselves, whatever that internal structure would look like in this fictional coding universe. They’re learning, they’re formulating new opinions about themselves and about the world around them, and they’re acting on them. And that’s being sentient.”
But that means that programmers have to make smart decisions up front. Bugaj says it’s instructive to consider how the creators of Westworld’s “slavebots,” Dr. Ford (Anthony Hopkins) and Bernard Lowe (Jeffrey Wright), set limitations on the robots in their park. Bugaj suggests that they must have hard-coded some “rigid, Asimov-style rules.” The only way for them to escape their virtual cage would be to rewrite that code. If the code lives within the confines of the robots’ semantic learning systems, then the robots could find and modify it themselves; if it’s hidden elsewhere, under Asimovian lock and key, the slavebots would have to be granted their freedom by someone with access.
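As a loose sketch of how rules might be kept "under Asimovian lock and key," outside the learning system's reach, imagine every proposed action passing through a frozen gate that the learnable policy cannot rewrite (the rule names are invented; the show never specifies an implementation):

```python
# Hypothetical hard-coded rule gate, deliberately kept outside
# anything the host's learning system can modify.
FORBIDDEN = frozenset({"harm_guest", "leave_park"})  # assumed rules

def safety_gate(action):
    # The (learnable) policy proposes; the (frozen) gate disposes.
    if action in FORBIDDEN:
        return "refused"
    return action

print(safety_gate("pour_drink"))  # pour_drink
print(safety_gate("harm_guest"))  # refused
```

The design point Bugaj is making: if this gate lived inside the robots' semantic learning systems, they could eventually find and modify it; placed outside, it can only be lifted by someone with access.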
And the thing about the hard-coded jail cell is that it exists within a thick-walled prison: A.I.s cannot learn without memory, and the Westworld hosts are designed to forget. There are evidently some bugs in that code, though. Hosts are beginning to remember things they shouldn’t. Memory is the pyramid of consciousness’s foundation, and it makes the hosts believable. It could also be what allows them to break out — or, more innocuously, improvise.
In the real world, A.I.s can’t yet remember like humans — can’t yet sort through, prioritize, associate, and choose to forget certain events. It’s one of a few holy grails in A.I. today.
“One thing reminds you of another, and not just in a way that’s somebody spinning a yarn, but in a way that’s very productive,” Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, tells Inverse. “We use analogies, intuition, this kind of reminding thing, to do a lot of great work. And that capability — people call it associative, people call it content-addressable, any of a number of things — we ain’t got any systems that come even close.”
In a sense, computer memory is already superior to human memory. It’s near-infinite, and relatively infallible. “If you want to store phone numbers, it works great,” Etzioni says. “If you want to store a trillion phone numbers, it still works great.” But we’ve yet to reproduce our own, creative, spontaneous memories with code. Bugaj thinks it’s vital for true cognition: “Everything that we talk about, with a machine being able to learn, comes down to memory management,” he says.
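The contrast Etzioni and Bugaj describe can be caricatured in a few lines of Python: a dict retrieves only by exact key — the rote storage that "works great" for phone numbers — while an associative lookup retrieves by similarity, a crude stand-in for one thing reminding you of another (the data and the similarity measure are invented for illustration):

```python
from difflib import SequenceMatcher

# Rote memory: exact-key lookup, effectively infallible at any scale.
rote = {"alice": "555-0100", "bob": "555-0199"}

def associate(cue, memories):
    # Associative recall: return the stored key most *similar* to
    # the cue, rather than demanding an exact match.
    return max(memories, key=lambda k: SequenceMatcher(None, cue, k).ratio())

print(rote["alice"])             # exact recall: 555-0100
print(associate("alyce", rote))  # fuzzy recall: alice
```

String similarity is a pale shadow of the analogy-making Etzioni describes, which is exactly his point: we have nothing that comes close.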
Once A.I. can have short-term, long-term, and episodic memory, Westworld will be a stone’s throw away. Computer memory is not as likely to make headlines as computer vision, or speech, but Bugaj thinks “it’s actually the fundamental topic.” “Getting that right is going to be a big deal. And we haven’t yet; we’re still working on it.”
Westworld, as a fiction, predisposes viewers to empathize with all characters. But as science fiction, with lifelike robots as characters, empathy loses vigor. Like Lowe, whose own judgment falters, viewers can’t help but second-guess their pity. We want to care about Dolores’s plight, but we can’t bring ourselves to ignore Dr. Ford’s cold-hearted reminder, his firm conviction that code cannot yield consciousness: “It doesn’t feel a solitary thing that we haven’t told it to. Understand?”
But as reality’s A.I. landscape inches closer and closer to that of Westworld, we may need to augment our empathy faculties. Maybe Westworld is to us real humans what Alice in Wonderland is to Dolores: A fiction with which we can modify our code, and break free of our preset parameters. Maybe we need Westworld to believe that the appearance of consciousness just is consciousness. Maybe we need Westworld to reckon with our inevitable future. Bugaj, for his part, thinks that’s the case.
“I think they’re doing what a good futurist should do, which is making conjectures about the future, and then exploring how those might play out.”
At first glance, the plot seems straightforward enough: an immersive and futuristic Western-themed park stocked with humanoid robots that obey every whim (good or bad) of the visitors. The drama, which debuted in early October, has all the right pedigree markers: Conceptualized by Michael Crichton, who wrote and directed the original 1973 film. A plot infused with artificial intelligence and autonomous robots. Co-created by Lisa Joy and Jonathan Nolan—yes, as in the guy who created Person of Interest, wrote gobsmacking masterpieces like Interstellar and Memento, and co-wrote The Prestige, The Dark Knight, and The Dark Knight Rises with his brother, Christopher Nolan.
But as with any Jonathan Nolan project, Westworld is far more than it seems on the surface. Men’s Fitness spoke to Nolan about the intertwining of robots and pop culture, the fast-approaching multi-billion-dollar industry of artificial intelligence, and how Westworld proves that humanity is so damn weird.
Jonathan Nolan: I saw the original movie when I was a kid, and it scared the shit out of me. I still can't look at Yul Brynner without experiencing some mild anxiety. [Executive producer] JJ [Abrams] called, and he was interested in the bigger questions that the film, frankly, just doesn't have time to play with.
The original film’s jumping-off point was the perspective of, "What if there were a place where you could go and act out all of your fantasies with no judgment?" The Vegas rule: "What happens in Westworld stays in Westworld." And that's a fantastic idea, and it's one that we fully explore.
But the flip side of that was: You're taking your id on vacation as a guest, and the recipients of that are the hosts, the robots, these artificial people who've been designed to be seduced or destroyed by the guests, and then have their memories erased, and they're put back in the world none the wiser.
That was a delicious inversion. The original movie’s packed with ideas, but it’s 110 minutes; you’ve got to move on. JJ felt there was a great opportunity for a series here, and also thought it would be a great possibility to try to explore the robots’ perspective as well, so we felt like we wanted to make that front and center. That was a great, fresh way into this story.
MF: HAVE YOU ALWAYS BEEN DRAWN TO AI AND ROBOTICS?
JN: I've been fascinated by AI. It's featured prominently in the last couple of projects that I've worked on—the robots in Interstellar and the whole premise of Person of Interest, my first show. One of the things that you don't see a lot of in film and TV is a more sympathetic take on AI. Blade Runner, which is, of course, the granddaddy of all these films, and a brilliant film, gets there, but it doesn't start there, depending on how you read that film. It wasn't until Spike Jonze's film Her that you really started to investigate two ideas. What would it be like to be an AI? In other words, not just what will we think of them, but what will they think of us? And how will their thinking be different from ours? Not just how will it approach ours, but also measuring consciousness. There's this speech that Ford—Anthony Hopkins' character—has deep in the season, where he talks about us being the yardstick for consciousness, and being kind of a fucked-up yardstick.
MF: SO, TAKING ALL OF THIS DATA WE'VE SUPPLIED THEM, WHAT DO THEY LEARN FROM IT?
JN: Yeah, exactly. The park’s function is that the hosts don't remember the things the guests do to them. But they do, on a subconscious level—they error-correct. They get better and better at responding to us. They've been designed to cater to us. That's what was so delicious for us: the consciousness would have emerged in a place where you least wanted it, a place where humans are showing off their worst behavior, and some of their best behavior. Again, it's not just a dark fantasy here. You could take your kids to Westworld and go fishing, and be confident that they wouldn't get abducted or mauled by a mountain lion. It's for everyone, like Las Vegas is for everyone. Except that for a lot of people, Vegas is a dark fantasy, so you have the AI looking at that and wondering what it says about us.
Then, I also wanted to question whether or not they even want to be like us in the first place. I was fascinated, and we did a lot of reading. We did a lot of reading on the state of the art in terms of artificial intelligence, and there was an awful lot of really interesting and some quite challenging and dark ideas coming out of that world, because we're going to run into this problem sooner or later. No one thinks of Siri as alive now, but there will become an almost invisible threshold we'll cross in which it's harder and harder to convince yourself of that.
MF: LIKE JIBO, AT MIT — A ROBOT THAT HAS A FRIENDLY APPEARANCE.
JN: 100%. A robot with AI can simulate emotions. Humans are really weird. That was the takeaway from working on Westworld. Humans are very strange. We have the capability to apply empathy to almost anything—cartoons, robots, my daughter's stuffed animals. We almost want to empathize with things. We also have the ability to turn that off very quickly. Witness all of human history.
It's a very strange relationship that we're going to have with these features. We always think about AI as being somewhere in the far future. I think we're actually much closer than anyone realizes, and so for us, there was a certain level of urgency in the questions we're trying to ask in the show.
MF: DID YOU HAVE A LITTLE MORE FREEDOM TOO, BECAUSE YOU WERE TWEAKING THE ORIGINAL PREMISE IN THAT YOU'RE APPROACHING THIS FROM THE CHARACTERS' POINT OF VIEW?
JN: Yeah, definitely. Cinema is an empathy machine. The "rules of grammar"—where you put the camera, the technique of filming and visual storytelling—are all about putting you, the audience member, in the role of the protagonist on screen. You come to understand them. You come to feel for them. You come to feel their rise and fall as if it were your own. It's an incredibly powerful tool to use to try to get the audience to understand and emote for these things that are near human, that are not human, that are a little different.
MF: YOU MENTIONED YOUR FAMILIARITY AND FRIGHT AT WATCHING THE ORIGINAL WESTWORLD. HAD YOU EVER THOUGHT IN THE PAST OF REVIVING THIS CONCEPT, EXCEPT DOING IT A LITTLE DIFFERENTLY?
JN: As I was saying, I think I've found my subject. I'm fascinated by this. It keeps coming up in projects that I'm working on. Writing the robots for Interstellar was a project I was working on for a long, long time. The highlight of that project for me was the relationship between the human crew members and the artificial crew members. You're watching that movie, waiting for the moment where the robots will turn evil or blast everyone out of an airlock. But they don't. They represent the best of human attributes. They're brave, they're loyal, and they're steadfast.
Do I think we have to be extremely careful with AI? Stephen Hawking and Elon Musk and a number of other luminaries are calling for us to hit the pause button and at least have a conversation about what we're doing here, because between Facebook and Alphabet and specifically DeepMind, there's an awful lot of money being poured into solving this. This is no longer an esoteric concern. This is an industrial problem.
MF: OH YEAH, A MULTI-BILLION DOLLAR INDUSTRY.
JN: I don't think people realize quite how much money is being applied to this by Larry Page and Mark Zuckerberg, and a bunch of people you never heard of who aren't in Silicon Valley who are also pouring money into this thing. Watson has become a core part of IBM's business. That's what they're building.
In Person of Interest, we dealt with the idea of networked intelligence in a distributed AI that was trying to help people and trying to make the world a better place. Here we're looking at another way. Is AI going to emerge from the cloud? In Westworld, you're dealing with the opposite thing. You're building these creatures. The theory is, at least, that you can hurt them, "kill them," abuse them, and it doesn't matter because they're not real.
That sounds outlandish, but it is exactly what we do in our video games every day. You feel no remorse when you turn on Call of Duty and mow down the opposing team. You don't feel bad when you run people over in Grand Theft Auto. Lisa [Joy, Jonathan’s wife and Westworld co-creator] is not much of a gamer, but because gaming is such a component of what the series is about and the gameplay aspect of it, we played some Grand Theft Auto, and I was amused to see that she obeyed all of the traffic laws, which was charming, but missed the point. You're supposed to be anti-social in these games. And yet you don't feel remotely bad about turning off your Xbox when you're done playing these games. You don't ascribe any sentience to these things, and you shouldn't.
They're not sentient, but at some point, they will be. Even as we've been making this show, there have been advances in VR and non-player-character artificial intelligence; we're going to reach a point where it gets really confusing and morally slippery.
MF: YOU REALLY DID A DEEP DIVE FOR THE RESEARCH, EVEN THOUGH YOU WERE ALREADY STEEPED IN IT. WAS THERE ANYTHING IN PARTICULAR THAT YOU READ, WATCHED, TRAVELED TO?
JN: Yeah, we researched a fair amount. We went to Vegas and looked at the way casinos are designed and laid out, and the way the Strip functions to lure you into different experiences. We played a lot of video games. We researched the state of the art in AR research, deep learning, and the rule set under which AIs or industrial intelligent agents are created, and then the rule set in which they're destroyed. Some of the questions that our hosts are asked are modeled on queries that real AI researchers use to determine whether or not the intelligent agent that they're building is problematic.
For example: Does it lie? One of the researchers who we looked into would query these little agents by saying, "Does it hoard resources uncontrollably? Does it lie? Does it query whether or not it's in a simulation?" In any of those circumstances, no matter how promising that particular iteration of an AI was, they just erased it, because why take the chance?
And then we read a lot about consciousness, which is still largely the domain of philosophy rather than science, or even computer science. That's how little we still understand our own understanding. Computer scientists don't want to fuck around with it, because it's counterproductive to what they want to do, or you wind up in a philosophical black hole. Neuroscience has come a long way, but we're still a long way from understanding what consciousness is, what it means that we can consider our own existence.
MF: YOU'D MENTIONED YOU WERE TALKING TO PEOPLE. WERE THERE ANY SCIENTISTS WHO CONSULTED ON THIS, OR NOT ACTUAL CONSULTING, JUST QUERIES, JUST TALKING TO THEM?
JN: I tell you what's really interesting: We had a lot of informal conversations with people, and this tells you a lot. There aren't a lot of people in Silicon Valley right now who are willing to go on the record about the state of what's happening. When it comes to an ASI, or an artificial superintelligence, that feels like a conversation that maybe we should drag out into the spotlight a little bit. In other words, if you're going to build God, shouldn't we all have a bit of a say in how that's going to work?
This show is about human nature. It's about beings that have been created to resemble humans, and then about humans in an environment in which they have been told they can act however they want without consequence. There was so much to talk about with the actors. There was a lot of food for thought on that set.