
Friday, 4 November 2016

Artificial Intelligence Goes Mainstream



From AI ethics to the issues of trust and bias, Melanie Mitchell talks to CBR about the future of AI
Statue of Alan Turing at the Bletchley Park Museum, poring over an Enigma machine
This year, the Association for Computing Machinery (ACM) celebrates 50 years of the ACM Turing Award, the most prestigious technical award in the computing industry. The Turing Award, generally regarded as the ‘Nobel Prize of computing’, is an annual prize awarded to “an individual selected for contributions of a technical nature made to the computing community”. In celebration of the 50-year milestone, renowned computer scientist Melanie Mitchell spoke to CBR’s Ellie Burns about artificial intelligence (AI) – the biggest breakthroughs, hurdles and myths surrounding the technology.
EB: What are the most important examples of Artificial Intelligence in mainstream society today?
MM: There are many important examples of AI in the mainstream; some very visible, others blended in so well with other methods that the AI part is nearly invisible.  Web search is an “invisible” example that has had perhaps the broadest impact. Today’s web search algorithms, which power Google and other modern search engines, are imbued with AI methods such as text processing with neural networks, and searching large-scale knowledge representation graphs. But web search happens so quickly and seamlessly that most people are unaware of how much “AI” has gone into it.
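The classical starting point for the kind of text processing Mitchell mentions is TF-IDF ranking: score documents by how often they contain the query's terms, weighted by how rare those terms are overall. This is a minimal sketch of that idea, not what any particular search engine runs; modern engines layer link analysis and neural models on top.

```python
import math
from collections import Counter

def tf_idf_rank(query, docs):
    """Rank documents against a query with TF-IDF weighting (toy version)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Inverse document frequency: rare terms carry more weight.
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {term: math.log(n / df[term]) for term in df}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        scores.append(sum(tf[t] * idf.get(t, 0.0) for t in query.lower().split()))
    # Highest-scoring documents first.
    return sorted(range(n), key=lambda i: -scores[i])

docs = ["the cat sat on the mat",
        "dogs and cats living together",
        "stock markets fell sharply today"]
print(tf_idf_rank("cat mat", docs))  # the first document ranks highest
```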

Another example with large impact is speech recognition. With the recent ascent of deep neural networks, speech recognition has improved enough so that it can be easily used for transcribing speech, texting, video captioning, and many other applications.  It’s not perfect, but in many cases it works really well.
There are many other natural language AI applications that ordinary people use every day: email spam detection, language translation, automated news article generation, and automated grammar and writing critiques, among others.
Computer vision is also making an impact in day-to-day life, especially in the areas of face recognition (e.g., on Facebook or Google Photos), handwriting recognition, and image search (i.e., searching a database for a given image, or for images similar to an input image).

We’re all familiar with so-called “recommendation systems,” which advise us on which books, movies, or news stories we might like, based on what kinds of things we’ve already looked at, and on what other people “like us” have enjoyed.
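The "people like us" idea behind recommendation systems can be sketched with user-based collaborative filtering: find the most similar other user by comparing rating vectors, then suggest what they liked. The data and names below are invented for illustration; real systems use matrix factorisation or neural models at vastly larger scale.

```python
import math

# Toy user-item ratings; each user maps items to scores.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 3, "book_d": 5},
    "carol": {"book_a": 1, "book_c": 2, "book_d": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Suggest items rated by the most similar other user but unseen by `user`."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    seen = set(ratings[user])
    return [item for item in ratings[nearest] if item not in seen]

print(recommend("alice"))  # bob is most similar, so his unseen pick is suggested
```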
Another sophisticated, but often invisible, application of AI is to navigation and route planning—for example, when Google Maps tells us very quickly the best route to take to a given destination. This is not at all a trivial problem, but, like web search, is available so easily and seamlessly that many people are unaware of the AI that has gone into it.
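The classical core of route planning is Dijkstra's shortest-path algorithm, sketched below on an invented toy road network. Production map services add live traffic, A*-style heuristics, and heavy precomputation, but the underlying question is the same.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: cheapest path in a weighted graph.

    `graph` maps a node to {neighbour: travel_time}.
    Returns (total_cost, path).
    """
    queue = [(0, start, [start])]   # priority queue ordered by cost so far
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical travel times in minutes.
roads = {
    "home":     {"junction": 4, "bypass": 9},
    "junction": {"bypass": 2, "office": 7},
    "bypass":   {"office": 3},
}
print(shortest_route(roads, "home", "office"))  # via junction and bypass, 9 min
```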
There are many more examples of AI impacting our daily lives, in medicine, finance, robotics, and other fields. I’ll mention one more possibly “invisible” area: targeted advertising. Companies are using massive amounts of data and advanced machine learning methods to figure out what ads to show you, and when, and where, and how. This one application of AI has become a huge economic force, and indeed has employed a lot of very smart AI Ph.D.s. As one well-known young data scientist lamented, “The best minds of my generation are thinking about how to make people click ads.”
EB: What have been the biggest breakthroughs in Artificial Intelligence in recent years and what impact is it having in the real-world?
MM: The methods known as “Deep Learning” or “Deep Networks” have been central to many of the applications I mentioned above. The breakthrough was not in inventing these methods—they’ve been around for decades. The breakthroughs rather were in getting them to work well, by using huge datasets for learning. This was possible mainly due to faster computers and new parallel computing techniques. But it’s been surprising (at least to me) how far AI can get with this “big data” approach.
The impact in the real world is both in the applications (such as speech recognition, face recognition, language translation, etc.) and also in the ascent of “data science” as a vital area in industry. Businesses have been doing what is called “data analytics” for a very long time, but now are taking this to a wholly new scale, and creating many new kinds of jobs for people who have skills in statistics and machine learning.
Another recent breakthrough is in the area of “reinforcement learning,” in which machines learn to perform a task by attempting to perform it and receiving positive or negative rewards. This is a kind of “active learning”—over time the machine performs various actions, occasionally gets “rewards” or “punishments”, and gradually figures out which chains of actions are likely to lead to rewards.
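The reward-and-punishment loop described here can be sketched with tabular Q-learning, the textbook form of reinforcement learning. This is a minimal toy (a five-state corridor with invented parameters), nothing like the AlphaGo machinery, but it shows an agent discovering which chain of actions earns the reward.

```python
import random

# The agent is rewarded only on reaching the rightmost state, and must
# discover that chains of "right" actions lead there.
N_STATES = 5
ACTIONS = ["left", "right"]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move one cell; reward 1.0 only for arriving at the goal state."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)   # occasional exploration
        else:                            # greedy, with random tie-breaking
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, "right" should be preferred in every non-goal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```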
Like deep networks, reinforcement learning has been studied in the AI community since the 1960s, but recently it has been shown to work on some really impressive tasks, most notably Google’s AlphaGo system, which learned to play the game of Go—from scratch—and got to the point where it could beat some of the best human Go players. A number of clever new methods made reinforcement learning this effective; one of them was to use deep networks to learn to evaluate possible actions.


DeepMind: the robot learns to play, 26 February 2015


Google AI wins first match against Korean Go game champion March 9, 2016

Google’s DeepMind A.I. takes on something even more complicated than chess or Go: StarCraft II JEFF GRUBB

Reinforcement learning methods are quite general—algorithms similar to those developed in the AlphaGo system have recently been used to significantly reduce energy use in Google’s data centers. I think we will be seeing some really interesting additional applications of reinforcement learning in the next few years.

Computers are keeping secrets. A team from Google Brain, Google’s deep learning project, has shown that machines can learn how to protect their messages from prying eyes.
Researchers Martín Abadi and David Andersen demonstrate that neural networks, or “neural nets” – computing systems that are loosely based on artificial neurons – can work out how to use a simple encryption technique.
In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography”.

Google’s artificial intelligence has created an encryption algorithm impossible to crack BRUNO RUFFILLI 01/11/2016


EB: What are some of the major hurdles that Artificial Intelligence still needs to overcome in the next ten years?
MM: The biggest hurdles for AI are to deal with (1) abstract concepts; (2) common sense; and (3) learning without being explicitly “taught”. I personally don’t think 10 years will be enough to get anywhere near “human-level” in these areas.
These hurdles are closely related to an AI problem called “Transfer Learning”:  if a machine learns something in one domain, how can it transfer what it has learned to a related domain? To my mind, this is essentially the question of how to get computers to perform abstraction and analogy-making.
Now, onto common sense:  IBM’s Watson program, which famously beat expert humans on the game show Jeopardy, “knew” that Michael Phelps had won a particular swimming race by 1/100 of a second, but does it know whether or not he got wet in doing so?  Does it know if he got out of the pool after the race?  Does it know if he took off his socks before getting into the pool? There is so much “hidden knowledge” in human understanding that is lacking in computer “understanding.” Some Artificial Intelligence researchers have tried to solve this by creating enormous databases of “common sense knowledge,” but as yet these haven’t succeeded in producing machines with the kinds of background knowledge of the world that humans possess. Imbuing what we call “common sense” into computers is still a wide-open problem.
There’s a lot of discussion around the topic of how “autonomous” machines should be allowed to be in making decisions. This is currently a huge issue for self-driving cars, and will remain a central issue as AI gets ever more sophisticated and widely used. I expect “AI ethics” to become a major new sub-discipline of philosophy.
EB: Much has been made of the potential for Artificial Intelligence in pop culture. What are some of the biggest myths you’ve seen? Can you think of examples where science fiction is getting close to reality?
MM: One of the big myths is that “computers have passed the Turing Test.” In fact, in all the publicized “Turing Tests,” in which judges have tried to guess which conversation is with a human and which with a computer, the conversation topics have been so restricted that the test is nothing like what Turing originally envisioned.

IBM’s Big Bet on Artificial Intelligence Oct. 30, 2016


Artificial intelligence is creeping into our daily lives in unexpected ways. It is not just transforming online services with innovations such as Apple’s Siri voice recognition app, which will send emails when you instruct it to, or Microsoft’s Skype translation services, which enable you to communicate online with people whose languages you do not speak.
Wider applications of artificial intelligence, such as image and pattern recognition (classifying data or objects based on common features), natural language processing (how computers understand and respond to human speech) and machine learning (when software learns something without being programmed to do so) will soon be featuring in many products and services.
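"Software learning something without being programmed to do so" is easiest to see in a minimal example: a nearest-neighbour classifier induces the decision rule from labelled examples, with nothing about the boundary hand-coded. The data below are invented for illustration.

```python
def nearest_neighbour(train, point):
    """Predict the label of `point` from labelled (features, label) pairs."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min((dist(x, point), y) for x, y in train)
    return label

# (length_cm, weight_g) -> species; the rule is learned from examples.
train = [((2.0, 10.0), "mouse"), ((2.5, 12.0), "mouse"),
         ((25.0, 300.0), "rat"), ((22.0, 280.0), "rat")]
print(nearest_neighbour(train, (3.0, 15.0)))    # mouse
print(nearest_neighbour(train, (24.0, 290.0)))  # rat
```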
Pest control
In recent years, pest control company Rentokil Initial has been experimenting with rodent traps equipped with sensors and WiFi. These send data to a command centre, which the company has built with partners Google and PA Consulting.

A member of staff is only sent to a trap once the machine has told the command centre it has caught a rat or a mouse. This is more efficient than routine patrols, which would often find empty traps.
Rentokil Initial now has more than 20,000 such devices in 12 countries. It has collected more than 3m pieces of data with these so far. These could be used to finesse the company’s digital pest control services with a dose of artificial intelligence, says Tim Shooter, an independent technology consultant who worked with Rentokil Initial on the pilot project.
By blending information from the traps with weather and mapping data, it might be possible to better identify rodent breeding or migration patterns and identify infestation-risk hotspots before they develop, he says.
“That would mean a significant shift away from reactive pest control services … in favour of proactive services that tackle problems before a customer’s even aware of them,” says Mr Shooter.
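The proactive approach Mr Shooter describes amounts to a risk-scoring model over blended features. This is a purely hypothetical sketch (the feature names and weights are invented, and real weights would be fitted from historical trap, weather and mapping data), using a logistic function to turn a weighted sum into a risk score between 0 and 1.

```python
import math

# Invented weights for illustration; in practice these would be learned.
WEIGHTS = {"recent_catches": 0.8, "mild_nights": 0.3, "nearby_water": 0.5}
BIAS = -2.0

def infestation_risk(site):
    """Score a site's infestation risk in [0, 1] from blended features."""
    z = BIAS + sum(WEIGHTS[k] * site[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing

quiet_site = {"recent_catches": 0, "mild_nights": 1, "nearby_water": 0}
hot_site = {"recent_catches": 4, "mild_nights": 1, "nearby_water": 1}
print(f"{infestation_risk(quiet_site):.2f}")  # low risk
print(f"{infestation_risk(hot_site):.2f}")    # high risk
```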
Beer


IntelligentX Brewing claims to have created the world’s first beer to be brewed using artificial intelligence.

The company is a joint venture between creative agency 10x and machine learning specialist Intelligent Layer. The recipes for the company’s four products — Golden AI, Amber AI, Pale AI and Black AI — change over time, based on customer feedback interpreted using a machine learning algorithm.
Codes printed on the bottles direct customers to a Facebook Messenger bot, which asks questions such as: “How would you rate the hoppiness out of 10?” The responses are then interpreted by the algorithm and the findings are passed to the brewers, who tweak their recipes accordingly. The questions change based on the responses the algorithm finds most useful. IntelligentX’s beers can currently be purchased from UBrew in Bermondsey, London, but will soon be available to order online.
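The feedback loop can be pictured with a deliberately simple sketch: compare the average customer rating for one attribute against a target and nudge the corresponding ingredient. This is an invented illustration of the closed loop, not IntelligentX's actual algorithm, which interprets the responses with machine learning.

```python
TARGET = 7.0   # desired average "hoppiness out of 10" rating (invented)
STEP = 5.0     # grams of hops to adjust per batch (invented)

def adjust_hops(hops_grams, ratings):
    """Raise hops if drinkers rate hoppiness below target, lower if above."""
    avg = sum(ratings) / len(ratings)
    if avg < TARGET:
        return hops_grams + STEP
    if avg > TARGET:
        return hops_grams - STEP
    return hops_grams

print(adjust_hops(100.0, [5, 6, 7]))  # under target -> more hops
print(adjust_hops(100.0, [9, 8, 8]))  # over target -> fewer hops
```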
Home security


While security company Cocoon’s devices use the motion sensors and cameras one might expect, they also detect sounds and vibrations — including low-frequency signals inaudible to humans — and use machine learning to understand the noises that are usual and those that may signify a break-in.

Every home has a unique sound fingerprint, says Cocoon co-founder and head of software John Berthels: this may include lorries rumbling by, the central heating switching itself on and off or a pet moving around. The devices gradually build up a picture of what is “normal” for each house.
If noises deviate from the established patterns — a back door being forced open or a window breaking — the device will send an alert to the user’s smartphone, prompting them to check their home on a live video link, set off a high-pitched alarm or call the police. About 850 people took part in Cocoon’s Indiegogo crowdfunding exercise, raising almost $250,000, and the devices are now on sale online for £299 ($399).
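The "sound fingerprint" idea reduces to anomaly detection: learn the normal range of acoustic readings for a home, then flag readings that fall far outside it. The sketch below uses a single loudness feature and a z-score test with invented numbers; Cocoon's actual models learn far richer acoustic features.

```python
import statistics

def build_profile(samples):
    """Summarise 'normal' readings as (mean, standard deviation)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(reading, profile, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    mean, std = profile
    return abs(reading - mean) > threshold * std

# Hypothetical loudness samples from a week of normal household noise.
normal = [40, 42, 38, 41, 39, 43, 40, 41]
profile = build_profile(normal)
print(is_anomalous(41, profile))   # everyday noise -> False
print(is_anomalous(85, profile))   # e.g. glass breaking -> True
```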
Toy cars


In September 2016, US toymaker Mattel unveiled Hot Wheels AI Intelligent Race System, a twist on its much-loved line of toy racing cars which dates back to 1968.

Unlike slot-car systems like Scalextric, which keep cars on track by means of a pin, Hot Wheels AI has sensors on the underside of each car that interpret a gradient pattern printed on the track.
The AI behind this is no more complex than the technology that guides a robot vacuum cleaner, but it means players can race against self-driving, computer-controlled cars and those controlled by other humans.
The cars are larger than their predecessors and require a games controller. Players can also launch virtual hazards such as oil slicks and tyre blowouts to sabotage their competitors.
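The guidance described above, a sensor reading a printed pattern to stay in lane, is essentially a proportional control loop. This sketch is hypothetical (the sensor scale and gain are invented, and Mattel's implementation is not public), but it shows the vacuum-cleaner-grade idea: steer in proportion to how far the car has drifted off centre.

```python
GAIN = 2.0   # steering strength per unit of off-centre error (invented)

def steering(sensor_reading, centre=0.5):
    """Proportional steering from a gradient sensor.

    `sensor_reading` is 0.0 at the lane's left edge, 1.0 at its right.
    Returns negative to steer left, positive to steer right.
    """
    error = centre - sensor_reading
    return GAIN * error

print(steering(0.5))   # on the centre line -> no correction
print(steering(0.8))   # drifted right -> negative (steer left)
```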
The toys are now available on retail sites and Mattel will be hoping for a Christmas hit.

Not just the virtual assistants in our phones, but also programs that are “invisible” to consumers, helping banks fight fraud or companies optimise their production systems: artificial intelligence is already a gold-rush market, ready to take off.

According to recent estimates from the analyst firm IDC, the global revenue generated by these technologies across various sectors will be almost $8 billion in 2016, while in just four years it will exceed $47 billion. “Developers and companies have already begun to integrate artificial intelligence into practically every kind of application or business process,” notes research director David Schubmehl.

The United States and Canada make up the region with the highest spending and the largest revenues: $6.2 billion this year. The EMEA region (Europe, Middle East and Africa) is the second-largest market, but the gap with Asia-Pacific, including Japan, will narrow by 2020. And it is precisely the Asian countries that will grow fastest.
Attention from the technology giants, and not only them, is extremely high. Just think of the “intelligent” systems that already surround us, such as image-recognition apps, the voice assistants from Apple, Google or Microsoft, or the explosion of “chatbots”, virtual agents that answer our questions using forms of artificial intelligence. And then there is a growing range of platforms used, for example, by financial institutions to uncover fraud, or by companies to improve internal production processes.

Not by chance, according to IDC analysts’ forecasts, the sectors investing most in artificial intelligence systems this year will be banking and retail, followed by healthcare and manufacturing. On the research front, artificial intelligence is making giant strides. Microsoft recently reached a “historic” milestone, creating a near-“human” speech recognition system. Google’s DeepMind project achieved another surprising result: for the first time, the system solved problems through a mechanism of deduction, without having received prior information on the subject. It “learned to remember”. Recently, scientists developed an artificial intelligence that resolved judicial cases as well as flesh-and-blood judges in 80% of instances.
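The IDC forecast quoted above (roughly $8 billion in 2016 rising past $47 billion by 2020) implies a compound annual growth rate that can be checked directly:

```python
# Implied compound annual growth rate (CAGR) from the IDC figures above:
# ~$8bn in 2016 growing to ~$47bn over the following 4 years.
start, end, years = 8.0, 47.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 56% per year
```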

China has put its best robots on show, revealing how they work and how rapidly its artificial intelligence innovations are developing. These robots demonstrate a range of skills, including reading how people think and understanding human language.

These robots were displayed during the 2016 World Robot Conference held in Beijing. The Xiao I bot is ranked as number one among all the robots which were shown during the conference. Also among the top ones are Microsoft's Cortana, Amazon's Echo and Apple's Siri.
Daily Mail UK highlighted these AI assistants by name. The Xiao I bot is the robot that can decipher human language as well as how people think. The exhibitors demonstrated the bot’s analytical skill in decoding massive amounts of data, which allows it to respond to instructions very well.
The exhibitors also explained that Xiao I bot's understanding of the human brain has been accumulated for decades through specific data about daily life and information about several industries, The Mirror notes.
Meanwhile, Jia Jia is the robot that can recognize human emotions. Dubbed a humanoid robot and “robot goddess”, Jia Jia looks very human because of its long hair and rosy cheeks. Because of this, the bot attracted a lot of spectators during the conference.
Jia Jia was invented by researchers from China’s University of Science and Technology. When asked by one spectator what skills it has, it said: “I can talk with you. I can recognize faces. I can identify the gender and age of people standing in front of me, and I can detect your facial expressions.” During last year’s conference, a similar humanoid robot, dubbed “Geminoid F”, caught people’s attention.
Uber Fusions in Pittsburgh. Credit Jeff Swensen for The New York Times

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.
The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies. That has already led to an array of academic, governmental and private efforts to explore a technology that until recently was largely the stuff of science fiction.

In the last decade, faster computer chips, cheap sensors and large collections of data have helped researchers improve on computerized tasks like machine vision and speech recognition, as well as robotics.
Earlier this year, the White House held a series of workshops around the country to discuss the impact of A.I., and in October the Obama administration released a report on its possible consequences. And in September, five large technology firms — Amazon, Facebook, Google, IBM and Microsoft — created a partnership to help establish ethical guidelines for the design and deployment of A.I. systems.
Subra Suresh, Carnegie Mellon’s president, said injecting ethical discussions into A.I. was necessary as the technology advanced. While the idea of “Terminator” robots still seems far-fetched, the United States military is studying autonomous weapons that could make killing decisions on their own — a development that war planners think would be unwise.
“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it,” Mr. Suresh noted.
But at the same time, he said some people are a bit too optimistic in their claims about A.I. advances, particularly when it comes to autonomous vehicles.
Mr. Suresh said he personally did not think self-driving cars would be in widespread use in the next three years.
Last year, Carnegie Mellon drew national attention when a group of 36 technical staff members and four faculty members left to join a new self-driving car laboratory that Uber established in Pittsburgh. The company recently started testing self-driving cars around the city.
The Uber laboratory has been a sensitive spot for Carnegie Mellon. The field of artificial intelligence emerged in part at Carnegie Mellon in the 1950s in the work of faculty who developed software that showed how computer algorithms could intelligently solve problems.
University officials said the departing faculty have been replaced and 13 additional professors have been hired since the defections. They also said that between 2011 and 2015, Carnegie Mellon faculty and staff created 164 start-up companies.
University officials pointed to a partnership the school entered into last year with Boeing to use machine-learning techniques to analyze vast amounts of data generated by modern aircraft such as the Boeing Dreamliner.
The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh. It will draw from several academic disciplines and will initially add two faculty and three positions for graduate students. It will also establish a biennial conference on ethical issues facing the field.
K&L Gates is one of the nation’s largest law firms. The Microsoft co-founder Bill Gates’s father, William H. Gates Sr., was involved in the firm until his retirement in 1998. Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.
“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

New Research Center to Explore Ethics of Artificial Intelligence 

The Administration’s Report on the Future of Artificial Intelligence OCTOBER 12, 2016 ED FELTEN AND TERAH LYONS




MACHINE INTELLIGENCE EVOLUTION

Frankenstein 2.0, 3 September 2016



TECHNO-BIOPOLITICS: Biometrics, Biorobotics, Electronic Skin, Artificial Limbs, 14 March 2015

OUT OF CONTROL, 28 February 2008


Genetically Modified Human Embryos, 27 October 2016
