
Friday, 25 November 2016

The Rise of Predictive AI


Researchers have created a machine that they claim can tell if a person is a convicted criminal simply from their facial features.


The artificial intelligence, created at Shanghai Jiao Tong University, was able to correctly identify criminals from a selection of 186 photos nine out of 10 times by assessing their eyes, nose and mouth.
The findings add support to an often-discredited view that criminals have particular facial features, suggesting that the structure of someone's face, including "lip curvature, eye inner corner distance, and the so-called nose-mouth angle", can identify criminality. 
Applying such a system would be highly controversial, and the research raises fears that China could add this kind of information to its surveillance capabilities, which already include a dossier kept on almost everyone, known as the dang'an. The files, collected since the Mao era, contain personal and confidential information such as health records and school reports. 
As part of the research, Xiaolin Wu and Xi Zhang trained the artificial intelligence with around 1,670 pictures of Chinese men, half of whom were convicted criminals. The pictures analysed were taken from identification cards in which the men, aged 18 to 55, were clean-shaven and holding neutral poses.  
Having taught the system, Mr Wu and Mr Zhang then fed it a further 186 images and asked it to sort them into criminals and non-criminals. 
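The protocol described here is a standard supervised train/test split: fit a binary classifier on labelled faces, then score it on held-out images. Purely as an illustration of that protocol, and emphatically not the authors' actual model, here is a minimal Python sketch; the feature names and the random stand-in data are hypothetical, included only so the snippet runs on its own.

```python
# A minimal sketch of the train/test protocol described above, NOT the
# authors' actual model. It assumes each face has already been reduced
# to a few numeric features (e.g. lip curvature, inner-eye-corner
# distance, nose-mouth angle); random data stands in for the real images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1,670 training faces and 186 held-out faces.
X_train = rng.normal(size=(1670, 3))      # 3 facial-geometry features per face
y_train = rng.integers(0, 2, size=1670)   # 1 = convicted, 0 = not
X_test = rng.normal(size=(186, 3))
y_test = rng.integers(0, 2, size=186)

# Fit a binary classifier on the labelled faces, then score the held-out set.
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With random features and labels the accuracy hovers around 50 percent; the researchers' headline claim is that real facial features push it far higher.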
The accuracy of its guesses, which were based on features it associates with criminality, led the researchers to claim that, "despite the historical controversy", people who have committed a crime have certain unique facial features. 
"The faces of general law-abiding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people," said Mr Wu and Mr Xiang. 
More research is required to cover different races, genders and facial expressions before the tool could be widely used. 
The research could add to China's vast security apparatus, which already includes AI-based "predictive policing".
Earlier this year, Beijing hired the China Electronics Technology Group, the country's largest defence contractor, to create an AI that can analyse the behaviour of people in CCTV footage for signs that they're about to commit an act of terror.
Once complete, the system will be used to predict "security events" so that police or the military can be deployed in advance. 
Digital rights experts warned that using AI in this way could be dangerous and that "reaching generalised conclusions from such small data poses huge problems for innocent people". 
Dr Richard Tynan, technologist at Privacy International, said: "This is no different than Craniometry from the 1800s, which has been debunked. In fact, the problem runs much deeper because it can be impossible to know why a machine has made a certain decision about you.
"It demonstrates the arbitrary and absurd correlations that algorithms, AI, and machine learning can find in tiny datasets. This is not the fault of these technologies but rather the danger of applying complex systems in inappropriate contexts."
Minority Report-style AI learns to predict if people are criminals from their facial features Cara McGoogan 24 NOVEMBER 2016


By merging the head shots of 39 perpetrators of mass shootings from the past 34 years, the legal site FederalCharges.com revealed that the average gunman (they are usually men) isn't too distinguishable. 

The site tracked down the photos and then "averaged" the images with Psychomorph, a facial-averaging program, to come up with a terrifying composite image. The average gunman, with white skin, dark hair and dark eyes, could be anyone at your school, office or grocery store.
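Facial "averaging" of this kind is, at its core, a per-pixel mean over photographs brought to a common shape. A rough sketch of only that step follows; Psychomorph additionally warps each face to a shared landmark template before averaging, which is omitted here, and the file paths are hypothetical placeholders.

```python
# A rough sketch of facial averaging: a per-pixel mean over photos that
# have been resized to a common shape. Psychomorph also warps each face
# to a shared landmark template first; that step is omitted here.
# The file names are hypothetical placeholders.
import numpy as np
from PIL import Image

paths = ["shooter_01.jpg", "shooter_02.jpg", "shooter_03.jpg"]

# Load every photo at the same size so the pixel arrays line up.
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize((256, 256)), dtype=np.float64)
    for p in paths
])

mean_face = stack.mean(axis=0).astype(np.uint8)   # average each pixel across faces
Image.fromarray(mean_face).save("average_face.png")
```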
If you take photos of killers from school shootings specifically, a different face emerges: a younger white man who looks more like a high school or college student in his late teens or early 20s. To get this average face, the site took 17 images of gunmen from shootings such as the Columbine High School massacre in 1999.

Sadly, another common mass shooting location is the workplace, often with a disgruntled or recently fired employee holding the gun. Taking the average image of 16 workplace shooters, a similar image of a nondescript man appears. This man also looks white, with a somewhat darker complexion and features than the generic mass shooter.

As these three average faces show, white men are most often the perpetrators of mass violence. Breaking down the racial and ethnic data shows that more than 50 percent of shooters are white. That's more than three times the likelihood of a black shooter. Despite misperceptions of terrorists from the Middle East killing Americans, only 4 percent of mass shooters are Middle Eastern.

The site used photos of U.S. shooters based on Mother Jones data from 1982 to 2016. Wrangling this data was a struggle, because even defining a "mass shooting" is difficult: the U.S. government doesn't have a concrete definition, and there's no streamlined way to collect data about shooting incidents from different parts of the country. 

Computer software reveals the average face of a mass murderer SASHA LEKACH 30 NOVEMBER 2016

A research team in Shanghai has stirred up far-reaching controversy after releasing a paper indicating that computers can tell if a person will become a criminal based merely on his or her facial features.
In the paper, titled "Automated Inference on Criminality using Face Images," Wu Xiaolin and Zhang Xi, two researchers from Shanghai Jiao Tong University, say they ran computer tests using 1,856 images of real people. According to Wu and Zhang, the tests revealed "some discriminating structural features for predicting criminality, such as lip curvature and inner-eye corner distance." 
They believe that the tests have produced evidence for "the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic."
The article soon went viral online, with many researchers criticizing the findings as discriminatory and irresponsible.
"We were unlucky to release our paper around the time when Trump won the [U.S. presidential] election. Some emails from the U.S. criticized us, saying that the U.S. already has enough trouble and we should not add fuel to the fire. Some Chinese netizens, on the other hand, suggested that we help the Commission of Discipline Inspection [to catch corrupt officials]," Wu told Thepaper.cn in an interview.
Responding to criticism that the research amounted to "discrimination based on phrenology," Wu stressed that he has no intention of supporting discrimination based solely on facial features.
"We simply found some correlation between facial features and certain social behaviors. I myself am against discrimination based on facial features … Our research can also serve as evidence to fight discrimination," Wu added.
According to Wu, the team's current goal is to deepen the research, though they have no plans to put it to use in the field of criminology.
"The relationship between ethics and scientific development is hard to explain. Should nuclear physicists be responsible for damage caused by nuclear bombs?" Wu mused.


DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by Facebook users.[1][2] The system is said to be 97% accurate, compared to 85% for the FBI's Next Generation Identification system.[3] One of the creators of the software, Yaniv Taigman, came to Facebook via its 2012 acquisition of Face.com.[4] Facebook started rolling out the technology to its users in early 2015, with the exception of users in the EU due to data privacy laws there.[5]
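For intuition, the core idea behind a system like DeepFace is to map each face photo to a fixed-length embedding vector and compare embeddings, rather than compare pixels. The toy sketch below shows only that structure; the layers and sizes are arbitrary stand-ins, nowhere near the real nine-layer, 120-million-weight network, and since the weights are untrained its answers are meaningless.

```python
# A toy illustration of the DeepFace idea, not Facebook's network:
# map each photo to a fixed-length, unit-norm embedding, then decide
# whether two photos show the same person by comparing embeddings.
# Layer sizes are arbitrary stand-ins, and the weights are untrained,
# so the output here is meaningless.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)   # unit-length embedding

model = FaceEmbedder().eval()
a = torch.randn(1, 3, 152, 152)   # stand-in for one aligned face photo
b = torch.randn(1, 3, 152, 152)   # stand-in for another
with torch.no_grad():
    similarity = F.cosine_similarity(model(a), model(b)).item()
print("same person?", similarity > 0.5)   # the 0.5 threshold is arbitrary
```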

DeepFace From Wikipedia


Google: Our new system for recognizing faces is the best one ever   

FACIAL RECOGNITION CODE OF CONDUCT 17 JUNE 2015


Researchers from Google’s AI division DeepMind and the University of Oxford have used artificial intelligence to create the most accurate lip-reading software ever. Using thousands of hours of TV footage from the BBC, scientists trained a neural network to annotate video footage with 46.8 percent accuracy. That might not seem that impressive at first — especially compared to AI accuracy rates when transcribing audio — but tested on the same footage, a professional human lip-reader was only able to get the right word 12.4 percent of the time.
The research follows similar work published by a separate group at the University of Oxford earlier this month. Using related techniques, those scientists were able to create a lip-reading program called LipNet that achieved 93.4 percent accuracy in tests, compared to 52.3 percent human accuracy. However, LipNet was only tested on specially recorded footage of volunteers speaking formulaic sentences. By comparison, DeepMind's software, known as "Watch, Listen, Attend, and Spell", was tested on far more challenging footage: transcribing natural, unscripted conversations from BBC politics shows.
More than 5,000 hours of footage from TV shows including Newsnight, Question Time, and the World Today was used to train DeepMind's "Watch, Listen, Attend, and Spell" program. The videos included 118,000 different sentences and some 17,500 unique words, compared to LipNet's test database of just 51 unique words.
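Architecturally, "Watch, Listen, Attend, and Spell" pairs a video (and audio) encoder with an attention-based character decoder. The fragment below is a bare-bones, video-only skeleton of that encode/attend/spell loop, assuming mouth-region frames have already been turned into feature vectors by a CNN; every dimension and the vocabulary are invented stand-ins, and the untrained model emits gibberish.

```python
# A bare-bones, video-only skeleton of the encode/attend/spell loop
# (the audio "listen" branch is omitted). It assumes mouth-region frames
# were already turned into feature vectors by a CNN. All dimensions and
# the vocabulary are invented, and the untrained model emits gibberish.
import torch
import torch.nn as nn

VOCAB = 30                  # roughly 26 letters plus a few extra tokens
FRAME_FEAT, HIDDEN = 512, 256

watch = nn.LSTM(FRAME_FEAT, HIDDEN, batch_first=True)        # video encoder
attend = nn.MultiheadAttention(HIDDEN, num_heads=4, batch_first=True)
spell = nn.LSTMCell(HIDDEN + HIDDEN, HIDDEN)                 # character decoder
embed = nn.Embedding(VOCAB, HIDDEN)
readout = nn.Linear(HIDDEN, VOCAB)

frames = torch.randn(1, 75, FRAME_FEAT)   # 75 stand-in frame feature vectors
encoded, _ = watch(frames)

h = torch.zeros(1, HIDDEN)
c = torch.zeros(1, HIDDEN)
token = torch.zeros(1, dtype=torch.long)   # assumed start-of-sequence id
decoded = []
for _ in range(20):                        # greedily emit up to 20 characters
    context, _ = attend(h.unsqueeze(1), encoded, encoded)   # attend over frames
    h, c = spell(torch.cat([embed(token), context.squeeze(1)], dim=1), (h, c))
    token = readout(h).argmax(dim=1)       # pick the most likely next character
    decoded.append(token.item())
print("decoded token ids:", decoded)
```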
DeepMind’s researchers suggest that the program could have a host of applications, including helping hearing-impaired people understand conversations. It could also be used to annotate silent films, or allow you to control digital assistants like Siri or Alexa by just mouthing words to a camera (handy if you’re using the program in public).
But when most people learn that an AI program has learned how to lip-read, their first thought is how it might be used for surveillance. Researchers say that there's still a big difference between transcribing brightly lit, high-resolution TV footage and grainy, low-frame-rate CCTV video, but you can't ignore the fact that artificial intelligence seems to be closing this gap.

DeepMind AI is good at lipreading like the fictional HAL 9000 from 2001 November 23, 2016



The tension between using technology to improve healthcare and the privacy risks this poses has reared its head once again, after Google revealed its AI engine DeepMind is using NHS data to help diagnose patients with major health risks.
While on the face of it this is a medical leap forward, privacy campaigners continue to express concern over the amount of medical data it potentially gives Google access to.
Doctors at the Royal Free Hospital in London have been using DeepMind, the artificial intelligence (AI) arm of Google's Alphabet, claiming it can free up over half a million hours per year, currently spent on paperwork, which could be redirected into actual patient care.
Instead of constantly bringing in patients with kidney conditions, DeepMind will send "breaking news" alerts to doctors when a patient in their charge looks likely to be heading towards a kidney episode or another life-threatening condition, up to and including complete organ failure.
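For a concrete sense of what such an alert might check: Streams is reported to implement the NHS's national acute kidney injury (AKI) algorithm, which flags a patient when their current serum creatinine rises too far above a baseline value. The following is a drastically simplified sketch of only that ratio-and-staging check, not the actual algorithm or app; the thresholds are assumptions based on the published algorithm's creatinine ratios.

```python
# A drastically simplified sketch of the ratio-and-staging check behind
# a creatinine-based AKI alert. This is not the actual NHS algorithm or
# the Streams app; the thresholds are assumptions based on the published
# algorithm's comparison of current creatinine against a baseline.
def aki_alert(current_creatinine: float, baseline_creatinine: float):
    """Return an assumed AKI stage (1-3) if the rise crosses a threshold, else None."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return None

# e.g. a jump from a baseline of 80 to 130 umol/L trips a stage-1 alert
print(aki_alert(130, 80))   # -> 1
```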
The app being provided to patients is called 'Streams', and although it offers encryption at both ends and the data won't be shared with Google, some people are still worried.
The deal was first announced back in May and resulted in an investigation by the Information Commissioner's Office. More recently, Moorfields Eye Hospital has confirmed it is looking into using DeepMind for similar purposes.
This pilot scheme may well be monetised in the future, if successful; but given the sheer ease of using an app over rifling through notes, and the computing power available to spot patterns, the result could save lives and millions of pounds.
Talking to the BBC, a spokesman from lobby group Med Confidential said: "Our concern is that Google gets data on every patient who has attended the hospital in the last five years and they're getting a big monthly report of data on every patient who was in the hospital."
But Professor Jane Dacre, president of the Royal College of Physicians, retorted that patient confidentiality was always the top priority.
"Whenever we develop new ways of using patient data, it is essential that safeguards are in place for appropriateness and confidentiality, but with these we should embrace the opportunity to improve healthcare quality and reduce the burdens of bureaucracy on clinicians so they can focus on their patients."
The Streams pilot programme will be rolled out early in the new year.

NHS partnership with Google's DeepMind AI tool raises privacy concerns Chris Merriman 23 November 2016 

How Predictive AI Will Change Shopping Amit Sharma NOVEMBER 18, 2016

Unlike experts and political commentators, Havas Cognitive’s Artificial Intelligence project, Eagle AI, was predicting a Trump win from the off.
Commissioned by ITV News to create an AI bot, Havas worked with IBM Watson (an AI created by IBM and licensed to several companies, including Havas) to analyse and understand human behaviour, and furthermore to predict it.

“Usually we do it with a marketing hat on,” explains Faye Raincock, European communications, Havas. “[We] try and predict people’s buying decisions and behaviour when it comes to relationships with brands; but what we’ve developed here is unique in that it’s the first time it’s been used to try and analyse voting behaviour.”
Using billions of data sources, Eagle AI analysed any social media posts and comments related to the election; but it also listened to all of the debates and speeches across the election campaign, as well as campaign videos. On top of this, it monitored any press coverage and its virality.
“What was particularly clever was that it was trained to understand emotion, sentiment and tone – and that was the first time that had ever been done relating to voting intention,” Raincock says. “Instead of traditional polling, it dug beneath the surface into how people actually felt, and tried to predict behaviour. ITV never asked us to predict the outcome of the election, largely because they have an existing relationship with traditional pollsters in the US. But as part of our job, to understand what mattered to people, we began looking at voting intent. We felt that was an important parameter to understand behaviour. And we found that the AI was predicting a Trump victory quite soon into its analysis.”
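Mechanically, this kind of analysis boils down to scoring each post for sentiment and aggregating the scores per candidate or issue; Watson-class systems do this with trained emotion and tone models over billions of documents. As a toy stand-in only, here is a tiny lexicon-based version, with an invented word list and invented posts:

```python
# A toy stand-in for sentiment aggregation over posts. Real systems use
# trained emotion and tone models over billions of documents; here a tiny
# invented lexicon scores a few invented posts per candidate.
LEXICON = {"love": 1, "great": 1, "win": 1, "hate": -1, "disaster": -1, "never": -1}

posts = [
    ("candidate_a", "I love the rallies, great energy"),
    ("candidate_a", "a total disaster, never again"),
    ("candidate_b", "hate the new plan"),
]

scores = {}
for candidate, text in posts:
    score = sum(LEXICON.get(word.strip(",.!?"), 0) for word in text.lower().split())
    scores.setdefault(candidate, []).append(score)

# Average sentiment per candidate as a crude "mood" signal.
for candidate, per_post in scores.items():
    print(candidate, sum(per_post) / len(per_post))
```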
While a great many of us may never have believed Trump was capable of succeeding in his presidential bid, the AI's unbiased analysis was monitoring countless attitudinal responses to the campaigns. Raincock explains that "when all the human beings, all the experts, all the pollsters were saying no way, it won't happen – the AI was predicting the opposite."

What sets the AI apart from traditional pollsters is the removal of the human element, and therefore the removal of bias. The AI has no preconceptions or prejudgements, and that creates a distinction. When traditional pollsters do phone polling, they ask specific questions (e.g. how do you plan to vote?), whereas the AI delved much deeper, analysing not what people were willing to admit to pollsters, but every post a person put out and their emotional connection to it.
“It stops being about how you will vote, and it starts being about how a certain candidate or election issue makes you feel. It extrapolates that into what it expects that feeling to predict. It’s a more nuanced understanding than a straightforward answer to a traditional poll,” explains Raincock. Essentially, by using machines rather than human analysis, you completely leave out the innate human bias.
“With pollsters and political commentators, there was a part of them that just couldn’t see how human beings could vote for someone who was saying and doing the things that Donald Trump was saying and doing. And so there was a bias that was connected to their already preconceived ideas as to what was acceptable and what might be deemed to be acceptable by the voting public. And we think that’s ultimately what it comes down to – it’s precisely because it isn’t a human being that it can predict human beings better.”
Executive producer at ITV, Alex Chandler, said, “ITV News was joined by a raft of experts to shed more light on the complexities of this election. Eagle AI’s job was to show for the first time on live TV, the impact of a vitriolic presidential race on the mood of the nation. Thanks to Havas, we were able to deep dive into millions of data points, in a way no programme has been able to do before.”

Havas believes that the creation of such a capable AI is the beginning of a new wave of consumer analysis. "When we looked at the analysis after the fact, we found the AI predicted 41/50 states and 4/5 swing states. There's always going to be a margin for error in these things, but we're seeing applications for marketing," explains Raincock.
“Our job now is to help people understand cognitive systems better. They start out thinking you’re talking about unstructured, big data challenges and even the dark web – all quite difficult concepts to grasp. In fact, it’s a lot simpler than that. It’s just taking a massive sample, bigger than would be humanly possible, and interpreting a brand’s position or power in a more meaningful way.”
To give an impression of the scale of data the AI can process: it would have taken 2,000 researchers 30 years to analyse what Havas analysed in around a month for the US election. With this scale of understanding applied to brand penetration and buying decisions, the applications for marketers could be astronomical.
“What we’re finding is that AI and cognitive can analyse on a truly massive scale and then predict behaviour more accurately than ever before.”

AI Analysis Predicted Trump Win Throughout Election Georgia Sanders 15/11/16

Trump's victory: the night a machine predicted humans better than the humans Lisa De Bonis November 09, 2016

Machine Learning and Artificial Intelligence in the Enterprise Timo Elliott ÜberTech

OpenAI releases Universe, a platform for training A.I.s to play games, use apps   

Artificial intelligence and the evolution of the fractal economy Nikolas Badminton 05 12 2016


Is Artificial Intelligence Taking Over Our Lives? DECEMBER 5, 2016


Artificial Intelligence Goes Mainstream 4 NOVEMBER 2016

WESTWORLD The Future of AI 6 NOVEMBER 2016

AI APOCALYPSE 12 FEBRUARY 2015
Ex Machina 3 NOVEMBER 2014

Terminator, Westworld, Ex Machina: The evils of A.I. 25 JANUARY 2015
METROPOLIS: Fritz Lang's visionary prophecy 16 MARCH 2015
OUR POSTHUMAN FUTURE 7 13 MARCH 2011
OUR POSTHUMAN FUTURE 6 11 FEBRUARY 2009
Stop the Robots: the anti-machine protest 19 MARCH 2015
ETHIC MACHINES 19 SEPTEMBER 2015
PORNO ROBOT 17 JUNE 2015
BROTHEL SEX ROBOT 18 AUGUST 2016
Frankenstein 2.0 3 SEPTEMBER 2016
2045: The Year Man Becomes Immortal 7 MARCH 2011
Digital Immortality 8 MAY 2012
RISE OF THE MACHINE: THE TURING TEST PASSED 9 JUNE 2014
Protecting mankind from intelligent machines 13 JANUARY 2015
Superintelligence threatens the human race 3 DECEMBER 2014
Killer Robots and Super Humans: the War of 2050 30 JULY 2015
STOP KILLER ROBOTS 6 MAY 2013
MIND THE GAP: New appeal against killer robots 13 APRIL 2015
UN: rules for killer robots 16 MAY 2014
Sociopathic Robots 22 JANUARY 2015

Genetically Modified Human Embryos 27 OCTOBER 2016
