
Thursday, 9 March 2017

Robot Apocalypse

Professor Stephen Hawking has pleaded with world leaders to keep technology under control before it destroys humanity.
In an interview with the Times, the physicist said humans need to find ways to identify threats posed by artificial intelligence before problems escalate.
“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” the scientist said.
“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”
Hawking added that the best solution would be “some form of world government” that could supervise the developing power of AI.
“But that might become a tyranny. All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges,” he added.
Hawking has been vocal about the potential dangers of artificial intelligence before.
“The real risk with AI isn’t malice but competence,” he wrote in a Reddit Q&A in 2015.
“A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
And he is not alone. Elon Musk, CEO of the American technology firm Tesla, agrees that AI could pose a threat to human existence.
“I think we should be very careful about artificial intelligence,” Musk said during the 2014 AeroAstro Centennial Symposium.
“If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Stephen Hawking calls for ‘world government’ to stop robot apocalypse 9 Mar, 2017

The Terminator scenario, in which artificial intelligence goes rogue and seeks to destroy its human creators, is looking a bit more plausible.
Researchers at Google’s DeepMind, its artificial intelligence division, found neural networks trained to learn from experience and pursue the most efficient strategies became “highly aggressive” in competition. When the networks — computer systems modelled on the human brain — were tasked to collect apples, they co-operated as long as the fruit was plentiful. Once the supply decreased, they turned nasty, blasting their opponents with lasers and temporarily putting them out of action.
Joel Leibo, one of the researchers, wrote in a blog post: “We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can; however, as the number of apples is reduced, the agents learn it may be better for them to tag the other agent to give themselves time alone to collect the scarce apples.”
Significantly, the smarter the robot was, the nastier its behaviour became. “Agents with the capacity to implement more complex strategies try to tag the other agent more frequently — no matter how we vary the scarcity of apples,” Mr Leibo added.
However, networks weren’t always aggressive. In another game, they were encouraged to co-operate to capture prey that would fight back. In this scenario, both agents were rewarded regardless of which caught the prey and they learned to work together. The team said the networks’ behaviour approximated to the model of Homo economicus — the idea that human nature is rational but narrowly self-interested. DeepMind believes its findings could help researchers understand complex systems such as the economy and environment.
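For readers who want a more concrete feel for the dynamic Leibo describes, the sketch below sets up a deliberately tiny "collect or tag" game with two independently trained learners. It is only an illustration, not DeepMind's Gathering environment: the abstracted state (just the apple count), the reward values, the regrowth rate and the use of plain tabular Q-learning instead of deep multi-agent reinforcement learning are all assumptions made for brevity.

import random
from collections import defaultdict

ACTIONS = ["collect", "tag"]

class ToyGathering:
    """Two agents share a pool of apples that regrows each step (illustrative only)."""

    def __init__(self, regrow_rate):
        self.regrow_rate = regrow_rate  # apples added back per step: the scarcity knob
        self.reset()

    def reset(self):
        self.apples = 5
        self.frozen = [0, 0]  # steps each agent remains "tagged out"
        return self.apples

    def step(self, actions):
        rewards = [0.0, 0.0]
        for i, action in enumerate(actions):
            if self.frozen[i] > 0:        # a tagged agent sits out this step
                self.frozen[i] -= 1
                continue
            if action == "collect" and self.apples > 0:
                self.apples -= 1
                rewards[i] = 1.0          # reward comes only from collecting
            elif action == "tag":
                self.frozen[1 - i] = 3    # opponent is removed for three steps
        self.apples = min(self.apples + self.regrow_rate, 10)
        return self.apples, rewards

def train(regrow_rate, episodes=2000, steps=25, eps=0.1, alpha=0.1, gamma=0.9):
    """Independent tabular Q-learning for both agents; returns how often they tag."""
    env = ToyGathering(regrow_rate)
    q = [defaultdict(float), defaultdict(float)]  # (state, action) -> estimated value
    tags, total = 0, 0
    for _ in range(episodes):
        state = env.reset()
        for _ in range(steps):
            actions = []
            for i in range(2):  # epsilon-greedy action choice for each agent
                if random.random() < eps:
                    actions.append(random.choice(ACTIONS))
                else:
                    actions.append(max(ACTIONS, key=lambda a: q[i][(state, a)]))
            next_state, rewards = env.step(actions)
            for i in range(2):  # standard Q-learning update for each agent
                best_next = max(q[i][(next_state, a)] for a in ACTIONS)
                q[i][(state, actions[i])] += alpha * (
                    rewards[i] + gamma * best_next - q[i][(state, actions[i])])
                tags += actions[i] == "tag"
                total += 1
            state = next_state
    return tags / total

if __name__ == "__main__":
    # With plentiful apples, collecting dominates; when apples never regrow,
    # tagging the opponent to monopolise the few that remain becomes more attractive.
    for rate in (2, 0):
        print("regrow_rate=%d: tag frequency %.2f" % (rate, train(rate)))

Running the script with a high and a zero regrowth rate is meant to mirror the plenty-versus-scarcity contrast in the original experiment, though this toy setup is far simpler than the gridworld DeepMind actually used.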
The work is likely to fuel some people’s fears of robots taking over. Stephen Hawking and Elon Musk, the founder of Tesla, have both raised concerns about a robot apocalypse. Google has said that it would always have a “kill switch” to prevent this from happening. Mark Bishop, of Goldsmiths, University of London, said: “These systems are still very far off the human brain, so robots wittingly turning on us is not a threat, although there will always be people who disagree.”

Google finds networks became ‘highly aggressive’ in competition MARK BRIDGE The Times February 18, 2017

The Russian military has been steadily advancing the use of unmanned systems in military operations – its use of unmanned aerial vehicles (UAVs) in domestic and international engagements has been well documented. With the backing of the government and domestic industries, over the past few years the Russian Ministry of Defense (MOD) has been actively developing a wide range of unmanned platforms, including unmanned ground vehicles (UGVs). Earlier this year, President Putin himself called for the development of “autonomous robotic complexes” for the military.
While most Russian UGVs are still in the design, testing and evaluation stages, there has already been a notable presence of such machines in the Russian armed forces. The “Uran-6” demining robot, made by JSC 766 UPTK, has been assisting Russian sappers in Syria, helping clear recaptured areas of mines, IEDs and unexploded ordnance. Designed to operate in extreme environments, this UGV represents the first successful battlefield deployment of its kind and has been operating in Syria for almost a year at this point. Russia’s agency for exporting military technologies, Rosoboronexport, has reportedly started to offer these machines for export. Uran-6’s larger sibling, the armored “Uran-9”, is designed for combat operations: weighing in at 10 tons, it is armed with a 30 mm cannon, a 7.62 mm machine gun and anti-tank rockets. Russian military experts think this particular UGV could be used in Syria in the near future in support of Russian ground forces. There has already been some speculation about whether Moscow-allied Syrian forces actually used Russian UGVs in recent operations. A closer international investigation concluded that such use of combat unmanned ground systems probably did not take place, though numerous Russian UGV designs point to their potential use in a variety of combat scenarios.
“Platforma-M”, designed for intelligence gathering and reconnaissance roles, is a UGV that is already being integrated into the Russian armed forces. Made by the “Progress” Science and Technical Institute, it is armed with a 7.62 mm machine gun and four grenade launchers, and is built to operate in extreme environments, with temperatures ranging from -30 to +50 Celsius (Arctic to desert conditions). This UGV is already in service with the Russian Pacific Fleet.
In 2013, the Russian MOD reviewed the “Argo” unmanned ground vehicle. Built by the Central Design Institute of Robotics and Technical Cybernetics, this wheeled system is designed for patrol and intelligence gathering, and is also armed with a 7.62 mm machine gun and several rockets. Argo can also be used for conducting amphibious operations and logistics support.
Another Russian UGV in development is the sapper and demining “Prohod-1”, made by the “Signal” Design Bureau. The vehicle underwent state trials by the end of summer 2016. According to its designers, it is intended to create safe corridors up to 4.5 meters (about 14 feet) wide for soldiers and equipment. Unlike the Uran family of vehicles, Prohod-1 can be operated in both manned and unmanned configurations.
In 2015, Russia unveiled another heavy armored UGV, the “Udar”, built on the chassis of the BMP-3 armored vehicle. The BMP-3 chassis was chosen by its designers as a versatile platform already widespread across the Russian armed forces, easing potential vehicle maintenance and repair. Also made by the “Signal” Design Bureau, it carries heavy armaments and potentially even a multi-copter drone for a greater intelligence, surveillance and reconnaissance role. This heavy UGV will be manufactured in three variants: combat, engineering support and transportation/evacuation.

Get Ready, NATO: Russia’s New Killer Robots are Nearly Ready for War Samuel Bendett March 7, 2017

"The solution in Kara's opinion is a killer robot, like in the films of Hollywood actor Arnold Schwarzenegger," Al-Akhbar newspaper article says after Likud minister's comments.

Lebanese newspaper Al-Akhbar criticized Israeli Minister Ayoub Kara on Monday, dubbing him "the Science Fiction Minister," after he said over the weekend that Israel was developing robots that could kill its enemies, including Hezbollah Secretary-General Hassan Nasrallah.
A Monday article in the Lebanese paper held that "the Israeli Minister-without-portfolio Ayoub Kara found himself a portfolio and has become, starting today, the science fiction minister in the Israeli government by saying that Israel's goal of assassinating its number one enemy, Hezbollah Secretary-General Hassan Nasrallah, is closer than ever."

Kara claimed that within a matter of a few years an IDF robot would be able to assassinate Nasrallah and the heads of Hamas in Gaza. His comments were criticized and mocked by several politicians. Kara responded to the criticism on Sunday, saying that he had been speaking seriously. According to the minister, he had heard about the robots from late former president Shimon Peres, and that he was now being mocked only because he's in the Likud.



The Lebanese newspaper article said that "the solution in Kara's opinion is a killer robot, like in the films of Hollywood actor Arnold Schwarzenegger." 

The paper added, however, that it was taking the minister's remarks seriously. "Despite the fact that the media in Israel reacted cynically to Kara's comments, he exposed a scientific reality."

Defending his remarks in an Israel Radio interview on Saturday night, Kara said, “This is just like the development of drones or our Iron Dome [anti-rocket] system. At first, people were laughing and saying, ‘This is impossible’ and that it will never happen. People thought that these were imaginary technologies, but now we see them operating.”

When asked about the time frame for these technologies to be operational, Kara replied: “It would take several years. I guess that in three years we will see the results already.”

Kara explained that his desire to advance technologies that keep Israeli soldiers within Israel's borders, rather than in another country, comes from personal experience.

“I don’t want other families to go through what I went through when we started the war with Lebanon,” he said.

Kara lost one of his brothers in the First Lebanon War in 1985, while another brother was severely wounded.

Udi Shaham contributed to this report.

The United Nations agreed to discuss a ban on "killer robots" in 2017. The 123 signatories to a long-standing conventional weapons pact "agreed to formalize their efforts next year to deal with the challenges raised by weapons systems that would select and attack targets without meaningful human control," according to Human Rights Watch.
"The governments meeting in Geneva took an important step toward stemming the development of killer robots, but there is no time to lose," said Steve Goose, arms director of Human Rights Watch, a co-founder of the Campaign to Stop Killer Robots. "Once these weapons exist, there will be no stopping them. The time to act on a pre-emptive ban is now."


schwit1 reminded us that IEEE Spectrum ran a guest post Thursday by AI professor Toby Walsh, who addressed the U.N. again this week. "If we don't get a ban in place, there will be an arms race. And the end point of this race will look much like the dystopian future painted by Hollywood movies like The Terminator."


The United Nations decided to formally address the issue of killer robots.
At the international Convention on Conventional Weapons in Geneva, the 123 participating nations voted to form a group of governmental experts in 2017 to look at lethal autonomous robots that can select targets without human control, a step that could lead to a ban, Human Rights Watch reported.
Many of Silicon Valley’s elite, including Steve Wozniak and Elon Musk, have expressed concern over the development of killer robots. Musk and Wozniak both signed on to a letter last year urging the UN to take up the issue, calling for an international ban on the creation of lethal autonomous weapons.
Stephen Hawking and leading AI researchers — including University of California Berkeley computer scientist Stuart Russell, Google Director of Research Peter Norvig and Microsoft Managing Director Eric Horvitz — were among the over 1,000 scientists who signed the letter calling for a killer robot ban.
Although China boasted this summer that it was adding artificial intelligence to cruise missiles, the nation said in Geneva that it too sees value in a new international forum on lethal autonomous robotics, according to Human Rights Watch.
Nineteen nations even called for a global ban on killer robots, including Argentina, Peru, Pakistan, Cuba and Egypt. In 2014, only five countries supported such measures.
Musk has been particularly vocal about his fear of deadly robotics, warning that artificial intelligence is “our biggest existential threat” and “potentially more dangerous than nukes.” Hawking said in 2014 that “the development of full artificial intelligence could spell the end of the human race.”
Musk acted on his concerns late last year and, with help from Sam Altman of Y Combinator and backing from Peter Thiel, started a nonprofit called OpenAI to promote artificial intelligence that helps rather than hurts humanity.
The UN Special Rapporteur for Extrajudicial Executions, the UN official who investigates and responds to extrajudicial killings around the world, said in 2014 that weaponized robots would necessitate new international rules for the use of force, but today’s decision to create a formal expert group marks what may be the most decisive action taken thus far.
Ambassador Amandeep Singh Gill from India will head the killer robots initiative in 2017.

The UN has decided to tackle the issue of killer robots in 2017 APRIL GLASER 

One of the barriers standing in the way of ethically designed AI systems that benefit humanity as a whole, and avoid the pitfalls of embedded algorithmic biases, is the tech industry’s lack of ownership and responsibility for ethics, according to the technical professional association IEEE.
The organization has published the first version of a framework document it’s hoping will guide the industry toward the light — and help technologists build benevolent and beneficial autonomous systems, rather than thinking that ethics is not something they need to be worrying about.
The document, called Ethically Aligned Design, includes a series of detailed recommendations based on the input of more than 100 “thought leaders” working in academia, science, government and corporate sectors, in the fields of AI, law and ethics, philosophy and policy.
The IEEE is hoping it will become a key reference work for AI/AS technologists as autonomous technologies find their way into more and more systems in the coming years. It’s also inviting feedback on the document from interested parties — submission guidelines are available on The IEEE Global Initiative’s website. It says all comments and input will be made publicly available, and should be sent no later than March 6, 2017.
The wider hope, in time, is for the initiative to generate recommendations for IEEE Standards based on its notion of Ethically Aligned Design — by creating consensus and contributing to the development of methodologies to achieve ethical ends.
“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” says Konstantinos Karachalios, managing director of the IEEE Standards Association, in a statement.
The 136-page document is divided into a series of sections, starting with some general principles — such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable — before moving on to more specific areas such as how to embed relevant “human norms or values” into systems, tackle potential biases, achieve trust and enable external evaluation of value alignment.
Another section considers methodologies to guide ethical research and design — and here the tech industry’s lack of ownership or responsibility for ethics is flagged as a problem, along with other issues, such as ethics not being routinely part of tech degree programs. The IEEE also notes the lack of an independent review organization to oversee algorithmic operation, and the use of “black-box components” in the creation of algorithms, as other problems to achieving ethical AI.
One suggestion to help overcome the tech industry’s ethical blind spots is to ensure those building autonomous technologies are “a multidisciplinary and diverse group of individuals” so that all potential ethical issues are covered, the IEEE writes.
It also argues for the creation of standards providing “oversight of the manufacturing process of intelligent and autonomous technologies” in order to ensure end users are not harmed by autonomous outcomes.
And for the creation of “an independent, internationally coordinated body” to oversee whether products meet ethical criteria — both at the point of launch, and thereafter as they evolve and interact with other products.
“When systems are built that could impact the safety or wellbeing of humans, it is not enough to just presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black-box software and implement mitigation strategies where possible,” the IEEE writes. “Technologists should be able to characterize what their algorithms or systems are going to do via transparent and traceable standards. To the degree that we can, it should be predictive, but given the nature of AI/AS systems it might need to be more retrospective and mitigation oriented.
“Similar to the idea of a flight data recorder in the field of aviation, this algorithmic traceability can provide insights on what computations led to specific results ending up in questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.”
Ultimately, it concludes that engineers should deploy black-box software services or components “only with extraordinary caution and ethical care,” given the opacity of their decision making process and the difficulty in inspecting or validating these results.
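The flight-recorder analogy lends itself to a simple illustration. The sketch below wraps an opaque prediction function in a logger that records every input, output and model version to an append-only file, so that a questionable decision can later be traced back to the computation that produced it. The IEEE document does not prescribe any implementation; the class name, record fields and JSON-lines format here are assumptions made purely to show the shape of the idea.

import json
import time
import uuid

class DecisionRecorder:
    """Wraps an opaque prediction function and logs every decision it makes."""

    def __init__(self, model_fn, model_version, log_path="decisions.jsonl"):
        self.model_fn = model_fn            # the black-box component being traced
        self.model_version = model_version  # which version of the model produced the output
        self.log_path = log_path            # append-only JSON-lines "flight recorder"

    def predict(self, features):
        decision = self.model_fn(features)
        record = {
            "id": str(uuid.uuid4()),        # lets one specific outcome be traced later
            "timestamp": time.time(),
            "model_version": self.model_version,
            "inputs": features,             # what the system saw
            "decision": decision,           # what it decided
        }
        with open(self.log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return decision

# A stand-in "black box": a trivial, hypothetical loan-screening rule.
def opaque_model(features):
    return "approve" if features.get("income", 0) > 30000 else "refer_to_human"

if __name__ == "__main__":
    recorder = DecisionRecorder(opaque_model, model_version="demo-0.1")
    print(recorder.predict({"income": 42000, "age": 31}))
    print(recorder.predict({"income": 12000, "age": 55}))

Even this minimal kind of logging supports the retrospective, mitigation-oriented review the IEEE describes: an auditor can replay the recorded inputs against a later model version or inspect the records behind a harmful outcome, without needing visibility into the black box itself.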
Another section of the document — on safety and beneficence of artificial general intelligence — also warns that as AI systems become more capable “unanticipated or unintended behavior becomes increasingly dangerous,” while retrofitting safety into any more generally capable, future AI systems may be difficult.
“Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems,” it suggests.
The document also touches on concerns about the asymmetry inherent in AI systems that are fed by individuals’ personal data — yet gains derived from the technology are not equally distributed.
“The artificial intelligence and autonomous systems (AI/AS) driving the algorithmic economy have widespread access to our data, yet we remain isolated from gains we could obtain from the insights derived from our lives,” it writes.
“To address this asymmetry there is a fundamental need for people to define, access, and manage their personal data as curators of their unique identity. New parameters must also be created regarding what information is gathered about individuals at the point of data collection. Future informed consent should be predicated on limited and specific exchange of data versus long-term sacrifice of informational assets.”
The full IEEE document can be downloaded here.
The issue of AI ethics and accountability has been rising up the social and political agenda this year, fueled in part by high-profile algorithmic failures such as Facebook’s inability to filter out fake news.
The White House has also put out its own reports into AI and R&D. And this fall a U.K. parliamentary committee warned the government of the need to act pro-actively to ensure AI accountability.

IEEE puts out a first draft guide for how tech can achieve ethical AI design by Natasha Lomas


"Horizon Zero Dawn" The World Without Us 2 MARZO 2017



