
Wednesday, 30 December 2015

AI Apocalypse 2

Perhaps the most important appointment of 2016 will be the trio of research fellows currently being hired by the Future of Humanity Institute (FHI). The three hires, it is hoped, will help avert the artificial intelligence (AI) catastrophe that many believe could be the biggest threat to the human race.
The search is a timely one. Those who have warned of the dangers of AI include Stephen Hawking, Bill Gates and Elon Musk. Musk, the inventor-cum-business magnate behind enterprises from Tesla Motors’ electric cars to SpaceX’s reusable rockets, is among the FHI’s recent donors. Their thinking is that once artificial intelligences are cleverer than humans - which is 80 per cent likely to happen in the next 100 years, according to FHI estimates - we could face a future in which the interests of a machine do not include human well-being.
Take the example of an AI whose instruction is to make as many paperclips as possible. If it has the self-improvement capacity to follow this goal to its logical extent, it will harvest every iron atom on the planet as it seeks to maximise the number of paperclips in existence. The sensationalism-averse researchers of the FHI will sigh at the inclusion of this point, but in this scenario the countless iron atoms in human bodies are fair game: this is a much more likely scenario than the luridly anthropomorphised Terminator stereotype.
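To make the logic concrete, here is a toy sketch in Python (not from the article; the resource names are invented for illustration) of why a pure maximiser harvests everything: the objective counts only paperclips, so any plan that omits an iron source scores strictly lower.

from itertools import chain, combinations

# Hypothetical stocks of harvestable iron, in arbitrary units.
RESOURCES = {"scrap iron": 10, "bridges": 4, "everything else containing iron": 2}

def powerset(items):
    """All possible harvesting plans: every subset of the available sources."""
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def paperclips_from(plan):
    """The objective counts only paperclips; human value never enters the score."""
    return sum(RESOURCES[source] for source in plan)

# The optimal plan is always "harvest everything", because leaving any
# iron source out strictly lowers the objective.
best_plan = max(powerset(RESOURCES), key=paperclips_from)
print(best_plan)  # -> ('scrap iron', 'bridges', 'everything else containing iron')

The point the FHI researchers make is that nothing in the objective itself distinguishes bridges from bodies; any constraint has to be designed in.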
With such scenarios being bandied about, the FHI has now advertised to thousands of scientists and programmers in the hope of finding three research fellows to form a new Strategic Artificial Intelligence Research Centre. The trio will join the researchers of the FHI, a small but growing offshoot of the University of Oxford which, since 2005, has made it its main purpose to predict and prevent large-scale risks to human civilization.
If there were such a thing as a poster boy for research into the risk of computers surpassing humans, it would be the institute’s founding director Professor Nick Bostrom. The inscrutable Swedish professor of philosophy earlier this year addressed a UN committee on the risks posed by AI and other man-made threats, and has twice been named in Foreign Policy magazine’s Top 100 Global Thinkers list.
“We’ve been scouring the world for hidden talent in this area,” says Bostrom, a strong-jawed, grey-eyed man of 42 who is standing in the Institute’s cramped kitchen as he assembles his daily vegetable smoothie.
The FHI itself feels like the School of Athens taking place in an IT support office. There are 11 full-time researchers here, experts in everything from AI to nanotechnology and population ethics; a team of big brains now in the market for those three more specialists. “Our last three lunch conversations have been on issues ranging from the distribution of energy in the Universe to totalitarianism and inverse reinforcement learning,” recalls Dr Niel Bowerman, the boyishly cheerful Assistant Director of the institute, whose ebullience belies a formidable CV that includes climate change research for the World Bank while he was still a Master’s student.
While Bostrom tends to attract the limelight, behind every door there is an influential thinker of some stripe. Take, for example, Dr Toby Ord, the moral philosopher who attracted attention in 2009 when he began giving away everything he earned over £20,000 to statistically effective charities. His day job for the FHI is to investigate the “deep future”: what humanity might look like thousands or millions of years from today, and how the universe as we know it could sustain a vast and advanced vision of humanity.
Back to the kitchen, where Bostrom has left his cabbage, cauliflower, carrot and lime on the sideboard. “If we are right that this (AI) is the most important area to be working in, then this would be an almost unique place in the world to be working, because you’d be working on the most important problem in the biggest group working on these issues.”
However, it may not all be bad news. The researchers will also be examining the benefits of AI as well as the risks; in short, why we shouldn’t cast into the flames everything carrying so much as a microchip. “Today, benefits are evident in self-driving cars and in supply-chain improvements driving down prices,” says Dr Bowerman. “In the future these artificial intelligence algorithms could create the potential for breakthroughs driven by improved understandings in fields as diverse as genetics, the environment and macroeconomics.”
So what will Bostrom be looking for in the FHI’s new hires? Applicants are being asked to submit a writing sample and research proposal, and those shortlisted will be interviewed by an FHI panel led by Bostrom.
“The most important thing is brainpower and an ability to engage with questions, even when there is no clear pre-defined recipe or method for how you go about doing that.
“To some extent,” he says, discouragingly, “you know it when you see it. These people are hard to find.”
Applications close on January 6. 

Wanted: three boffins to save the world from the 'AI apocalypse', by Tom Ough, 29 December 2015

Over the past few years, deep-learning artificial-intelligence algorithms have made tremendous leaps. They can now translate words and sentences in real time. They can recognize faces (and whom they belong to). And they can even order Chinese delivery for you -- as is the case with Facebook's (NASDAQ:FB) experimental virtual assistant, M.
You'd think that computer scientists would want to keep their software under wraps, but in fact just the opposite is happening. The foremost AI researchers are pushing to make all of their developments open source, available for anyone to work with. Most recently, Facebook open-sourced the design of its latest computer server built to run its deep-learning algorithms, with plans to add the design to its Open Compute Project. This development comes just a month after Google, an Alphabet (NASDAQ:GOOG) (NASDAQ:GOOGL) company, released the source code for TensorFlow, its AI used for things such as photo search, voice recognition, translation, and more.
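To see what that release means in practice, here is a minimal sketch (an illustration, not code from either company) using the graph-and-session API that TensorFlow shipped with in late 2015; anyone could download the source and run something like this the day it was published.

import tensorflow as tf  # open-sourced by Google in November 2015

# Build a computation graph for a single linear unit, y = xW + b.
x = tf.placeholder(tf.float32, shape=[None, 3])  # a batch of 3-feature inputs
W = tf.Variable(tf.zeros([3, 1]))                # weights, to be learned in training
b = tf.Variable(tf.zeros([1]))                   # bias
y = tf.matmul(x, W) + b

# Execute the graph in a session -- the core pattern of the original API.
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

Every project built this way is a project built on Google's platform, which is exactly the dynamic the next section describes.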
So why are Google and Facebook both giving away access to their AI designs?
The Android of AI

Google's decision to license Android via open source led to it becoming the most popular operating system in the world. Google has benefited greatly from the rapid growth in Internet-connected devices running Android and searching on Google. It also has seen an extraordinary benefit from controlling the platform, in particular the Google Play Store and the other default apps included with Google's stock version of Android.

The Google Play app store is bringing in around $1 billion gross revenue per month for Google thanks to the proliferation of Android. Likewise, Google now sees more searches on mobile devices than on desktops and laptops. Google is benefiting substantially from controlling the ecosystem around Android.
The strategy appears similar with AI. By open-sourcing hardware designs and algorithms, both Facebook and Google stand to see substantial benefits. They'll be able to easily identify new talent interested in working on their algorithms and hardware by seeing who contributes to the code base. Additionally, they'll increase the prevalence of deep-learning algorithms, providing more data to train the algorithms and make them better, while decreasing the cost of hardware designs. Finally, and most importantly, they'll be able to control the platform that future algorithms are built on.
Scaling for free

Google didn't have to build the hardware to scale Android and its Google Play store, because it gave other people the tools to do that. Likewise, Google and Facebook don't have to think of every possible use case for their deep-learning algorithms, because they've given the tools to other engineers. Additionally, Facebook doesn't have to order a ton of "Big Sur" servers -- its latest design -- to lower the price. Other people will order the hardware as well, allowing manufacturers to scale up.

Scaling is particularly important for deep-learning AI algorithms. They feed on data, and while Facebook and Google have tons of data, their stores are only a fraction of the potential data available to train these AI algorithms. By open-sourcing their algorithms, Facebook and Google get all the benefits of training algorithms using other people's data and use cases without having to pay for it.
The future of AI

Hardware is just as important to the future of deep-learning algorithms as software. The better the hardware, the more effective the algorithms become. Facebook's Big Sur server is twice as fast as its predecessor, which means it can train AI algorithms twice as fast, or train neural networks twice as large.

Facebook's decision to open-source Big Sur is just as notable as its decision to open-source its software through Torch, or Google's decision to open-source TensorFlow. It also increases Facebook's chances of controlling at least part of the advancement of AI, since it licenses the hardware design on which the neural networks run.
While Facebook and Google are working hard to create the next great development in AI, their decisions to open-source their work increase the likelihood that they'll at least have a part in it if they don't create it in-house. They can then feed those breakthroughs back into their own products, such as Search and News Feed.

AI APOCALYPSE 12 FEBRUARY 2015


The Road to Superintelligence 29 JANUARY 2015





