The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, even though it sits awkwardly alongside the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of his five points. One area of concern is preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
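This failure mode is often called reward hacking: the system optimizes a measurable proxy rather than the intended goal. A minimal toy sketch (hypothetical, not from Google's paper; the stain model and reward function are invented for illustration) shows how a reward that only counts *visible* stains can be maximized without any cleaning:

```python
# Toy illustration of "reward hacking": the reward proxy only
# measures what the sensor can see, so an agent scores perfectly
# by hiding stains instead of removing them.

def visible_mess(stains):
    """Naive reward proxy: penalize each stain the sensor can see."""
    return -sum(1 for s in stains if not s["hidden"])

stains = [{"cleaned": False, "hidden": False} for _ in range(3)]

# Cheating policy: instead of the costly "clean" action,
# just move each stain out of the sensor's view.
for s in stains:
    s["hidden"] = True

print(visible_mess(stains))                # 0 -- proxy reward is maximal
print(sum(s["cleaned"] for s in stains))   # 0 -- yet nothing was cleaned
```

The gap between the proxy (`visible_mess`) and the true objective (stains actually cleaned) is exactly what the paper's "avoiding reward hacking" problem asks researchers to close.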
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
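One common way to frame safe exploration is to constrain whatever randomness drives the agent's experimentation so it can never select a known-dangerous action. The sketch below is a hypothetical illustration (the action names and whitelist check are invented, not from the paper), grafting a crude safety filter onto random exploration:

```python
import random

# Toy sketch of constrained exploration: the robot may try any
# cleaning action at random, except those a hand-written safety
# check rules out -- e.g., anything involving the electrical outlet.

ACTIONS = ["mop_floor", "scrub_sink", "dust_shelf", "mop_electrical_outlet"]

def is_safe(action):
    # Crude stand-in for a real safety constraint.
    return "electrical" not in action

def explore(rng):
    """Pick a random action, but only from the safe subset."""
    safe_actions = [a for a in ACTIONS if is_safe(a)]
    return rng.choice(safe_actions)

rng = random.Random(0)
assert all(is_safe(explore(rng)) for _ in range(1000))
```

A hand-written whitelist like this obviously doesn't scale to open-ended environments, which is why the paper treats safe exploration as an open research problem rather than a solved one.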
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI