Frankenstein’s paperclips
AS DOOMSDAY SCENARIOS go, it does not sound terribly frightening. The “paperclip maximiser” is a thought experiment proposed by Nick Bostrom, a philosopher at Oxford University. Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
The idea of machines that turn on their creators is not new, going back to Mary Shelley’s “Frankenstein” (1818) and earlier; nor is the concept of an AI undergoing an “intelligence explosion” through repeated self-improvement, which was first suggested in 1965. But recent progress in AI has caused renewed concern, and Mr Bostrom has become the best-known proponent of the dangers of advanced AI or, as he prefers to call it, “superintelligence”, the title of his bestselling book.
His interest in AI grew out of his analysis of existential threats to humanity. Unlike pandemic disease, an asteroid strike or a supervolcano, the emergence of superintelligence is something that mankind has some control over. Mr Bostrom’s book prompted Elon Musk to declare that AI is “potentially more dangerous than nukes”. Worries about its safety have also been expressed by Stephen Hawking, a physicist, and Lord Rees, a former head of the Royal Society, Britain’s foremost scientific body. All three of them, and many others in the AI community, signed an open letter calling for research to ensure that AI systems are “robust and beneficial”—ie, do not turn evil. Few would disagree that AI needs to be developed in ways that benefit humanity, but agreement on how to go about it is harder to reach.
Mr Musk thinks openness is the key. He was one of the co-founders in December 2015 of OpenAI, a new research institute with more than $1 billion in funding that will carry out AI research and make all its results public. “We think AI is going to have a massive effect on the future of civilisation, and we’re trying to take the set of actions that will steer that to a good future,” he says. In his view, AI should be as widely distributed as possible. Rogue AIs in science fiction, such as HAL 9000 in “2001: A Space Odyssey” and SKYNET in the “Terminator” films, are big, centralised machines, which is what makes them so dangerous when they turn evil. A more distributed approach will ensure that the benefits of AI are available to everyone, and the consequences less severe if an AI goes bad, Mr Musk argues.
Not everyone agrees with this. Some claim that Mr Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”. For the time being, Google, Facebook and other firms are making much of their AI source code and research freely available in any case. And Mr Bostrom is not sure that making AI technology as widely available as possible is necessarily a good thing. In a recent paper he notes that the existence of multiple AIs “does not guarantee that they will act in the interests of humans or remain under human control”, and that proliferation could make the technology harder to control and regulate.
Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind. “I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr Bostrom is right to highlight the question of AI motivation. How to specify the right goals and values for AIs, and ensure they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.) A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not immediate concern. The meeting’s venue was significant, because biologists met there in 1975 to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.
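To make the "off switch" question concrete, here is a minimal sketch; it is purely illustrative and is not the method in the DeepMind/FHI paper. A toy tabular Q-learning agent collects "paperclips" at the end of a short corridor while an overseer occasionally overrides its chosen action and holds it still. The corridor size, reward, and interruption probability below are assumptions chosen for the example; the open research question is how to design the learning rule so that the agent neither resists nor comes to rely on being interrupted.

```python
# Illustrative sketch only: a naive Q-learning agent with an external "off switch".
# Not the algorithm from the DeepMind/FHI paper on interruptible agents.
import random

N_STATES = 5          # positions 0..4; the "paperclip dispenser" sits at position 4
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    for t in range(20):
        # Epsilon-greedy choice of action.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        # The overseer's "off switch": with some probability the chosen action
        # is overridden and the agent is held in place for this step.
        interrupted = random.random() < 0.2
        move = 0 if interrupted else action

        next_state = min(max(state + move, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # one "paperclip" per visit

        # Naive tabular Q-learning update: the interruption is folded into the
        # experience like any other event. Safe-interruptibility research asks how
        # to modify this rule so the agent neither fights nor exploits shutdowns.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# Greedy policy learned at each position (+1 means move towards the dispenser).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```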