Frankenstein’s paperclips
AS DOOMSDAY SCENARIOS go, it does not sound terribly frightening. The “paperclip maximiser” is a thought experiment proposed by Nick Bostrom, a philosopher at Oxford University. Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
The idea of machines that turn on their creators is not new, going back to Mary Shelley’s “Frankenstein” (1818) and earlier; nor is the concept of an AI undergoing an “intelligence explosion” through repeated self-improvement, which was first suggested in 1965. But recent progress in AI has caused renewed concern, and Mr Bostrom has become the best-known proponent of the dangers of advanced AI or, as he prefers to call it, “superintelligence”, the title of his bestselling book.
His interest in AI grew out of his analysis of existential threats to humanity. Unlike pandemic disease, an asteroid strike or a supervolcano, the emergence of superintelligence is something that mankind has some control over. Mr Bostrom’s book prompted Elon Musk to declare that AI is “potentially more dangerous than nukes”. Worries about its safety have also been expressed by Stephen Hawking, a physicist, and Lord Rees, a former head of the Royal Society, Britain’s foremost scientific body. All three of them, and many others in the AI community, signed an open letter calling for research to ensure that AI systems are “robust and beneficial”—ie, do not turn evil. Few would disagree that AI needs to be developed in ways that benefit humanity, but agreement on how to go about it is harder to reach.
Mr Musk thinks openness is the key. In December 2015 he co-founded OpenAI, a new research institute with more than $1 billion in funding that will carry out AI research and make all its results public. “We think AI is going to have a massive effect on the future of civilisation, and we’re trying to take the set of actions that will steer that to a good future,” he says. In his view, AI should be as widely distributed as possible. Rogue AIs in science fiction, such as HAL 9000 in “2001: A Space Odyssey” and Skynet in the “Terminator” films, are big, centralised machines, which is what makes them so dangerous when they turn evil. A more distributed approach will ensure that the benefits of AI are available to everyone, and the consequences less severe if an AI goes bad, Mr Musk argues.
Not everyone agrees with this. Some claim that Mr Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”. For the time being, Google, Facebook and other firms are making much of their AI source code and research freely available in any case. And Mr Bostrom is not sure that making AI technology as widely available as possible is necessarily a good thing. In a recent paper he notes that the existence of multiple AIs “does not guarantee that they will act in the interests of humans or remain under human control”, and that proliferation could make the technology harder to control and regulate.
Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind. “I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr Bostrom is right to highlight the question of AI motivation. How to specify the right goals and values for AIs, and ensure they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.) A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not immediate concern. The meeting’s venue was significant, because biologists met there in 1975 to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.