Humanity’s last invention and our uncertain future

“With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”

A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose “extinction-level” risks to our species, from biotechnology to artificial intelligence.

In 1965, Irving John ‘Jack’ Good sat down and wrote a paper called ‘Speculations Concerning the First Ultraintelligent Machine’ for the journal Advances in Computers. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, predicted that in the near future an ultra-intelligent machine would be built.

This machine, he continued, would be the “last invention” that mankind need ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the “survival of man” depended on the construction of this ultra-intelligent machine.

Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm-shifting to the most personally intimate. Technology advances, for the most part, unchecked and unabated.

While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will secure the survival of man, as Good contended, or whether it is in fact the very thing that will end us.

Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio- and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.

“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty; no one is presently in a position to do that. But that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”

Read more . . .

via University of Cambridge
 
