Age of Wonder: Superintelligence and existential risks

I spent the weekend in Eindhoven for Age of Wonder, a festival which turned out to be even more exciting and engaging than its name promised. I'll get back with images and posts later but right now I felt like blogging my notes from Nick Bostrom's keynote about Superintelligence. Bostrom is a Professor in the Faculty of Philosophy at Oxford University and the director of the Future of Humanity Institute. He talked about the ultra-fast pace of innovation, hazardous future technologies, and an artificial intelligence that will one day surpass human intelligence and might even take over our future.


HAL 9000 vs Dave in Stanley Kubrick’s film 2001: A Space Odyssey

Bostrom is worried about the way humanity is rushing forward. The time between having an idea and developing it is getting ever shorter, which leaves less room to reflect on the safety of innovations. Bostrom believes that humans cannot see the existential danger this entails. If the future is to be a place where we really want to live, then we will have to think in different and better-targeted ways about ourselves and about technological developments.

Bostrom's talk started on a high and slightly worrying note with a few words on existential risk. An existential risk is one that endangers the survival of intelligent life on Earth or that threatens to drastically curtail its potential for development. So far, humanity has survived the worst natural or man-made catastrophes (genocide, tsunami, nuclear explosion, etc.) but an existential catastrophe would be so devastating that it would ruin the entire future of mankind. An analogy on an individual scale would be finding yourself facing a life sentence in prison or a coma you never wake up from.


Slide from Nick Bostrom's presentation: Negligible to existential catastrophes

So far we've survived all natural catastrophes, but we need to beware of anthropogenic risks. New technologies haven't yet managed to spread doom. Nuclear weapons, for example, are very destructive, but they are also very difficult to make. Now imagine if a destructive technology were easy to make in your garage: it could end up in the hands of a lunatic who plots the end of human civilization.

Potentially hazardous future technologies such as machine intelligence, synthetic biology, molecular technology, totalitarianism-enabling technologies, geoengineering, human modification, etc. had not been invented 100 years ago. Imagine what might emerge within the next 100 years.

So if you care about the future of human civilization and your goal is to do some good, you need to look at how to reduce existential risk. You would need to influence when and by whom technologies are developed: speed up the development of 'good' technologies and slow down the development of others, designer pathogens for example.

How does this play out with the rise of machine intelligence, which could result in Super Intelligence?

One day, machine intelligence will radically surpass biological intelligence (even if the latter is enhanced, through genetic selection for example). Experts find it difficult to agree on when exactly machines will reach the level of human intelligence, but they estimate that there is a 90% probability that human-level artificial intelligence will have arisen by around 2075. Once machine intelligence roughly matches human general intelligence, a machine intelligence takeoff could take place extremely fast.

But how can you control a Super Intelligent machine? What will happen when we develop something that radically surpasses our intelligence and might have the capability to shape our future? Any plan we might have to control the Super Intelligence will probably be easily thwarted by it. Is there any gatekeeper who could make sure that the artificial intelligence does not do anything detrimental to us? The Super Intelligence would probably be capable of figuring out how to escape any confinement we might impose upon it. It might even kill us to prevent us from interfering with its plans. We should also think about the ultimate goal a Super Intelligence might have. What if its goal is to dedicate all the resources of the universe to producing as many paper clips as possible?


Slide from Nick Bostrom's presentation: what Super Intelligence can do and how it can achieve its objectives

How can we build an artificial Super Intelligence with human-friendly values? How can we control it and avoid some existential risks that might arise down the road?

The forms of artificial intelligence we are familiar with each solve one specific problem: speech recognition, face recognition, route-finding software, spam filters, search engines, etc. A general artificial intelligence will be able to take on a wide variety of tasks and goals. How can we make sure that it learns humanly meaningful values?

Nick Bostrom's new book Superintelligence: Paths, Dangers, Strategies will be published by Oxford University Press in June 2014. (You can pre-order it on Amazon USA and UK.)