EP05: Artificial Intelligence
Description
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

Researchers have been working seriously on creating human-level intelligence in machines since at least the 1940s, and starting around 2006 that wild dream became truly feasible. Around that year, machine learning took a huge leap forward with the resurgence of artificial neural nets: algorithms that are not only capable of learning, but can learn on their own. The rise of neural nets signals a big and sudden move down a dangerous path: machines that can learn on their own may also learn to improve themselves. And when a machine can improve itself, it can rewrite its own code, make improvements to its structure, and get better at getting better. At some point, a self-improving machine will surpass the level of human intelligence and become superintelligent. At that point, it will be capable of taking over everything from our cellular networks to the global internet infrastructure.

It’s about here that the existential risk artificial intelligence poses to humanity comes in. We have no reason to believe that a machine we create will be friendly toward us, or even consider us at all. A superintelligent machine in control of the world we’d built, with no capacity to empathize with humans, could lead directly to our extinction in all manner of creative ways, from repurposing our atoms into new materials for its expanding network to plunging us into a resource conflict we would surely lose. There are some people working to head off catastrophe-by-AI, but with each new algorithm we release that is capable of improving itself, a new possible existential threat is set loose.

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
More Episodes
There’s one last thing. Maybe the reason why we don’t see other intelligent life, maybe the reason we are in the astoundingly unique position of having to save the future of the human race, is because we are simulated human beings. It would explain a lot. (Original score by Point...
Published 12/05/18
Josh explains that to survive the next century or two – to navigate our existential threats – all of us will have to become informed and involved. It will take a movement that gets behind science done right to make it through the Great Filter. (Original score by Point Lobo.) Interviewees: Toby...
Published 11/30/18