Episodes
There’s one last thing. Maybe the reason why we don’t see other intelligent life, maybe the reason we are in the astoundingly unique position of having to save the future of the human race, is because we are simulated human beings. It would explain a lot. (Original score by Point Lobo.) Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; Anders Sandberg, Oxford University philosopher; Seth Shostak, director of SETI
Published 12/05/18
Josh explains that to survive the next century or two – to navigate our existential threats – all of us will have to become informed and involved. It will take a movement that gets behind science done right to make it through the Great Filter. (Original score by Point Lobo.) Interviewees: Toby Ord, Oxford University philosopher; Sebastian Farquhar, Oxford University philosopher
Published 11/30/18
We humans are our own worst enemies when it comes to what it will take to deal with existential risks. We are loaded with cognitive biases, can’t coordinate on a global scale, and see future generations as freeloaders. Seriously, are we going to survive? (Original score by Point Lobo.) Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; Toby Ord, Oxford University philosopher; Anders Sandberg, Oxford University philosopher; Sebastian...
Published 11/28/18
Surprisingly, the field of particle physics poses a handful of existential threats, not just for us humans, but for everything alive on Earth – and in some cases, the entire universe. Poking around on the frontier of scientific understanding has its risks. (Original score by Point Lobo.) Interviewees: Don Lincoln, Fermi National Laboratory senior experimental particle physicist; Ben Shlaer, University of Auckland cosmologist; Daniel Whiteson, University of California,...
Published 11/23/18
Natural viruses and bacteria can be deadly enough; the 1918 Spanish Flu killed 50 million people in four months. But risky new research, carried out in an unknown number of labs around the world, is creating even more dangerous human-made pathogens. (Original score by Point Lobo.) Interviewees: Beth Willis, former chair, Containment Laboratory Community Advisory Committee; Dr. Lynn Klotz, senior fellow at the Center for Arms Control and Non-Proliferation.
Published 11/21/18
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.) Researchers have been working seriously on creating human-level intelligence in machines since at least the 1940s, and starting around 2006 that wild dream became truly feasible. Around that year, machine learning...
Published 11/16/18
Humans have faced existential risks since our species was born. Because we are Earthbound, what happens to Earth happens to us. Josh points out that there’s a lot that can happen to Earth – like gamma ray bursts, supernovae, and a runaway greenhouse effect. (Original score by Point Lobo.) Because humanity is an Earthbound species – we have no way to get ourselves off of Earth and live elsewhere in the universe quite yet – if something terrible happens to Earth, it happens to us as well. Because...
Published 11/14/18
Humanity could have a future billions of years long – or we might not make it past the next century. If we have a trip through the Great Filter ahead of us, then we appear to be entering it now. It looks like existential risks will be our filter. (Original score by Point Lobo.)
Published 11/07/18
The Great Filter hypothesis says we’re alone in the universe because the process of evolution contains some filter that prevents life from spreading into the universe. Have we passed it or is it in our future? Humanity’s survival may depend on the answer. (Original score by Point Lobo.)
Published 11/07/18
Ever wondered where all the aliens are? It’s actually very weird that, as big and old as the universe is, we seem to be the only intelligent life. In this episode, Josh examines the Fermi paradox, and what it says about humanity’s place in the universe. (Original score by Point Lobo.)
Published 11/07/18
Why are smart people warning us about artificial intelligence? As machines grow smarter and able to improve themselves, we run the risk of them developing beyond our control. But AI is just one of the existential risks emerging in our future.
Published 10/24/18
We humans could have a bright future ahead of us that lasts billions of years. But we have to survive the next 200 years first.
Published 10/17/18
Published 07/09/18