Episodes
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.
Timestamps:
00:00 AI Safety Summit in the UK
12:18 Are officials up to date on AI?
23:22 Objections to AI policy
31:27 The EU AI Act
43:37 The right level of regulation
57:11 Risks and regulatory tools
1:04:44 Open-source AI
1:14:56...
Published 11/17/23
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai
Timestamps:
00:00 X.ai - Elon Musk's new AI venture
02:41 How AI risk thinking has evolved
12:58 AI bioengineering
19:16 AI agents
24:55 Preventing...
Published 11/03/23
Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca
Timestamps:
00:00 Is AGI close?
06:56 Compute versus data
09:59 Information theory
20:36 Universality of learning
24:53 Hard steps in evolution
30:30 Governments and advanced AI
40:33 How will AI transform the economy?
55:26 How will AI change transaction costs?
1:00:31 Isolated thinking about AI...
Published 10/20/23
Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the eighth and final episode of Imagine a World, we explore the...
Published 10/17/23
Let's imagine a future where AGI is developed but deliberately kept from practically impacting the world, while narrow AI remakes the world completely. Most people don't know or care about the difference, and have no idea how they could distinguish between a human and an artificial stranger. Inequality sticks around, and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and...
Published 10/10/23
Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf
Timestamps:
00:00 Provably safe AI systems
12:17 Alignment and evaluations
21:08 Proofs about language model behavior
27:11 Can we formalize safety?
30:29 Provable contracts
43:13 Digital replicas of actual systems
46:32 Proof-carrying code
56:25 Can language models think logically?
1:00:44 Can AI...
Published 10/05/23
What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last...
Published 10/03/23
If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought...
Published 09/26/23
Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate
Timestamps:
00:00 Johannes's journey as an environmentalist
13:21 The drivers of climate change
23:00 Oil, coal, and gas
38:05 Solar, wind, and hydro
49:34 Nuclear energy
57:03 Geothermal energy
1:00:41 Most promising technologies
1:05:40 Government subsidies
1:13:28...
Published 09/21/23
How do low-income countries affected by climate change imagine their futures? How might they overcome the twin challenges of poverty and climate change? Will all nations eventually choose, or be forced, to go digital?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the fourth...
Published 09/19/23
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation-states, and do we want this?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the third episode of Imagine a World, we explore the fictional...
Published 09/12/23
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.
Timestamps:
00:00 The current pace of AI
03:58 Near-term risks from AI
09:34 Historical analogies to AI
13:58 AI benchmarks vs. economic impact
18:30 AI takeoff speed and bottlenecks
31:09 Tom's model of AI takeoff speed
36:21 How AI could automate AI research
41:49 Bottlenecks to AI automating AI hardware
46:15 How much of AI research is...
Published 09/08/23
How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the...
Published 09/05/23
Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, for them to get their act together?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We...
Published 09/05/23
Coming Soon…
The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year.
Our new podcast series digs deeper into the eight winning entries, their ideas and solutions, the diverse teams behind them and the challenges they faced. You might love...
Published 08/29/23
Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper, "International Governance of Civilian AI: A Jurisdictional Certification Approach". You can read more about Robert's work at https://www.governance.ai
Timestamps:
00:00 The goals of AI governance
08:38...
Published 08/20/23
Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org
Timestamps:
00:00 Eras of human progress
06:47 Flywheels of progress
17:56 Main causes of progress
21:01 Progress and risk
32:49 Safety as part of progress
45:20 Slowing down specific technologies?
52:29 Four lenses on AI risk
58:48 Analogies causing disagreement...
Published 07/21/23
On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.
Timestamps:
0:00 Nathan introduces Jaan
4:22 AI safety and Future of Life Institute
5:55 Jaan's first meeting with Eliezer Yudkowsky
12:04 Future of AI evolution
14:58 Jaan's investments in AI companies
23:06 The emerging danger paradigm
26:53 Economic transformation with AI
32:31 AI supervising itself
34:06 Language...
Published 07/06/23
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com.
Timestamps:
00:00 Predictable updating on AI risk
07:27 Abstract models versus gut feelings
22:06 How Joe began believing in AI risk
29:06 Is AI risk falsifiable?
35:39 Types of skepticism about AI risk
44:51 Are we fundamentally confused?
53:35 Becoming...
Published 06/22/23
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai
Timestamps:
00:00 Corporate AI race
06:28 Evolutionary dynamics in AI
25:26 Why evolution applies to AI
50:58 Deceptive AI
1:06:04 Competition erodes safety
1:17:40 Evolutionary fitness: humans versus AI
1:26:32 Different paradigms of AI risk
1:42:57 Interpreting AI systems
1:58:03 Honest AI and uncertain...
Published 06/08/23
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?...
Published 05/26/23
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 Economic transformation from AI
11:15 Productivity increases from technology
17:44 AI effects on employment
28:43 Life without jobs
38:42 Losing contact with reality
42:31 Catastrophic risks from AI
53:52...
Published 05/11/23
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 The cognitive revolution
07:47 Red teaming GPT-4
24:00 Coming to believe in transformative AI
30:14 Is AI depth or breadth most impressive?
42:52 Potential near-term dangers from AI
Published 05/04/23
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures
Timestamps:
00:00 How does venture capital work?
09:01 Failure and success for startups
13:22 Is overconfidence necessary?
19:20 Repeat entrepreneurs
24:38 Long-term investing
30:36 Feedback loops from investments
35:05 Timing investments
38:35...
Published 04/27/23
Connor Leahy joins the podcast to discuss the state of AI. Which labs are ahead? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Landscape of AI research labs
10:13 Is AGI a useful term?
13:31 AI predictions
17:56 Reinforcement learning from human feedback
29:53 Mechanistic interpretability
33:37 Yudkowsky and Christiano
41:39 Cognitive Emulations
43:11 Public...
Published 04/20/23