Episodes
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss in the episode: https://www.thecompendium.ai Timestamps: 00:00 The Compendium 15:25 The motivations of AGI corps 31:17 AI is grown, not written 52:59 A...
Published 11/22/24
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4 Timestamps: 00:00 Writing Doom 08:23 Humor in Writing Doom 13:31 Concise writing 18:37 Getting feedback 27:02...
Published 11/08/24
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like. Here's the document we discuss in the episode: https://www.narrowpath.co Timestamps: 00:00 A Narrow Path 06:10 Can we...
Published 10/25/24
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030 Timestamps: 00:00 How important is scaling? 08:03 How capable will AIs be in 2030? 18:33 AI agents, reasoning, and planning 23:39 Automating coding and...
Published 10/11/24
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt Timestamps: 00:00 AI control 09:35 Challenges to AI control 23:48 AI control as a bridge to alignment 26:54 Policy and coordination for AI safety 29:25 Slowing down around human-level AI 49:14 Scheming and misalignment 01:27:27 AI timelines...
Published 09/27/24
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world. Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence Timestamps: 00:00 Spending on safety vs capabilities 09:06 Racing dynamics - is the classic story true? 28:15 How are governments preparing for advanced AI?...
Published 09/12/24
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more. Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai Timestamps: 00:00 Is AI plateauing or accelerating? 06:55 How do we get AI agents? 16:12 Do agency...
Published 08/22/24
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home Timestamps: 00:00 Innovation prizes at XPRIZE 08:25 Deciding which prizes to create 19:00 Creating new markets 29:51 How far can prizes scale? 35:25 When are prizes...
Published 08/09/24
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org Timestamps: 00:00 Mary's journey to presidency 05:11 Long-view leadership 06:55 Prioritizing global problems 08:38 Risks from artificial intelligence 11:55 Climate change 15:18 Barriers...
Published 07/25/24
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/ Timestamps: 00:00 Power concentration 07:43 RFP: Mitigating AI-driven power concentration 14:15 Open source AI 26:50 Institutions and incentives 35:20 Techno-optimism 43:44 Global monoculture 53:55...
Published 07/11/24
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com Timestamps: 00:00 Automation and wages 14:32 Complexity for people and machines 20:31 Moravec's paradox 26:15 Can people switch careers? 30:57 Intelligence explosion economics 44:08 The lump of labor...
Published 06/21/24
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com Timestamps: 00:00 US-China competition and risk 18:01 The security dilemma 30:21 Official and unofficial diplomacy 39:53 Hotlines between countries 01:01:54...
Published 06/07/24
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org Timestamps: 00:00 The National Organization for Women (NOW) 05:37 Deepfakes and women 10:12 Protecting ordinary victims of deepfakes 16:06 Deepfake legislation 23:38...
Published 05/24/24
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com Timestamps: 00:00 Value differences in AI 12:07 Should we eventually create AGI? 28:22 What is a worthy successor? 43:19 AI changing power dynamics 59:00 Open source AI 01:05:07 What drives AI progress? 01:16:36 What...
Published 05/03/24
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 Alignment to which values? 22:12 Is AI just another technology? 31:20 What is FOOM? 38:59 Risks from centralized power 49:18 Can AI defend us...
Published 04/19/24
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first critical seconds 29:42 Decisions under time pressure 34:27 Lessons from insiders 44:18...
Published 04/05/24
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/. Timestamps: 0:20 AI Impacts surveys 18:11 What...
Published 03/14/24
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a pause 54:42 Social dynamics of AI risk 1:10:00 What prevents cooperation? 1:18:21 What...
Published 02/29/24
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org Timestamps: 00:00 Encode Justice 06:11 AI ethics and AI safety 15:49 Humans in the loop 23:59 AI in social media 30:42 Deteriorating social skills? 36:00 AIs...
Published 02/16/24
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Is AI like a Shoggoth? 09:50 Scaling laws 16:41 Are humans more general than AIs? 21:54 Are AI models explainable? 27:49 Using AI to explain...
Published 02/02/24
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Can AI development be paused? 20:12 Biden's executive order on AI 22:54 How would a GPU kill switch work? 27:00 Regulating models or...
Published 01/19/24
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 How much does ideology matter? 22:14 Do nuclear weapons cause stable peace? 31:29 North Korea 34:01 Have we overestimated nuclear risk?...
Published 01/06/24
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/ Timestamps: 00:00 Autonomy in weapon systems 12:19 Balance of offense and defense 20:05 Killer drone systems 28:53 Is autonomy like nuclear weapons? 37:20 Low-tech defenses against...
Published 12/14/23
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment. Timestamps: 00:00 Uncontrollable superintelligence 16:41 AI goals and the "virus analogy" 28:36 Speed of AI cognition 39:25 Narrow AI and autonomy 52:23 Reliability of current and future AI 1:02:33 Planning for multiple AI scenarios 1:18:57 Will AIs seek self-preservation? 1:27:57 Is there a unified...
Published 12/01/23