Episodes
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on better understanding the internal workings of AI models (better known as “interpretability”), making them robust to various kinds of adversarial attacks, and ca...
Published 06/19/24
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, what AI will look like, and how it will all go for humanity.
We talk to Katja about:
* How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically reduced their timelines to when AI will successfully tackle all human tasks & occupations.
* The survey's methodology and why...
Published 06/19/24
We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there — he has 145,000 subscribers and his top video has ~600,000 views. He goes much deeper than many educational resources out there on alignment, going into important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence.
Through his work, Robert has educated thousands on AI safety, including many now...
Published 03/08/24
We speak with Thomas Larsen, Director for Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. A great way to quickly learn broadly about the field of technical AI alignment.
In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to...
Published 12/14/23
We speak with Ryan Kidd, Co-Director at ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS".
MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment.
Prior to MATS, Ryan completed a PhD in Physics at the University of Queensland (UQ) in Australia.
We talk about:
* What the MATS program is
* Who should apply...
Published 11/08/23
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness, interpretability, preference learning, & more.
We talk to Adam about:
* The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI)...
Published 11/06/23
We speak with Jamie Bernardi, co-founder & AI Safety Lead at not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelors (Physical Natural Sciences) and Masters (Physics) at the University of Cambridge and worked as an ML Engineer before co-founding BlueDot Impact.
The free courses they offer are created in collaboration with people on the cutting edge of AI...
Published 10/12/23
In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI.
Prof Richard Dazeley is the Deputy Head of School at the School of Information Technology at Deakin University in Melbourne, Australia. He’s also a senior member of the...
Published 08/03/23
In this episode, we have back on the show Hunter Jay, CEO of Ripe Robotics and our co-host on Ep 1. We synthesise everything we've heard on AGI timelines from experts in Eps 1-5, take in more data points, and use this to give our own forecasts for AGI, ASI (i.e. superintelligence), and "intelligence explosion" (i.e. singularity). Importantly, we have different takes on when AGI will likely arrive, leading to exciting debates on AGI bottlenecks, hardware requirements, the need for sequential...
Published 07/20/23
In this episode, we have back on our show Alex Browne, ML Engineer, who we heard on Ep 2. He got in contact after recent developments in the 4 months since Ep 2 accelerated his timelines for AGI. Hear why, along with his latest prediction.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Alex Browne --
* Bio: Alex is a software engineer & tech founder with...
Published 05/22/23
In this episode, we speak with forecasting researcher & data scientist at Amazon AWS, Ryan Kupyn, about his timelines for the arrival of AGI.
Ryan was recently ranked the #1 forecaster in Astral Codex Ten's 2022 Prediction Contest, beating out 500+ other forecasters and proving himself a world-class forecaster. He has also done work in ML.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter:...
Published 03/31/23
In this episode, we speak with Rain.AI CTO Jack Kendall about his timelines for the arrival of AGI. He also speaks to how we might get there and some of the implications.
Hosted by Soroush Pour.
Show links
-- Jack Kendall --
* Bio: Jack invented a new method for connecting artificial silicon neurons using coaxial nanowires at the U. Florida before starting Rain as co-founder and CTO.
* LinkedIn: https://www.linkedin.com/in/jack-kendall-21072887/
* Website: https://rain.ai
-- Further resources --
* Try out...
Published 02/01/23
In this episode, we speak with ML Engineer Alex Browne about his timelines for AGI. He also speaks to how we might get there and some of the implications.
Hosted by Soroush Pour.
Show links
Follow Alex Browne:
* GitHub: https://github.com/albrow
* Blog: https://medium.com/@albrow
Further resources:
* ChatGPT: https://openai.com/blog/chatgpt/
* Stable Diffusion: https://stability.ai/blog/stablediffusion2-1-release7-dec-2022
Published 01/10/23
In this first episode, we speak with AGI alignment researcher Logan Riggs Smith about his forecasted timelines for the potential arrival of AGI. He also speaks to how we might get there and some of the implications.
Hosted by Hunter Jay and Soroush Pour
Show links
Further writings from Logan Riggs Smith
* Cotra report on AGI timelines:
  * Original report (very long)
  * Scott Alexander's analysis of this report
Published 11/26/22
What can you expect to hear and learn on "The Artificial General Intelligence (AGI) Show with Soroush Pour"?
Hosted by Soroush Pour
Published 11/12/22