37 - Jaime Sevilla on AI Forecasting
Description
Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when these trends might come to an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, and whether continuing to scale can get us to AGI.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
The transcript: https://axrp.net/episode/2024/10/04/episode-37-jaime-sevilla-forecasting-ai.html

Topics we discuss, and timestamps:
0:00:38 - The pace of AI progress
0:07:49 - How Epoch AI tracks AI compute
0:11:44 - Why does AI compute grow so smoothly?
0:21:46 - When will we run out of computers?
0:38:56 - Algorithmic improvement
0:44:21 - Algorithmic improvement and scaling laws
0:56:56 - Training data
1:04:56 - Can scaling produce AGI?
1:16:55 - When will AGI arrive?
1:21:20 - Epoch AI
1:27:06 - Open questions in AI forecasting
1:35:21 - Epoch AI and x-risk
1:41:34 - Following Epoch AI's research

Links for Jaime and Epoch AI:
Epoch AI: https://epochai.org/
Machine Learning Trends dashboard: https://epochai.org/trends
Epoch AI on X / Twitter: https://x.com/EpochAIResearch
Jaime on X / Twitter: https://x.com/Jsevillamol

Research we discuss:
Training Compute of Frontier AI Models Grows by 4-5x per Year: https://epochai.org/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year
Optimally Allocating Compute Between Inference and Training: https://epochai.org/blog/optimally-allocating-compute-between-inference-and-training
Algorithmic Progress in Language Models [blog post]: https://epochai.org/blog/algorithmic-progress-in-language-models
Algorithmic progress in language models [paper]: https://arxiv.org/abs/2403.05812
Training Compute-Optimal Large Language Models [aka the Chinchilla scaling law paper]: https://arxiv.org/abs/2203.15556
Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data [blog post]: https://epochai.org/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data
Will we run out of data? Limits of LLM scaling based on human-generated data [paper]: https://arxiv.org/abs/2211.04325
The Direct Approach: https://epochai.org/blog/the-direct-approach

Episode art by Hamish Doodles: hamishdoodles.com