Venkatesh Rao: Protocols, Intelligence, and Scaling
Description
“There is this move from generality in a relative sense of ‘we are not as specialized as insects’ to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word ‘general’ in completely unhinged ways.”

In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao. Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007 and, more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on the AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (01:38) Origins of Ribbonfarm and Venkat’s academic background
* (04:23) Voice and recurring themes in Venkat’s work
* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability
* (21:00) More on abstractions vs. tractability in Venkat’s work
* (29:07) Scaling of industrial value systems, characterizing AI as a discipline
* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines
* (55:05) Psychometric terms
* (1:09:07) Inductive biases (yes, I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias and not the actual theorem 🤡)
* (1:18:13) LLM training and efficiency, comparing LLMs to humans
* (1:23:35) Experiential age, analogies for knowledge transfer
* (1:30:50) More clarification on the analogy
* (1:37:20) Massed Muddler Intelligence and protocols
* (1:38:40) Introducing protocols and the Summer of Protocols
* (1:49:15) Evolution of protocols, hardness
* (1:54:20) LLMs, protocols, time, future visions, and progress
* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge
* (2:14:23) Directions for ML people in protocols research
* (2:18:05) Outro

Links:

* Venkat’s Twitter and homepage
* Mediocre Computing
* Summer of Protocols and 2024 Call for Applications (apply!)
* Essays discussed:
  * Patch models and their applications to multivehicle command and control
  * From Mediocre Computing:
    * Text is All You Need
    * Magic, Mundanity, and Deep Protocolization
    * A Camera, Not an Engine
    * Massed Muddler Intelligence
  * On protocols:
    * The Unreasonable Sufficiency of Protocols
    * Protocols Don’t Build Pyramids
    * Protocols in (Emergency) Time
    * Atoms, Institutions, Blockchains

Get full access to The Gradient at thegradientpub.substack.com/subscribe