#191 (Part 1) – Carl Shulman on the economy and national security after AGI
Description
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts, a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?

Many people entertain that hypothetical, but perhaps nobody has followed through and considered all the implications as thoroughly as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

Links to learn more, highlights, and full transcript.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and sparking a rush to build billions of them and cash in.

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.
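The "20 watts, a fraction of a cent per hour" figure is easy to sanity-check. A minimal sketch of the arithmetic, assuming a retail electricity price of about $0.12 per kWh (a typical US household figure; the price is our assumption, not from the episode):

```python
# Rough check of the "20 watts costs a fraction of a cent per hour" claim.
BRAIN_POWER_WATTS = 20
PRICE_PER_KWH = 0.12  # assumed retail electricity price in USD per kWh

kwh_per_hour = BRAIN_POWER_WATTS / 1000        # 20 W for 1 hour = 0.02 kWh
cost_per_hour = kwh_per_hour * PRICE_PER_KWH   # cost in USD

print(f"Cost to run a 20 W 'brain' for an hour: ${cost_per_hour:.4f}")
# About $0.0024, i.e. roughly a quarter of a cent per hour.
```

Even at several times this electricity price, the hourly cost stays well under one cent, which is what makes the "1 cent of electricity" comparisons later in the description plausible.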
And with growth rates this high, it doesn't take long to run up against Earth's physical limits. In this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth radiates into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.

This creates pressure to move economic activity off-planet. You could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?

In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that's realistic or just a cool story, asking:

- If we're heading towards the above, how come economic growth is slow now and not really increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn't this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don't agree?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Chapters:

- Cold open (00:00:00)
- Rob's intro (00:01:00)
- Tran