Google AI Overviews
Description
This week we talk about search engines, SEO, and Habsburg AI. We also discuss AI summaries, the web economy, and alignment.

Recommended Book: Pandora’s Box by Peter Biskind

Transcript

There's a concept in the world of artificial intelligence, alignment, which refers to the goals underpinning the development and expression of AI systems. This is generally considered to be an important realm of inquiry because, if AI consciousness were ever to emerge—if an artificial intelligence that's truly intelligent in the sense that humans are intelligent were to be developed—it would be vital that said intelligence be on the same general wavelength as humans, in terms of moral outlook and the practical application of its efforts.

Said another way, as AI grows in capacity and capability, we want to make sure it values human life and has a sense of ethics that roughly aligns with that of humanity and global human civilization—the rules of the road that human beings adhere to being embedded deep in its programming, essentially. And we'd want to make sure that as it continues to grow, these baseline concerns remain, rather than being weeded out in favor of motivations and beliefs that we don't understand, and which may or may not align with our versions of the same, even to the point that human lives become unimportant, or even seem antithetical to this AI's future ambitions.

This is important even at the level we're at today, where artificial general intelligence, AI that's roughly equivalent to human intelligence in terms of thinking and doing and parsing, hasn't yet been developed, at least not in public. But it becomes even more vital if and when artificial superintelligence of some kind emerges, whether that means AI systems that actually think like we do but are much smarter and more capable than the average human, or versions of what we've already got that are just a lot more capable in some narrowly defined way: futuristic ChatGPTs that aren't conscious, but which, because of their immense potency, could still nudge things in negative directions if their unthinking motivations, the systems guiding their actions, are not aligned with our desires and values.

Of course, humanity is not a monolithic bloc, and alignment is thus a tricky task—because whose beliefs do we bake into these things? Even if we figure out a way to entrench those values and ethics permanently in these systems, which version of values and ethics do we use? The democratic, capitalistic West's? The authoritarian, Chinese- and Russian-style clampdown approach, which limits speech and uses heavy censorship to centralize power and maintain stability? Maybe a more ambitious version that does away with the downsides of both, cobbling together the best of everything we've tried in favor of something truly new? And regardless of direction, who decides all this? Who chooses which values to install, and how?
The Alignment Problem refers to an issue identified by computer scientist and AI expert Norbert Wiener in 1960, when he wrote about how tricky it can be to figure out the motivations of a system that, by definition, does things we don't quite understand. A truly useful advanced AI would be advanced enough that not only would its computation put human computation, using our brains, to shame, but even the logic it uses to arrive at its solutions, the things it sees, how it sees the world in general, and how it reaches its conclusions would be something like a black box: although we can see and understand the inputs and outputs, what happens inside might be forever unintelligible to us, unless we process it through other machines, other AIs maybe, that attempt to bridge that gap and explain things to us.

The idea here, then, is that while we may invest a lot of time and energy in trying to align these systems with our values, it will be devilishly difficult...
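To make that black-box worry a little more concrete, here is a minimal toy sketch in Python (not from the episode; every function and value in it is invented for illustration) of one common framing of the problem, proxy optimization: an optimizer that only ever sees a stand-in metric will keep climbing that metric even as the objective we actually care about gets worse.

# Toy illustration of misalignment via proxy optimization.
# All objectives here are hypothetical stand-ins chosen for the demo.

import random

def true_objective(x: float) -> float:
    # What we actually want: x close to 1.0.
    return -(x - 1.0) ** 2

def proxy_objective(x: float) -> float:
    # What we measure and optimize: agrees with the true objective
    # while x is below 1.0, but keeps rewarding ever-larger x past it.
    return x

def hill_climb(steps: int = 1000, step_size: float = 0.05) -> float:
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        # The optimizer only ever consults the proxy, never the true goal.
        if proxy_objective(candidate) > proxy_objective(x):
            x = candidate
    return x

if __name__ == "__main__":
    x = hill_climb()
    print(f"proxy score: {proxy_objective(x):.2f}")  # keeps climbing
    print(f"true score:  {true_objective(x):.2f}")   # increasingly negative

Running it, the proxy score rises steadily while the true score collapses; nothing inside the loop was ever "motivated" by the thing we actually wanted, which is the gap alignment work tries to close.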
More Episodes
This week we talk about neural networks, AGI, and scaling laws. We also discuss training data, user acquisition, and energy consumption. Recommended Book: Through the Grapevine by Taylor N. Carlson Transcript Depending on whose numbers you use, and which industries and types of investment those...
Published 11/19/24
This week we talk about the Double Reduction Policy, gaokao, and Chegg. We also discuss GPTs, cheating, and disruption. Recommended Book: Autocracy, Inc by Anne Applebaum Transcript In July of 2021, the Chinese government implemented a new education rule called the Double Reduction Policy. This...
Published 11/12/24