Accelerating to 2027?
Hat Tip to this week’s creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents

* Editorial
* Essays of the Week
  * Situational Awareness: The Decade Ahead
  * ChatGPT is b******t
  * AGI by 2027?
  * Ilya Sutskever, OpenAI’s former chief scientist, launches new AI company
  * The Series A Crunch Is No Joke
  * The Series A Crunch or the Seedpocalypse of 2024
  * The Surgeon General Is Wrong. Social Media Doesn’t Need Warning Labels
* Video of the Week
  * Danny Rimer on 20VC - (Must See)
* AI of the Week
  * Anthropic has a fast new AI model — and a clever new way to interact with chatbots
  * Nvidia’s Ascent to Most Valuable Company Has Echoes of Dot-Com Boom
  * The Expanding Universe of Generative Models
  * DeepMind’s new AI generates soundtracks and dialogue for videos
* News Of the Week
  * Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025
  * Is the news industry ready for another pivot to video?
  * Cerebras, an Nvidia Challenger, Files for IPO Confidentially
* Startup of the Week
  * Final Cut Camera and iPad Multicam are Truly Revolutionary
* X of the Week
  * Leopold Aschenbrenner

Editorial

I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about “Situational Awareness,” his essay on the future of AGI and its likely speed of emergence. So I had to read it, and it is this week’s essay of the week.

He starts his 165-page epic with:

> Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.
So, Leopold is not humble. He finds himself “amongst” the few people with situational awareness. As a person prone to bigging up myself, I am not one to prematurely judge somebody’s view of self. So, I read all 165 pages.

He makes one point: the growth of AI capability is accelerating. More is being done at a lower cost, and the trend points to superintelligence by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine, and they will work together, with little human input, to do so. His case is developed by linear extrapolation from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of models and their weights (the parameters that encode how they achieve their results). By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weights and reproduce the results. He focuses on the poor security surrounding models as the problem, and he deems governments unaware of the dangers.

Although German-born, he argues in favor of a US-led effort to treat AGI as a weapon to defeat China, and he warns of dire consequences if the US does not prevail. He sees the “free world” as in danger unless it stops others from gaining the sophistication he predicts in the time he predicts. At that point, I felt I was reading a manifesto for World War Three.

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism. The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind ha…
Published 06/22/24