Episodes
Published 05/07/24
Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research. Dr. Molly Crockett is an associate professor of psychology at Princeton University. Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book, In the Land of the Unreal: Virtual and Other Realities in ...
Published 05/07/24
Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life. Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that, she served as co-lead of the Ethical AI research team at Google, which fired her in December 2020 for raising issues of discrimination in the workplace. Timnit also...
Published 04/19/24
Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications. References: Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call? Fresh AI Hell: Alliance for the Future VentureBeat: Google researchers unveil ‘VLOGGER’, an AI that can...
Published 04/03/24
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. Why these writings are essentially calls to fabricate data. References: PNAS: ChatGPT outperforms crowd workers for text-annotation tasks Beware the Hype: ChatGPT Didn't Replace Human Data Annotators ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say Political Analysis: Out...
Published 03/13/24
Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry. Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it,...
Published 02/29/24
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety. References: Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education MLive: Your Classmate Could Be an AI Student at...
Published 02/15/24
Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier. And why bad methodology may still trick companies into trying to replace human workers with mathy-math. Visit us on PeerTube for the video of this conversation. References: OpenAI: GPTs are GPTs Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth FYI: Over the...
Published 02/01/24
New year, same B******t Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions. Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme...
Published 01/17/24
AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS. References: Pentagon moving toward letting AI weapons autonomously kill humans NYC Mayor uses AI to make robocalls in languages he doesn’t speak University of Michigan investing in OpenAI Tesla: claims of “full self-driving” are free speech LLMs may not "understand" output 'Maths-ticated' data LLMs can’t analyze an SEC filing How GPT-4 can be used to create fake...
Published 01/10/24
Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy. Justin Hendrix is editor of the Tech Policy Press. References: TPP tracker for the US Senate 'AI Insight Forum' hearings Balancing Knowledge and Governance:...
Published 01/03/24
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency. This episode was recorded on November 20, 2023. Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she’s...
Published 11/30/23
Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all. This episode was recorded on November 6, 2023. Watch the video version on PeerTube. References: "A Proposal...
Published 11/21/23
Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Why even catastrophic estimates from well-meaning researchers may not tell the full story. This episode was recorded on November 6, 2023. References: "The Carbon Footprint of Machine Learning Training Will...
Published 11/08/23
Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts. References: Noema Magazine: "Artificial General Intelligence Is Already Here."  "AI and the Everything in the Whole Wide World Benchmark"  "Targeting the Benchmark: On Methodology and Current Natural Language Processing Research" "Recoding Gender: Women's Changing Participation in...
Published 10/31/23
Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom. Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on...
Published 10/04/23
Alex and Emily are taking another stab at Google and other companies' aspirations to be part of the healthcare system - this time with the expertise of Roxana Daneshjou, incoming Stanford assistant professor of dermatology and biomedical data science. A look at the gap between medical licensing examination questions and real life, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process. References: Google blog post describing Med-PaLM Nature:...
Published 09/28/23
Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped kinds of harms. Plus a full portion of Fresh Hell...and a little bit of good news. References: White House press release on voluntary commitments Emily’s blog post critiquing the “voluntary commitments” An “AI safety” infused take on regulation AI Causes Real Harm. Let’s Focus on That over...
Published 09/20/23
Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide. Dr. Lucy Suchman is a professor emerita of sociology at...
Published 09/13/23
Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot--and why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable. Content note: This is a conversation that touches on mental health, people in crisis, and exploitation. This episode was originally recorded on June 8, 2023. Watch the...
Published 09/07/23
Take a deep breath and join Alex and Emily in AI Hell itself, as they take down a month's worth of hype in a mere 60 minutes. This episode aired on Friday, May 5, 2023. Watch the video of this episode on PeerTube. References: Terrifying NEJM article on GPT-4 in medicine “Healthcare professionals preferred ChatGPT 79% of the time” Good thoughts from various experts in response ChatGPT supposedly reading dental x-rays Chatbots “need” therapists CEO proposes AI therapist, removes proposal...
Published 08/29/23
After a hype-y few weeks of AI happenings, Alex and Emily shovel the BS on GPT-4’s “system card,” its alleged “sparks of Artificial General Intelligence,” and a criti-hype heavy "AI pause" letter. Hint: for a good time, check the citations. This episode originally aired on Friday, April 7, 2023. You can also watch the video of this episode on PeerTube. References: GPT-4 system card: https://cdn.openai.com/papers/gpt-4-system-card.pdf “Sparks of AGI” hype:...
Published 08/24/23
Alex and Emily are taking AI to court! Amid big claims about LLMs, a look at the facts about ChatGPT, legal expertise, and what the bar exam actually tells you about someone's ability to practice law--with help from Harvard legal and technology scholar Kendra Albert. This episode was first recorded on March 3, 2023. Watch the video of this episode on PeerTube. References: Social Science Research Network paper “written” by ChatGPT Joe Wanzala, “ChatGPT is ideal for eDiscovery” Legal...
Published 08/16/23
Should the mathy-maths be telling doctors what might be wrong with you? And can they actually help train medical professionals to treat human patients? Alex and Emily discuss the not-so-real medical and healthcare applications of ChatGPT and other large language models. Plus another round of fresh AI hell, featuring "charisma as a service," and other assorted reasons to tear your hair out. This episode was first recorded on February 17, 2023. Watch the video of this episode on...
Published 08/08/23
New year, new hype? As the world gets swept up in the fervor over ChatGPT of late 2022, Emily and Alex give a deep sigh and begin to unpack the wave of fresh enthusiasm over large language models and the "chat" format specifically. Plus, more fresh AI hell. This episode was recorded on January 20, 2023. Watch the video of this episode on PeerTube. References: Situating Search (Shah & Bender 2022)  Related op-ed:...
Published 08/04/23