Emily M. Bender, Professor at UW — Language Models and Linguistics
Description
In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.

Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender

Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.

Timestamps:
0:00 Sneak peek, intro
1:03 Stochastic Parrots
9:57 The societal impact of big language models
16:49 How language models can be harmful
26:00 The important difference between linguistic form and meaning
34:40 The octopus thought experiment
42:11 Language acquisition and the future of language models
49:47 Why benchmarks are limited
54:38 Ways of complementing benchmarks
1:01:20 The #BenderRule
1:03:50 Language diversity and linguistics
1:12:49 Outro
More Episodes
Pieter Abbeel is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in its mission to create conscious computers, mindful machines, and rational...
Published 10/07/21
In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation. Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparent that even internal chats are public, how Wikimedia uses machine learning...
Published 09/23/21
Jeff Hammerbacher talks about building Facebook's early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.
Published 08/26/21