Episodes
Curious about the safety of LLMs? 🤔 Join us for an insightful new episode featuring Suchin Gururangan, Young Investigator at Allen Institute for Artificial Intelligence and Data Science Engineer at Appuri. 🚀 Don't miss out on expert insights into the world of LLMs!
Published 02/29/24
This podcast episode features Dr. Mohamed Elhoseiny, a true luminary in the realm of computer vision with over a decade of groundbreaking research. As an Assistant Professor at KAUST, Dr. Elhoseiny's work delves into the intersections of Computer Vision, Language & Vision, and Computational Creativity in Art, Fashion, and AI. Notably, he co-organized the 1st and 2nd Workshops on Closing the Loop between Vision and Language, demonstrating his commitment to advancing interdisciplinary...
Published 01/08/24
Our first guest in this new format is Kyle Lo, the most senior lead scientist on the Semantic Scholar team at Allen Institute for AI (AI2), who kindly agreed to share his perspective on #Science of #Science (#scisci) on our podcast. SciSci is concerned with studying how people do science, and includes developing methods and tools to help people consume AND produce science. Kyle has made several critical contributions to this field, which have enabled a lot of SciSci work over the past 5+ years,...
Published 12/28/23
In this special episode of NLP Highlights, we discussed building and open sourcing language models. What is the usual recipe for building large language models? What does it mean to open source them? What new research questions can we answer by open sourcing them? We particularly focused on the ongoing Open Language Model (OLMo) project at AI2, and invited Iz Beltagy and Dirk Groeneveld, the research and engineering leads of the OLMo project to chat. Blog post announcing OLMo:...
Published 06/29/23
How can we generate coherent long stories from language models? Ensuring that the generated story has long-range consistency and conforms to a high-level plan is typically challenging. In this episode, Kevin Yang describes their system, which prompts language models to first generate an outline and then iteratively generate the story while following that outline, reranking and editing the outputs for coherence. We also discussed the challenges involved in evaluating long generated...
Published 03/24/23
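A minimal sketch of the outline-then-generate loop described in that episode. The `generate` and `coherence_score` helpers below are hypothetical placeholders for a language model call and a learned reranker, not functions from the authors' system:

```python
# Sketch of outline-guided story generation with reranking.
# `generate` and `coherence_score` are illustrative placeholders.

def generate(prompt: str, n: int = 1) -> list[str]:
    # Placeholder: swap in a real language model API call.
    return [f"[continuation {i}]" for i in range(n)]

def coherence_score(story_so_far: str, candidate: str) -> float:
    # Placeholder: swap in a learned coherence reranker.
    return -abs(len(candidate) - 200)

def write_story(premise: str, n_candidates: int = 4) -> str:
    outline = generate(f"Write a numbered outline for a story about: {premise}")[0]
    story = ""
    for item in outline.splitlines():
        if not item.strip():
            continue
        prompt = (f"Premise: {premise}\nOutline item: {item}\n"
                  f"Story so far: {story}\nContinue the story:")
        candidates = generate(prompt, n=n_candidates)
        # Rerank candidate continuations against the story so far.
        story += max(candidates, key=lambda c: coherence_score(story, c))
    return story
```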
Compositional generalization refers to the capability of models to generalize to out-of-distribution instances by composing information obtained from the training data. In this episode, we chatted with Najoung Kim about how to explicitly evaluate specific kinds of compositional generalization in neural network models of language. Najoung described COGS, a dataset she built for this, some recent results in the space, and why we should be careful about interpreting the results given the current...
Published 01/20/23
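As a concrete illustration of the kind of split such an evaluation uses (invented data, not drawn from COGS itself): a word is seen in only one structural role during training, and the test set probes whether the model can compose it into a new role.

```python
# Illustrative compositional split (invented data, not COGS).
train = [
    ("Emma saw the dog.", "see(agent=Emma, theme=dog)"),
    ("The dog saw Liam.", "see(agent=dog, theme=Liam)"),
]
test = [
    # "Emma" never appears in object position in training; solving this
    # requires composing knowledge of "Emma" with knowledge of objecthood.
    ("Liam saw Emma.", "see(agent=Liam, theme=Emma)"),
]
```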
We invited Urvashi Khandelwal, a research scientist at Google Brain, to talk about nearest neighbor language and machine translation models. These models interpolate parametric (conditional) language models with non-parametric distributions over the closest values in datastores built from relevant data. Not only have these models been shown to outperform the usual parametric language models, they also have important implications for memorization and generalization in language models. Urvashi's...
Published 01/13/23
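A minimal numpy sketch of that interpolation, assuming a datastore of (context representation, next-token id) pairs; the function name, the interpolation weight, and the neighbor count are illustrative choices, not values from the paper's code:

```python
import numpy as np

def knn_lm_probs(query, keys, values, lm_probs, vocab_size, k=8, lam=0.25):
    """Interpolate a parametric LM with a k-nearest-neighbor distribution.

    query:    representation of the current context, shape (d,)
    keys:     datastore context representations, shape (n, d)
    values:   datastore next-token ids (ints), shape (n,)
    lm_probs: the parametric LM's distribution, shape (vocab_size,)
    """
    # Similarity = negative squared L2 distance to each stored context.
    sims = -((keys - query) ** 2).sum(axis=1)
    top = np.argsort(sims)[-k:]
    weights = np.exp(sims[top] - sims[top].max())
    weights /= weights.sum()
    # Neighbors that share a next token pool their weight onto it.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[top], weights)
    return lam * knn_probs + (1.0 - lam) * lm_probs
```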
In this episode, we talk with Kayo Yin, an incoming PhD student at Berkeley, and Malihe Alikhani, an assistant professor at the University of Pittsburgh, about opportunities for the NLP community to contribute to Sign Language Processing (SLP). We talked about the history of and misconceptions about sign languages, high-level similarities and differences between spoken and signed languages, distinct linguistic features of signed languages, representations, computational resources, SLP tasks, and suggestions...
Published 05/19/22
This episode is the third in our current series on PhD applications. We talk about what the PhD application process looks like after applications are submitted. We start with a general overview of the timeline, then talk about how to approach interviews and conversations with faculty, and finish by discussing the different factors to consider in deciding between programs. The guests for this episode are Rada Mihalcea (Professor at the University of Michigan), Aishwarya Kamath (PhD student...
Published 03/02/22
This episode is the second in our current series on PhD applications. How do PhD programs in Europe differ from PhD programs in the US, and how should people decide between them? In this episode, we invite Barbara Plank (Professor at ITU, IT University of Copenhagen) and Gonçalo Correia (ELLIS PhD student at University of Lisbon and University of Amsterdam) to share their perspectives on this question. We start by talking about the main differences between pursuing a PhD in Europe and the...
Published 10/19/21
This episode is the first in our current series on PhD applications. How should people prepare their applications to PhD programs in NLP? In this episode, we invite Nathan Schneider (Professor of Linguistics and Computer Science at Georgetown University) and Roma Patel (PhD student in Computer Science at Brown University) to share their perspectives on preparing application materials. We start by talking about what factors should go into the decision to apply for PhD programs and how to...
Published 10/06/21
In this episode, we discussed the Alexa Prize Socialbot Grand Challenge and this year's winning submission, Alquist 4.0, with Petr Marek, a member of the winning team. Petr gave us an overview of their submission and the design choices that led to their winning the competition, including combining a hardcoded dialog tree with a neural generator model and extracting implicit personal information about users from their responses, as well as some outstanding challenges. Petr Marek is a PhD student at the...
Published 09/27/21
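A toy sketch of that hybrid design (a handcrafted dialog tree backed by a neural generator); the intent names and the `neural_generate` stub are invented for illustration, not Alquist's actual components:

```python
# Toy hybrid dialog policy: scripted responses when a rule matches,
# a neural generator otherwise. All names here are illustrative.
DIALOG_TREE = {
    "greeting": "Hi! What would you like to talk about?",
    "goodbye": "It was great chatting with you. Bye!",
}

def neural_generate(utterance: str, context: list[str]) -> str:
    # Placeholder for a neural response generator.
    return "That's interesting, tell me more."

def respond(intent: str, utterance: str, context: list[str]) -> str:
    if intent in DIALOG_TREE:
        return DIALOG_TREE[intent]
    return neural_generate(utterance, context)
```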
What can NLP researchers learn from Human-Computer Interaction (HCI) research? We chatted with Nanna Inie and Leon Derczynski to find out. We discussed HCI's research processes, including methods of inquiry; the data annotation processes used in HCI and how they differ from those in NLP; and the cognitive methods HCI uses for qualitative error analyses. We also briefly talked about the opportunities the field of HCI presents for NLP researchers. This discussion is based on the following...
Published 08/20/21
In this episode, we talk with Lisa Beinborn, an assistant professor at Vrije Universiteit Amsterdam, about how to use human cognitive signals to improve and analyze NLP models. We start by discussing different kinds of cognitive signals (eye-tracking, EEG, MEG, and fMRI) and the challenges associated with using them. We then turn to Lisa's recent work connecting interpretability measures with eye-tracking data, which reflect the relative importance of different tokens in human reading...
Published 08/09/21
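One simple way to connect the two kinds of signals discussed in the episode is a rank correlation between per-token model importance scores and per-token reading measures; the numbers below are made up for illustration:

```python
from scipy.stats import spearmanr

# Invented per-token scores for one sentence: importance from an
# interpretability method vs. human eye-tracking fixation durations.
model_importance = [0.10, 0.45, 0.05, 0.30, 0.10]
fixation_ms = [120, 310, 90, 250, 140]

rho, p = spearmanr(model_importance, fixation_ms)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```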
In this episode, we talk to Shunyu Yao about recent insights into how transformers can represent hierarchical structure in language. Bounded-depth hierarchical structure is thought to be a key feature of natural languages, motivating Shunyu and his coauthors to show that transformers can efficiently represent bounded-depth Dyck languages, which can be thought of as a formal model of the structure of natural languages. We went on to discuss some of the intuitive ideas that emerge from the...
Published 07/02/21
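For readers unfamiliar with Dyck languages: a string belongs to a bounded-depth Dyck language if its brackets are well nested and never nest deeper than some fixed bound. A small membership checker (a plain illustration of the language class, not the paper's construction):

```python
PAIRS = {"(": ")", "[": "]"}  # two bracket types, i.e. Dyck-2

def is_bounded_dyck(s: str, max_depth: int = 3) -> bool:
    """True iff brackets in `s` are well nested with depth <= max_depth."""
    stack = []
    for ch in s:
        if ch in PAIRS:                     # opening bracket
            stack.append(PAIRS[ch])
            if len(stack) > max_depth:
                return False
        elif ch in PAIRS.values():          # closing bracket
            if not stack or stack.pop() != ch:
                return False
    return not stack

assert is_bounded_dyck("([()])")                       # depth 3, balanced
assert not is_bounded_dyck("([([()])])", max_depth=3)  # nests too deep
```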
We discussed adversarial dataset construction and dynamic benchmarking in this episode with Douwe Kiela, a research scientist at Facebook AI Research who has been working on a dynamic benchmarking platform called Dynabench. Dynamic benchmarking tries to address the problem that many recent datasets get solved quickly without much progress being made on the underlying tasks. The idea is to involve models in the data collection loop to encourage humans to provide data points that are...
Published 06/19/21
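The core loop is easy to state in code. A sketch of model-in-the-loop collection, with a hypothetical `model_predict` standing in for the current best model; this conveys the idea, not Dynabench's actual implementation:

```python
# Sketch of model-in-the-loop data collection: an annotator-written
# example joins the next benchmark round only if it fools the model.

def model_predict(text: str) -> str:
    return "positive"  # placeholder for the current best model

def collect_round(candidates: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """candidates: (text, human_label) pairs written by annotators."""
    fooling = [(text, label) for text, label in candidates
               if model_predict(text) != label]
    # After human validation, the fooling examples become the next
    # round's benchmark, and the model is retrained before round n+1.
    return fooling
```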
We invited members of Masakhane, Tosin Adewumi and Perez Ogayo, to talk about their EMNLP Findings paper that discusses why typical research is limited for low-resourced NLP and how participatory research can help. As a result of participatory research, Masakhane has many success stories: the first datasets and benchmarks in African languages, the first research on human evaluation specifically for MT for low-resource languages, etc. In this episode, we talked about one of them, MasakhaNER, in...
Published 06/08/21
We invited Lisa Li to talk about her recent work, Prefix-Tuning: Optimizing Continuous Prompts for Generation. Prefix tuning is a lightweight alternative to finetuning: the idea is to tune only a fixed-length, task-specific continuous vector while keeping the pretrained transformer parameters frozen. We discussed how prefix tuning compares with finetuning and other efficient alternatives on two tasks in various experimental settings, and in what scenarios prefix tuning is preferable. Lisa...
Published 05/24/21
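A simplified PyTorch sketch of the idea: trainable continuous vectors are prepended while every pretrained weight stays frozen. For brevity this version prepends at the embedding layer and assumes a `base_model` that consumes embeddings directly; full prefix tuning instead injects prefixes into each layer's attention keys and values.

```python
import torch
import torch.nn as nn

class PrefixTuned(nn.Module):
    """Simplified sketch: a trainable prefix at the embedding layer,
    with all pretrained parameters frozen."""

    def __init__(self, base_model: nn.Module, embed: nn.Embedding,
                 prefix_len: int = 10):
        super().__init__()
        self.base_model, self.embed = base_model, embed
        # The only trainable parameters: prefix_len continuous vectors.
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed.embedding_dim) * 0.02)
        for p in list(base_model.parameters()) + list(embed.parameters()):
            p.requires_grad = False  # keep the pretrained model frozen

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                   # (B, T, d)
        prefix = self.prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
        # Feed (B, prefix_len + T, d) to the frozen base model.
        return self.base_model(torch.cat([prefix, tok], dim=1))
```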
How can we build Visual Question Answering systems for real users? For this episode, we chatted with Danna Gurari about her work in building datasets and models towards VQA for people who are blind. We talked about the differences between existing datasets and VizWiz, a dataset built by Gurari et al., and the resulting algorithmic changes. We also discussed the unsolved challenges in this field, and the new tasks they result in. Danna Gurari is an Assistant Professor as well as...
Published 05/04/21
We invited Jayant Krishnamurthy and Hao Fang, researchers at Microsoft Semantic Machines to discuss their platform for building task-oriented dialog systems, and their recent TACL paper on the topic. The paper introduces a new formalism for task-oriented dialog to effectively handle references and revisions in complex dialog, and a large realistic dataset that uses this formalism. Leaderboard associated with the dataset:...
Published 04/14/21
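A toy illustration (invented, and much simpler than the paper's actual formalism) of why a dataflow-style representation helps with references and revisions: later turns point back at earlier computation nodes instead of re-describing them.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str
    args: list = field(default_factory=list)

# Turn 1: "When does my meeting with Ana start?"
meeting = Node("find_event", [Node("attendee", ["Ana"])])
start = Node("start_time", [meeting])

# Turn 2: "Move it to Friday." The reference node reuses the earlier
# computation, and the revision node edits one constraint of it.
it = Node("refer", [meeting])
move = Node("revise", [it, Node("date", ["Friday"])])
```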
In this episode, Robin Jia talks about how to build robust NLP systems. We discuss the different senses in which a system can be robust, reasons to care about system robustness, and the challenges involved in evaluating robustness of NLP models. We talk about how to build certifiably robust models through interval bound propagation and discrete encoding functions, as well as how to modify data collection procedures through active learning for more robust model development. Robin Jia is...
Published 04/05/21
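A minimal numpy sketch of interval bound propagation through one affine layer and a ReLU: an input box is pushed through the network, and if the worst-case outputs still favor the correct label, the prediction is certified for every input in that box.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Interval arithmetic: the box center maps exactly, and the box
    radius is scaled by the elementwise absolute weights.
    """
    center, radius = (hi + lo) / 2.0, (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

If, after propagating through all layers, the lower bound on the true class's logit exceeds the upper bounds on every other logit, the prediction is provably robust to any perturbation inside the input interval.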
We invited Nils Holzenberger, a PhD student at JHU, to talk about a dataset on statutory reasoning in tax law that Holzenberger et al. released recently. This dataset includes difficult textual entailment and question answering problems that involve reasoning about how sections of tax law apply to specific cases. They also released a Prolog solver that fully solves the problems, and showed that learned models using dense representations of text perform poorly. We discussed why this is...
Published 11/12/20
We invited Alona Fyshe to talk about the link between NLP and the human brain. We began by talking about what we currently know about the connection between representations used in NLP and representations recorded in the brain. We also discussed how different brain imaging techniques compare to each other. We then dove into experiments investigating how hidden states of LSTM language models correlate with EEG brain imaging data on three types of language inputs: well-formed grammatical...
Published 10/30/20