Harvey Lederman: Propositional Attitudes and Reference in Language Models
Description
In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman. Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on trying in the philosophy of language, and on Wang Yangming's moral metaphysics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (02:15) Harvey's background
* (05:30) Higher-order metaphysics and propositional attitudes
* (06:25) Motivations
* (12:25) Setup: syntactic types and ontological categories
* (25:11) What makes higher-order languages meaningful and not vague?
* (25:57) Higher-order languages corresponding to the world
* (30:52) Extreme vagueness
* (35:32) Desirable features of languages and important questions in philosophy
* (36:42) Higher-order identity
* (40:32) Intuitions about mental content, language, context-sensitivity
* (50:42) Perspectivism
* (51:32) Co-referring names, identity statements
* (55:42) The paper's approach, "know" as context-sensitive
* (57:24) Propositional attitude psychology and mentalese generalizations
* (59:57) The "good standing" of theorizing about propositional attitudes
* (1:02:22) Mentalese
* (1:03:32) "Does knowledge imply belief?" — when a question does not have good standing
* (1:06:17) Sense, Reference, and Substitution
* (1:07:07) Fregeans and the principle of Substitution
* (1:12:12) Follow-up work to this paper
* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?
* (1:15:02) Bibliotechnism
* (1:19:08) Inscriptions and reference, what it takes for something to refer
* (1:22:37) Derivative and basic reference
* (1:24:47) Intuition: n-gram models and reference
* (1:28:22) Meaningfulness in sentences produced by n-gram models
* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams
* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)
* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences
* (1:45:46) RLHF, sensitivity to truth and meaningfulness
* (1:48:47) Intelligibility
* (1:50:52) When LLMs produce novel reference
* (1:53:37) Novel reference vs. find-replace
* (1:56:00) Directionality example
* (1:58:22) Human intentions and derivative reference
* (2:00:47) Between bibliotechnism and agency
* (2:05:32) Where do invented names / novel reference come from?
* (2:07:17) Further questions
* (2:10:04) Outro

Links:

* Harvey's homepage and Twitter
* Papers discussed:
  * Higher-order metaphysics and propositional attitudes
  * Perspectivism
  * Sense, Reference, and Substitution
  * Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs

Get full access to The Gradient at thegradientpub.substack.com/subscribe