Episodes
Recently we’ve seen a rise in interest and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are all getting into the space, alongside models like Microsoft’s Phi family. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.
Published 06/04/24
At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master’s degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts...
Published 05/29/24
Daniel & Chris share their first impressions of OpenAI’s newest LLM: GPT-4o and Daniel tries to bring the model into the conversation with humorously mixed results. Together, they explore the implications of Omni’s new feature set - the speed, the voice interface, and the new multimodal capabilities.
Published 05/22/24
There’s a lot of hype about AI agents right now, but developing robust agents isn’t yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue’s CTO, tells us more about their approach and some of what they have learned along the way.
Published 05/15/24
Yep, you heard that right. Autonomous fighter jets are in the news. Chris and Daniel discuss a modified F-16 known as the X-62A VISTA and autonomous vehicles/systems more generally. They also comment on the Linux Foundation’s new Open Platform for Enterprise AI.
Published 05/08/24
We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).
Published 04/30/24
First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ol’ attention layers. The result is a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21’s co-founder Yoav.
Published 04/24/24
2024 promises to be the year of multi-modal AI, and we are already seeing some amazing things. In this “fully connected” episode, Chris and Daniel explore the new Udio product/service for generating music. Then they dig into the differences between recent multi-modal efforts and more “traditional” ways of combining data modalities.
Published 04/16/24
Daniel & Chris delight in conversation with “the funniest guy in AI”, Demetrios Brinkmann. Together they explore the results of the MLOps Community’s latest survey. They also preview the upcoming AI Quality Conference.
Published 04/10/24
In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of “community” in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.
Published 04/02/24
Daniel and Chris are out this week, so we’re bringing you conversations all about AI’s complicated relationship to software developers from other Changelog pods: JS Party, Go Time & The Changelog.
Published 03/26/24
Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance in how to be implement prompt engineering, but also illustrates how we got here, and where we’re likely to go next.
Published 03/20/24
Runway is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder / CTO, Anastasis Germanidis, to discuss Runway’s rise and how it’s defining the future of the creative landscape with its text & image to video models. We hope you find Anastasis’s founder story as inspiring as Chris did.
Published 03/12/24
While everyone is super hyped about generative AI, computer vision researchers have been working in the background on significant advancements in deep learning architectures. YOLOv9 was just released with some noteworthy advancements relevant to parameter efficient models. In this episode, Chris and Daniel dig into the details and also discuss advancements in parameter efficient LLMs, such as Microsoft’s 1-Bit LLMs and Qualcomm’s new AI Hub.
Published 03/06/24
Recently, we briefly mentioned the concept of “Activation Hacking” in the episode with Karan from Nous Research. In this fully connected episode, Chris and Daniel dive into the details of this model control mechanism, also called “representation engineering”. Of course, they also take time to discuss the new Sora model from OpenAI.
Published 02/28/24
Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack’s unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC). Together, Jack, Daniel & Chris dive into the fascinating details of Jack’s recent...
Published 02/20/24
Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc. We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.
Published 02/14/24
Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
Published 02/06/24
Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
Published 01/30/24
Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the confusion around proper evaluation of LLM applications, and you have a recipe for frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
Published 01/23/24
Recently, Intel’s Liftoff program for startups and Prediction Guard hosted the first-ever “Advent of GenAI” hackathon. 2,000 people from all around the world participated in generative AI-related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.
Published 01/17/24
We scoured the internet to find all the AI-related predictions for 2024 (at least from people who might know what they are talking about), and, in this episode, we talk about some of the common themes. We also take a moment to look back at 2023, commenting with some distance on a crazy AI year.
Published 01/10/24
Prashanth Rao mentioned LanceDB as a standout among the many vector DB options in episode #234. Now, Chang She (co-founder and CEO of LanceDB) joins us to talk through the specifics of their open source, on-disk, embedded vector search offering. We talk about how their unique columnar database structure enables serverless deployments and drastic savings (without performance hits) at scale. This one is super practical, so don’t miss it!
Published 12/19/23
The new open source AI book from PremAI starts with “As a data scientist/ML engineer/developer with a 9 to 5 job, it’s difficult to keep track of all the innovations.” We couldn’t agree more, and we are so happy that this week’s guest Casper (among other contributors) has created this resource for practitioners. During the episode, we cover the key categories to think about as you try to navigate the open source AI ecosystem, and Casper gives his thoughts on fine-tuning, vector DBs & more.
Published 12/12/23
In this enlightening episode, we delve deeper than the usual buzz surrounding AI’s perils, focusing instead on the tangible problems emerging from the use of machine learning algorithms across Europe. We explore “suspicion machines” — systems that assign scores to welfare program participants, estimating their likelihood of committing fraud. Join us as Justin and Gabriel share insights from their thorough investigation, which involved gaining access to one of these models and meticulously...
Published 12/05/23