Responsible AI and Scaling: From Large Language Models to Dual Scaling Laws and ROI Insights
Description
In episode 41, the discussion kicks off with an introduction to large language models and their potential pitfalls, such as hallucinations, using DataGemma as a case study. The conversation then contrasts Retrieval-Interleaved Generation with Retrieval-Augmented Generation, highlighting their distinct approaches to grounding model output. Insights from Professor Ethan Mollick shed light on dual scaling laws in AI and what they mean for scaling AI technologies. The episode also features a segment on PwC's 2024 US Responsible AI Survey, followed by an in-depth exploration of Responsible AI, focusing on its risks, objectives, and strategies. The episode wraps up by evaluating the ROI of Responsible AI initiatives.