How to detect errors in AI-generated code
Description
Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.
More Episodes
The home team is joined by Kinnaird McQuaid, founder and CTO of NightVision, which offers developer-friendly API and web app security testing. Kinnaird talks about his path from school-age hacker to white-hat security expert, why it’s important to build security practices into the software...
Published 12/10/24
Ben and Ryan talk all things mobile app development with Kenny Johnston, Chief Product Officer at Instabug. They explore what’s unique about mobile observability, how AI tools can reduce developer toil, and why user experience matters so much for app quality.
Published 12/06/24