S5, E205 - Exploring the Privacy & Cybersecurity Risks of Large Language Models
Description
Prepare to have your mind expanded as we navigate the complex labyrinth of large language models and the cybersecurity threats they harbor. We dissect a recent paper that exposes how these AI titans are susceptible to a range of sophisticated attacks, from prompt hacking to adversarial attacks and the less discussed but equally alarming issue of gradient leakage. As the conversation unfolds, we unravel the unnerving potential for these systems to inadvertently spill the beans on confidential training data, a privacy risk that goes beyond academic speculation and poses tangible security threats.
Resources: https://arxiv.org/pdf/2402.00888.pdf
More Episodes
Discover the intricate dance between technology and ethics as Jake Ottenwaelder, principal privacy engineer at Integrated Privacy LLC, takes us into the heart of fractional privacy engineering. Join us for a captivating journey where Jake, pivoting from cybersecurity to...
Published 05/09/24
Rumor Has It, in Privacy... Banning TikTok won't solve social media's issues with foreign influence, teen harm, and data privacy. Despite the proposed ban, the underlying problems remain unaddressed. We need comprehensive solutions to tackle these challenges head-on...
Published 05/03/24