Description
In this interview we explore the new and sometimes strange world of redteaming AI. I have SO many questions, like what is AI safety?
We'll discuss her presence at Black Hat, where she delivered two days of training and participated on an AI safety panel.
We'll also discuss the process of pentesting an AI. Will pentesters just have giant cheatsheets or text files full of adversarial prompts? How can we automate this? Will an AI generate adversarial prompts you can use against another AI? And finally, what do we do with the results?
Resources:
PyRIT AI redteaming tool Microsoft's AI redteaming guide Show Notes: https://securityweekly.com/esw-371
This week in the enterprise security news,
Upwind Security gets a massive $100M Series B Trustwave and Cybereason merge NVIDIA wants to force SOC analyst millennials to socialize with AI agents Has the cybersecurity workforce peaked? Why incident response is essential for resilience an example...
Published 11/16/24
Naturally, the next approach to try is a federated one. How do we break down cybersecurity into more bite-sized components? How do we alleviate all this CISO stress we've heard about, and make their job seem less impossible than it does today?
This will be a more standards and GRC focused...
Published 11/15/24