AI red teaming strategy and risk assessments: A conversation with Brenda Leong
Description
AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique that has been borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice. Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when they turn to red teaming to assess risk before deployment.