Inside OpenAI's trust and safety operation - with Rosie Campbell
Description
No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL-E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged public enthusiasm for AI technologies. With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes. In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguards in place to prevent abuse. Rosie also talks about the forward-looking work of the policy research team, anticipating longer-term risks that might emerge with more advanced AI systems. Helen and Rosie discuss the challenges associated with agentic systems (AI that can interface with the wider world via APIs and other technologies), red-teaming new models, and whether advanced AIs should have ‘rights’ in the same way that humans or animals do.

You can read the paper referenced in this episode, ‘Practices for Governing Agentic AI Systems’, co-written by Rosie and her colleagues: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf

Watch the video of the interview here: https://www.youtube.com/watch?v=81LNrlEqgcM