Episodes
Emily Mackevicius is a co-founder and director of Basis, a nonprofit applied research organization focused on understanding and building intelligence while advancing society’s ability to solve intractable problems. Emily is a member of the Simons Society of Fellows, and a postdoc in the Aronov lab and the Center for Theoretical Neuroscience at Columbia’s Zuckerman Institute. Her research uncovers how complex cognitive behaviors are generated by networks of neurons through local interactions...
Published 04/15/24
Stability AI’s Stable Diffusion model is one of the best known and most widely used text-to-image systems. The decision to open-source both the model weights and code has ensured its mass adoption, with the company claiming more than 330 million downloads. Details of the latest version - Stable Diffusion 3 - were revealed in a paper published by the company in March 2024. In this episode, Stability AI’s Kate Hodesdon joins Helen to discuss some of SD3’s new features, including improved...
Published 04/07/24
No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL-E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged the enthusiasm of the general public for AI technologies. With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes. In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and...
Published 03/07/24
Nina Schick is a leading commentator on Artificial Intelligence and its impact on business, geopolitics and humanity.  Her book ‘Deepfakes and the Infocalypse’ charts the early use of gen AI to create deepfake pornography and the technology’s subsequent use as a tool of political manipulation.  With over two decades of geopolitical experience, Nina has long been focused on macro-trends for society. She has advised global leaders, including Joe Biden, the President of the United States, and...
Published 02/09/24
Charlie Blake from Graphcore’s research team discusses their AI Papers of the Month for January 2024.  Graphcore research has been collating and sharing a review of the most consequential AI papers internally, every month, for a number of years.  Now – for the first time – the research team is making this valuable resource public, to help the wider AI community keep up-to-date with the most exciting breakthroughs.  Papers of the Month for January 2024 (with some work from December 2023)...
Published 02/02/24
Data is the fuel that is powering the AI revolution - but what do we do when there's just not enough data to satisfy the insatiable appetite of new model training? In this episode, Florian Hönicke, Principal AI Engineer at Jina AI, discusses the use of LLMs to generate synthetic data to help solve the data bottleneck. He also addresses the potential risks associated with an over-reliance on synthetic data.  German startup Jina AI is one of the many exciting companies coming out of Europe,...
Published 01/29/24
NeurIPS is the world’s largest AI conference, where leading AI practitioners come together to share the latest research and debate the way forward for artificial intelligence.  In this special episode, Helen examines some of the big themes of NeurIPS 2023 and talks to a range of attendees about their work, the big issues of the day, and what they’ve seen at NeurIPS that caught their attention.  It’s fair to say that LLMs loomed large over this year’s conference, but there’s plenty more to...
Published 12/22/23
Miranda Mowbray is one of Britain’s leading thinkers on the ethics of Artificial Intelligence.  After a long and distinguished career as a research scientist with HP, she is now an Honorary Lecturer in Computer Science at the University of Bristol where she specialises in ethics for AI, and data science for cybersecurity.  In our wide-ranging conversation, Miranda breaks down the definition of AI ethics into its many constituent parts – including safety, transparency, non-discrimination and...
Published 12/05/23
Danijela Horak explains how the BBC is making use of AI and its plans for the future, including detecting deepfakes as well as using deepfake technology as part of its production process. Danijela and Helen discuss the Corporation's use of open source models and its view on closed source technologies such as the GPT family of models from OpenAI. We find out how the BBC uses AI for recommendation, while taking a cautious approach to user data, and Helen and Danijela reflect on why there needs...
Published 12/04/23
First episode coming soon... Join Graphcore's Helen Byrne and leading figures in the world of Artificial Intelligence as they discuss advances in AI research, commercial deployment, ethics, and the occasional weird and wonderful application.
Published 12/04/23