AGI Safety and Alignment with Robert Miles
Description
This episode we're chatting with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI whose input is the world, when predictions of AI sound like science fiction, key terms like AI safety, the control problem, AI alignment, and the specification problem, the lack of people working in AI alignment, why AGI doesn't need to be conscious, and more.
More Episodes
This episode we're chatting with Alex Shvartsman about our AI future, human-crafted storytelling, the backlash against generative AI use, disclaimers for generated text, human vs. AI authorship, the practical or functional goals of LLMs, changing themes in science fiction, a diversity of international...
Published 05/14/24
This episode we're chatting with Eleanor and Kerry about what good technology is and whether it's even possible, how technology is political, the watering down of regulation, the magic of AI, the value of human creativity, how feminist, Aboriginal, and mixed-race studies can help AI development, and the performative nature...
Published 04/02/24