Episode 49: AGI Alignment and Safety
Description
Is Elon Musk right that Artificial General Intelligence (AGI) research is like 'summoning the demon' and should be regulated? In episodes 48 and 49, we discussed how our genes 'align' our interests with their own, using carrots and sticks (pleasure/pain) and by shaping attention and perception. If our genes can create an alignment and safety 'program' for a General Intelligence (i.e., a Universal Explainer) like us, what's to stop us from doing the same to the future Artificial General Intelligences (AGIs) that we create? But even if we can, should we? "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon." --Elon Musk --- Support this podcast: https://anchor.fm/four-strands/support
More Episodes
We take a deep dive into Karl Popper’s philosophical ideas about music, which he outlines in four chapters of his intellectual autobiography Unended Quest: “Music,” “Speculations about the Rise of Polyphonic Music,” “Two Kinds of Music,” and “Progressivism in Art, Especially in...
Published 11/12/24
Here we interview AI researcher Kenneth Stanley, who makes the case that in complex systems, pursuing specific objectives can actually be counterproductive. Instead, whether in machine learning, business, science, education, or art, we should pursue what is interesting. It is in this search for...
Published 10/29/24