“My theory of change for working in AI healthtech” by Andrew_Critch
Description
This post starts out pretty gloomy but ends up with some points that I feel pretty positive about. Day to day, I'm more focused on the positive points, but awareness of the negative has been crucial to forming my priorities, so I'm going to start with those. It's mostly addressed to the EA community, but will hopefully be of some interest to LessWrong and the Alignment Forum as well.
My main concerns
I think AGI is going to be developed soon, and quickly. Possibly (20%) that's next year, and m...
More Episodes
[Warning: This post is probably only worth reading if you already have opinions on Solomonoff induction being malign, or have at least heard of the concept and want to understand it better.]
Introduction
I recently reread the classic argument from Paul Christiano about the Solomonoff prior being...
Published 11/25/24
Audio note: this article contains 33 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Many readers may instinctively know that this is wrong. If you flip a coin (50% chance) twice, you are not guaranteed to...
Published 11/20/24
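(A quick worked check of the coin-flip claim in that last description, which is cut off mid-sentence; assuming the intended point is about getting at least one heads, the arithmetic in LaTeX is

$P(\text{at least one heads in two flips}) = 1 - \left(1 - \tfrac{1}{2}\right)^2 = \tfrac{3}{4}$

i.e. 75%, not a guarantee. More generally, $n$ independent tries that each succeed with probability $p$ yield at least one success with probability $1 - (1-p)^n$.)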