“You should consider applying to PhDs soon.” by bilalchughtai
Description
TLDR: In this post, I argue that if you are a junior AI safety researcher, you should consider applying to PhD programs in ML soon, especially if you have recently participated in an AI safety upskilling or research program like MATS or ARENA, might be interested in working on AI safety long term, but don't have immediate career plans. It is relatively cheap to apply, and doing so provides good future option value. I don't argue that you should necessarily do a PhD, but some other posts do. PhD application deadlines are coming up soon; many are December 15th, though some are earlier (as early as next week). For the uninitiated, I provide a step-by-step guide to applying at the end of this post. Applying to PhD programs might, in expectation, be worth your time. This might be true even if you are not [...]

First published: November 29th, 2024

Source: https://www.lesswrong.com/posts/PdtkXcgbRpdHWRNt6/you-should-consider-applying-to-phds-soon

Narrated by TYPE III AUDIO.