Consider reading How to pursue a career in technical AI alignment. It covers more topics in more detail, and I endorse most, if not all, of its advice. To quote Andrew Critch: "I get a lot of emails from folks with strong math backgrounds (mostly, PhD students in math at top schools) who are […]"
About Me
Hi, I'm Rohin! I work as a Research Scientist on the technical AGI safety team at DeepMind. I completed my PhD at the Center for Human-Compatible AI at UC Berkeley, where I worked on building AI systems that can learn to assist a human user even when they don't initially know what the user wants.
I'm particularly interested in big picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? I write up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.
I am involved with Effective Altruism (EA), and out of concern for animal welfare, I am almost vegan. In my free time, I enjoy puzzles, board games, and karaoke. You can email me at rohinmshah@gmail.com, though if you want to ask me about careers in AI alignment, you should read my FAQ first.