AI Safety Resources
Welcome to our AI Safety Resources page. Here you will find resources to deepen your understanding of AI safety, including newsletters to keep you up to date, links to external reading, and information about our Slack workspace.
Newsletters
Here are some newsletters we recommend for staying informed about AI safety:
- The Center for AI Safety Newsletter: A short, high-quality weekly newsletter covering key events in AI, with a particular focus on AI safety. Produced by the Center for AI Safety (CAIS), led by Dan Hendrycks. Subscribe here.
External Resources
Here are some good starting points for further reading about AI risk:
- The first place you should visit: A wonderful map of the landscape of AI safety resources.
- Introduction to AI safety by Rob Miles: Rob Miles is a popular YouTuber who covers AI safety content. This is his introductory talk on AI safety.
- A collection of introductory resources to AI safety: A curated collection of useful starting points for learning about the field.
- Ajeya Cotra on AI being misaligned by default: Puts forward an argument for why powerful AI systems will be misaligned by default.
- Possible funding opportunities from the Long-Term Future Fund: There are many possible sources of funding for AI safety research; this is one of the best.
Join Our Slack Workspace
We have a vibrant community discussing AI Safety on our Slack workspace. If you’re interested in joining, please fill out this interest form, and we’ll get back to you as soon as possible.
Books
- The Alignment Problem: Machine Learning and Human Values, by Brian Christian
- Human Compatible: Artificial Intelligence and the Problem of Control, by Stuart Russell
- Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark
- Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom
- Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, by Kate Crawford