AI Safety


Why might AI safety be an important problem?

Projections suggest that there could be substantial progress in AI in the next few decades, potentially to the point where machines outperform humans in many, if not all, tasks.

This could have enormous benefits, helping to solve currently intractable global problems. However, it could also pose severe risks. Some of the risks from advanced AI could even be existential — meaning that they could cause human extinction, or an equally permanent and severe disempowerment of humanity.

Because of this potential, many in the effective altruism community see positively shaping the development of AI as one of the most important ways of creating a better future. Initial research also suggests that this could be a highly cost-effective way of doing good, given that it scores well on scale, neglectedness, and tractability:

Scale


Like the industrial and agricultural revolutions, developments in AI will likely have a global impact, for better or for worse. Existential risks are particularly concerning, because they could affect the entire future to come.¹

Neglectedness


Far more is spent on advancing AI capabilities than on ensuring AI systems are safe, and it is estimated that only around 400 people worldwide are working directly on reducing AI-related existential risks.²

Solvability


Making progress on preventing an AI-related catastrophe seems difficult, but there are many avenues for further research, and the field is still young. Assessments vary widely, but the area seems moderately tractable overall.³

For full references and further information, we highly recommend reading 80,000 Hours’ in-depth report on AI safety.

How cost effective is AI safety work?

Estimates of the cost-effectiveness of AI safety work range from $1.06 to $1,200 USD per life saved in expectation. By comparison, GiveWell estimates that top global health and development charities can save a life for roughly $4,500 USD. On these figures, donating to AI safety work could be anywhere from about 4 to about 4,200 times as effective as donating to GiveWell’s top charities.
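The multipliers in this comparison are just ratios of the quoted cost-per-life figures. A minimal sketch of the arithmetic, using only the dollar amounts stated above (no new data):

```python
# Cost-effectiveness multipliers implied by the figures quoted in the text.
givewell_cost_per_life = 4500.0  # ~USD per life saved, GiveWell top charities
ai_safety_low = 1.06             # USD per life saved in expectation (optimistic end)
ai_safety_high = 1200.0          # USD per life saved in expectation (pessimistic end)

# Least favourable case: AI safety at its most expensive vs GiveWell.
multiplier_low = givewell_cost_per_life / ai_safety_high

# Most favourable case: AI safety at its cheapest vs GiveWell.
multiplier_high = givewell_cost_per_life / ai_safety_low

print(f"{multiplier_low:.2f}x to {multiplier_high:.0f}x")  # prints "3.75x to 4245x"
```

Note how wide the implied range is: the two ends differ by three orders of magnitude, which reflects the deep uncertainty in expected-value estimates for this kind of work.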

Learn more with a free book

We’re giving away free copies of Brian Christian’s The Alignment Problem, to help people learn more about the potential risks posed by artificial intelligence. Request your free copy below.

Want to get into a career in AI safety?

Request a free one-on-one call with our team. We can help you formulate a plan, find resources appropriate to your career stage and level of expertise, and put you in touch with mentors.

Support safer AI development

  • Managed philanthropic fund which makes grants to people and organisations safeguarding the long-term future, including those working on AI safety

    Learn more

  • GovAI

    Building a global research community dedicated to helping humanity navigate the transition to a world with advanced AI

    Learn more

  • Research organisation aiming to shift the development of AI towards provably safe systems that act in accordance with human interests even as they become more powerful.

    Learn more