Support work to reorient AI research towards provably beneficial systems


The Center for Human-Compatible AI (CHAI) is an AI alignment research centre based at the University of California, Berkeley.

CHAI was founded in 2016 by computer science professor Stuart Russell, along with a broader group of academics. Concerned about the current trajectory of AI development, the CHAI team aims to develop a new model for the field in which AI systems are provably beneficial and, if successful, would not pose a threat to humanity.

Their approach to AI safety research focuses on ensuring that artificial intelligences act in accordance with human goals and values. In particular, they have investigated inverse reinforcement learning, in which an AI system infers human values by observing human behaviour. CHAI has also modelled human-machine interactions in scenarios where an intelligent machine has an "off-switch" that it is capable of overriding.
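To give a flavour of the off-switch work, the sketch below is a minimal, illustrative calculation and not CHAI's own code. It assumes a hypothetical belief distribution over the utility U that the machine's proposed action would bring the human, and compares the expected value of acting immediately, switching itself off, and deferring to a human who only permits beneficial actions.

```python
import numpy as np

# Illustrative off-switch intuition: the machine is uncertain about the
# utility U its proposed action would bring the human. It can act now,
# switch itself off, or defer to the human, who (if rational) permits the
# action only when U > 0.

rng = np.random.default_rng(0)

# Hypothetical belief over U: the action is probably good, but there is a
# real chance it is harmful. The numbers are purely illustrative.
belief_samples = rng.normal(loc=0.5, scale=1.0, size=100_000)

value_act_now = belief_samples.mean()                    # E[U]
value_switch_off = 0.0                                   # do nothing
value_defer = np.maximum(belief_samples, 0).mean()       # E[max(U, 0)]

print(f"Act immediately: {value_act_now:.3f}")
print(f"Switch off:      {value_switch_off:.3f}")
print(f"Defer to human:  {value_defer:.3f}")

# Because E[max(U, 0)] >= max(E[U], 0), deferring is never worse than
# acting unilaterally while the machine remains uncertain about human
# preferences.
```

Under these assumptions, a machine that is uncertain about human preferences does at least as well by keeping its off-switch usable and deferring to the human, which is the core intuition behind this line of research.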

As well as producing an extensive body of published research, CHAI engages in significant field-building and thought leadership work. For example, they have funded and trained over 20 PhD students, making a notable contribution to the development of AI safety as a field. CHAI also works to inform public policy, hosting and attending workshops, debates and seminars to engage with key policymakers.

For more information about CHAI and their work on AI safety, visit their website.

