Senior Research Fellow
AI Safety · Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for (Senior) Research Fellows to join its team and conduct legal research at the intersection of law and artificial intelligence.

Senior Associate, AI Governance
AI Safety · The Future Society

The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms, including laws and regulations, voluntary frameworks, standards, and industry practices.

Chief Operating Officer
Science Policy & Infrastructure · Foresight Institute

The Foresight Institute is at the forefront of advancing safe and beneficial science and technology for the long-term flourishing of life. We are looking for a dynamic and experienced Chief Operating Officer to join our growing team and lead our critical operational functions. This role is essential for ensuring that our programs and initiatives are executed efficiently, and it offers a unique opportunity to shape the future of our organization.

Evaluators, Systematic Reviews
AI Safety · Elicit

We’re working to make Elicit more helpful for writing systematic reviews in the life sciences and social sciences. To do that, we need high-quality evaluations of Elicit’s output, so we’re hiring PhDs to evaluate reviews written with Elicit’s help.

GTM (Go to Market) Lead
AI Safety · Elicit

As the GTM lead, you’ll run full sales cycles end-to-end, with the opportunity to lead a team. It’s great for people who want to own the entire customer relationship, work with engineers and product managers, and wear multiple hats. Because Elicit has broad domain appeal, you’ll also be challenged to ramp up on many different research areas, from biomedicine to policy to industrial manufacturing, though we do have an initial vertical focus.

Technical Staff, Forecasting Tools
AI Safety · Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Member of Technical Staff, AI Digest
AI Safety · Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Expression of Interest
Center for Reducing Suffering

Are you excited to join our mission and work with us on figuring out how to best reduce suffering? We would love to hear from you! Please register your interest here.

Expression of Interest
Building Effective Altruism · BlueDot Impact

We are accepting expressions of interest for certain positions that we haven’t yet opened hiring rounds for. We’ve made the first step as easy as possible, and we recommend submitting an expression of interest even if you’re unsure whether you’re a good fit. If we think you could be a particularly great fit for our team, we will reach out to you.

Machine Learning Engineer
AI Safety · Elicit

As an ML research engineer at Elicit, you will:

  • Compose together tens to thousands of calls to language models to accomplish tasks that can’t be done with a single call

  • Curate datasets for finetuning models, e.g. for training models to extract policy conclusions from papers

  • Set up evaluation metrics that tell us which changes to our models or training setup are improvements

  • Scale up semantic search from a few thousand documents to 100k+ documents

Interpretability Researcher
AI Safety · EleutherAI

EleutherAI is seeking talented and motivated individuals to join our Interpretability team to perform cutting-edge research with large language and vision models. We aim to better understand the features learned by today’s deep neural networks, so we can better steer their behavior and inform the public and policymakers about their risks and benefits.
