Director of Strategic Partnerships
AI Safety · OpenMined

As the Director of Strategic Partnerships, you will take the lead in establishing and overseeing OpenMined’s new fundraising function. This role is pivotal in driving our mission to advance AI safety and Privacy-Enhancing Technologies (PETs).

Senior Mathematician, AI Benchmarking
AI Safety · Epoch AI

We are seeking an experienced mathematician to lead the expansion of our state-of-the-art mathematical reasoning AI benchmarking efforts. The Senior Mathematician will lead the creation of ~1,000 mathematical problems for evaluating the mathematical reasoning capabilities of AI systems, with difficulty ranging from high school competition problems to graduate-level mathematics. This person will play an important role in advancing our ability to assess the capabilities of AI systems.

Associate Editor
AI Safety · Tech Policy Press

Tech Policy Press is looking for an Associate Editor to help expand its coverage of key policy areas, including artificial intelligence. The Associate Editor will have the opportunity to develop key aspects of the editorial and product strategy at Tech Policy Press.

(Senior) Research Scholar, Special Projects Track
AI Safety · Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars (Special Projects Track) to join its team to launch or manage new programs or initiatives in the field of AI law & policy. Both positions are one-year, visiting roles. For those interested in a one-year program focused on research, please see our (Senior) Research Scholar (General Track) job description.

(Senior) Research Scholar, General Track
AI Safety · Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars to join its team to conduct legal research and engage with policymakers. Both positions are one-year, visiting roles.

Senior Research Fellow
AI Safety · Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for (Senior) Research Fellows to join its team to conduct legal research at the intersection of law and artificial intelligence.

Senior Associate, AI Governance
AI Safety · The Future Society

The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms, including laws and regulations, voluntary frameworks, standards, and industry practices.

Writers
AI Safety · Heron Program for AI Security

The Heron Program for AI Security is seeking Writers to author feature pieces for its newsletter on a freelance basis.

The AI security newsletter will cover the growing intersection of information security and AI, with a focus on the technical challenges that relate to societal and catastrophic risks.

Talent Identification Advisor
AI Safety · Impact Academy

Help us connect with world-class STEM talent in your region and communities for our upcoming program!

Impact Academy is a startup non-profit that runs cutting-edge fellowships to enable global talent to use their careers to mitigate global catastrophic risks and contribute to a better future. Over the coming years, we will focus all our efforts on advancing the safe and beneficial development of AI through our programs.

Evaluators, Systematic Reviews
AI Safety · Elicit

We’re working to make Elicit more helpful for writing systematic reviews in life sciences and social science. To do that, we need really high-quality evaluations of Elicit’s output. So, we’re hiring PhDs to evaluate reviews written with Elicit’s help.

Organising Director
AI Safety · PauseAI

Although many volunteers contribute to PauseAI (some even full-time), PauseAI currently has no paid staff. You will be the first hire and play a crucial role in how the organization grows and evolves. You will work closely with the founder, Joep Meindertsma. Be aware that PauseAI may grow very quickly in the near future, both in terms of members and funding.

GTM (Go to Market) Lead
AI Safety · Elicit

As the GTM lead, you’ll run full sales cycles end-to-end, with the opportunity to lead a team. It’s a great fit for people who want to own the entire customer relationship, work with engineers and product managers, and wear multiple hats. Because Elicit has broad domain appeal, you’ll also be challenged to ramp up on many different research areas, from biomedicine to policy to industrial manufacturing, though we do have an initial vertical focus.

Question Writer, Math Benchmark (Contractor)
AI Safety · Epoch AI

We are looking for up to three question authors to contribute original and difficult mathematics problems to a novel mathematical problem-solving benchmark for AI systems. 

The primary day-to-day activity of this contractor position is writing original math questions. It’s crucial for our purposes that the questions and their answers do not appear on the Internet. Question authors will also be expected to review the questions proposed by other question authors to ensure that the desired criteria of originality, difficulty, and correctness are satisfied. Experimenting with existing AI systems to get a sense of their capabilities is also encouraged.

Technical Staff, Forecasting Tools
AI Safety · Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Member of Technical Staff, AI Digest
AI Safety · Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Project Lead, Windfall Trust
AI Safety · Future of Life Foundation

The Future of Life Foundation (FLF) is seeking a driven individual to lead efforts to bring the Windfall Trust (WT) project to fruition. You’ll be responsible for hiring a founding team to support you, and will be provided with operations support, research support, and significant funding runway.

Expression of Interest
AI Safety · Epoch AI

We have an interdisciplinary team, and we are constantly in search of talent that can add to our team's skill set. Please submit this expression of interest if you would like to work with us. We won’t review applications on a regular basis, but we will reach out if there’s an open position we think you might be a good fit for.
