Director of Global Challenges Project

We're looking for a Director of Global Challenges Project to own and run the program, delivering impactful workshops that introduce promising students to careers in AI safety and biosecurity. You'll also lead our OASIS workshops for top AI safety university organizers (hosted at the Constellation office), support other Kairos programming, and potentially lead new programs.

AI Red Teamer (Contract)
AI Safety Trajectory Labs

Trajectory Labs is a stealth AI security startup working with leading AI companies. We're hiring contractors to red-team a frontier AI lab's product that has not yet been released to the public.

Field Strategist
AI Safety Atlas Computing

Atlas Computing is looking for a cohort of Field Strategists to join us in scoping and building new organizations to secure society against the most severe risks it may face as AI becomes more capable.

LLM Evaluation Engineer
AI Safety ThirdLaw

You’ll build the evaluation layer in the ThirdLaw platform—the part of the system that decides whether an LLM prompt, response, tool call, or agent behavior is acceptable.

Operations Analyst
AI Safety The Future Society

The Future Society (TFS) is seeking an Operations Analyst to support operations, project management, and communications for our team of 19 staff working on aligning AI through better governance. This role is ideal for a brilliant, mission-driven generalist who thrives on organization, planning, problem-solving, and collaboration.

(Senior) Research Fellow
AI Safety Institute for Law & AI

The Institute for Law & AI (LawAI) is seeking (Senior) Research Fellows to conduct legal research at the intersection of law and artificial intelligence.

Societal Defense Researcher
AI Safety CARMA

CARMA's Public Security Policy (PSP) Program is seeking a Policy Researcher to advance our work on AI governance, risk mitigation, and emergency response frameworks. This role will focus on disaster preparedness planning for AI-related crises across multiple jurisdictional levels (national, state/provincial, multinational, and alliance-based), vulnerability assessments of critical societal systems, and development of robust governance approaches.

Associate or Principal DOE
AI Safety Lionheart Ventures

We are looking to hire either an Associate or a Principal for our AI fund; the level will be determined by the experience of our preferred candidate. You will be a critical member of the investment team, working alongside our partners. It's a high-impact role with significant autonomy and responsibility across all aspects of the investment process.

Video & Multimedia Producer
AI Safety The Midas Project

You'll transform our investigative findings into compelling visual stories that help people understand our work. As our first video producer, you'll build the infrastructure and strategy for our multimedia presence.

Program Manager
AI Safety The Midas Project

You'll be a versatile team member doing whatever needs doing at a watchdog organization in growth mode. If you're comfortable switching between different roles to help build something from the ground up, this is the right fit.

Communications Lead
AI Safety The Midas Project

You'll be our primary voice translating investigations and watchdog work into narratives that resonate with journalists, policymakers, and the public. You'll manage most external communications — from report launches to social media — ensuring our evidence-based work cuts through the noise.

Research Specialist
AI Safety The Midas Project

You'll conduct the investigations that form the backbone of our watchdog work—uncovering conflicts of interest, documenting corporate governance failures, tracking policy changes, and building comprehensive public records of AI company behavior. You'll combine deep research skills with strategic thinking to ensure our work is rigorous, defensible, and impactful.

Operations Manager
AI Safety Windfall Trust

As our Operations Manager, you will oversee the internal systems, processes, and structures that enable the Windfall Trust to deliver on its mission. You will play a central role in setting up and managing the organization’s core functions—ranging from legal structures and compliance to finance, HR, and day-to-day operations.

Lab Operations Coordinator
AI Safety Apart Research

We're seeking a Lab Operations Coordinator to ensure Apart Lab runs smoothly and is able to continue to scale efficiently. You'll manage the operational details that enable our research teams to focus on their work, from onboarding new participants and tracking compliance to coordinating funding and conference logistics.

Research Project Manager
AI Safety Apart Research

We’re seeking a Research Project Manager to guide globally distributed AI safety research teams through our Studio and Fellowship programs targeting peer-reviewed publication. You’ll be the primary point-of-contact for these teams, providing direction, feedback, and accountability while ensuring projects stay on track.

Founding Generalist
AI Safety Kairos

Join us as a Founding Generalist and take real ownership over core programs that shape the AI safety talent pipeline. You'll take on a wide range of responsibilities that combine strategy, relationship-building, and execution. This isn't a typical operations role: you'll be a key builder on our team, taking on high-stakes work with significant autonomy.

Full-Stack Engineer
AI Safety Beneficial AI Foundation

This position involves working with Max Tegmark and colleagues at the Beneficial AI Foundation to support turbocharging formal verification with AI tools, as described here and in Towards Guaranteed Safe AI (a high-level introduction is given in the second half of this TED talk). The core idea is to deploy not untrusted neural networks but AI-written, formally verified code that implements machine-learning algorithms and knowledge. The position can be remote from anywhere or based near MIT in Cambridge, Massachusetts.
