Communications Lead
AI Safety | The Midas Project

You'll be our primary voice translating investigations and watchdog work into narratives that resonate with journalists, policymakers, and the public. You'll manage most external communications — from report launches to social media — ensuring our evidence-based work cuts through the noise.

Research Specialist
AI Safety | The Midas Project

You'll conduct the investigations that form the backbone of our watchdog work—uncovering conflicts of interest, documenting corporate governance failures, tracking policy changes, and building comprehensive public records of AI company behavior. You'll combine deep research skills with strategic thinking to ensure our work is rigorous, defensible, and impactful.

Literature Review Contractor
Biosecurity | Centre for Long-Term Resilience

Do you have a passion for making the world safer from biological risks, and experience with technological and scientific literature reviews? The Centre for Long-Term Resilience (CLTR) is seeking a contractor to support a research project, conducted by its Biosecurity Policy Unit, to advance the UK’s microbial forensics capabilities.

Operations Manager
AI Safety | Windfall Trust

As our Operations Manager, you will oversee the internal systems, processes, and structures that enable the Windfall Trust to deliver on its mission. You will play a central role in setting up and managing the organization’s core functions—ranging from legal structures and compliance to finance, HR, and day-to-day operations.

Lab Operations Coordinator
AI Safety | Apart Research

We're seeking a Lab Operations Coordinator to ensure Apart Lab runs smoothly and is able to continue to scale efficiently. You'll manage the operational details that enable our research teams to focus on their work, from onboarding new participants and tracking compliance to coordinating funding and conference logistics.

Research Project Manager
AI Safety | Apart Research

We’re seeking a Research Project Manager to guide globally distributed AI safety research teams through our Studio and Fellowship programs targeting peer-reviewed publication. You’ll be the primary point of contact for these teams, providing direction, feedback, and accountability while ensuring projects stay on track.

Program Associate
Global Health & Development | GiveWell

Our research team is seeking Program Associates to provide critical support in maximizing the impact of a portfolio of life-saving and poverty-alleviating programs. This is primarily a project management role, supporting GiveWell researchers during their research and grant investigations.

Software Engineer, AI Safety and Biosecurity
GCRs & x-Risk | SecureBio

SecureBio seeks a Software Engineer to help scale and systematize the way we run evaluations of frontier AI systems, with a focus on biosecurity and misuse risk. This work has already made an impact: our biosecurity evaluations have been publicly cited in model cards from OpenAI, Anthropic, and Google, and featured in xAI's Risk Management Framework.

Research Analyst
Global Health & Development | GiveWell

As a Research Analyst on GiveWell's Commons team, you will support our broader research team in identifying cost-effective giving opportunities. Your work will contribute to GiveWell’s decisions about how hundreds of millions of dollars will be spent to save and improve the lives of people living in the lowest-income communities in the world. You will also play a key role in fulfilling our commitment to transparency and ensuring that the work we produce is accurate and high quality.

Founding Generalist
AI Safety | Kairos

Join us as a Founding Generalist and take real ownership over core programs that shape the AI safety talent pipeline. You'll take on a wide range of responsibilities that combine strategy, relationship-building, and execution. This isn't a typical operations role—you'll be a key builder on our team, taking on high-stakes work with significant autonomy.

Research Operations Lead
Founders Pledge

We are seeking a Research Operations Lead to optimize the operational infrastructure of our 16-person Founders Pledge Research Team and to play a crucial project management role. You'll streamline research processes, enhance team productivity, and ensure our research capabilities scale effectively as we continue to grow our impact in philanthropic research and cause prioritization.

Full-Stack Engineer
AI Safety | Beneficial AI Foundation

This position involves working with Max Tegmark and colleagues at the Beneficial AI Foundation to support the turbocharging of formal verification with AI tools, as described here and in Towards Guaranteed Safe AI (a high-level introduction is given in the second half of this TED talk). The core idea is to deploy not untrusted neural networks but AI-written, verified code implementing machine-learning algorithms and knowledge. The position can be fully remote or based near MIT in Cambridge, Massachusetts.

Senior Programs & Strategy Manager
AI Safety | FAR.AI

FAR.AI is seeking a Senior Programs and Strategy Manager to flesh out a high-level vision and drive the content strategy for FAR.AI’s flagship events. The role involves working closely with FAR.AI leadership to decide which subfields deserve new programming, curating agendas to highlight the most impactful work, and connecting researchers with collaborators, funders, and decision-makers.

Infrastructure Engineer
AI Safety, GCRs & x-Risk | FAR.AI

FAR.AI is seeking an Infrastructure Engineer to manage our GPU cluster, which supports diverse, impactful research workloads, from exploratory prototypes to multi-node training of frontier open-weight models. You will work in our tight-knit Foundations team, which develops flexible and scalable research infrastructure, and you will consult across the entire FAR.AI technical team, supporting bespoke and complex workloads.

Chief Operating Officer (COO)
AI Safety | SaferAI

SaferAI is seeking a Chief Operating Officer (COO) to serve as a key executive partner to the Executive Director. The main responsibility of the COO will be to ensure that SaferAI remains an excellent, high-performing organization. This broad leadership role encompasses fundraising, management, hiring, strategy, and organization-wide processes.

Machine Learning Infrastructure Engineer
AI Safety | Gray Swan

We’re seeking an ML Infra Engineer to build robust, scalable, and high-performance infrastructure for distributed inference and training. You’ll take specialized language models from our ML research team and transform them into fast and reliable services that scale from proof-of-concept to enterprise deployment.
