
Infrastructure Engineer
FAR.AI is seeking an Infrastructure Engineer to manage our GPU cluster, which supports diverse, impactful research workloads, from exploratory prototypes to multi-node training of frontier open-weight models. You will work in our tight-knit Foundations team, which develops flexible and scalable research infrastructure, and consult across the entire FAR.AI technical team to support bespoke and complex workloads.

Chief Operating Officer (COO)
SaferAI is seeking a Chief Operating Officer (COO) to serve as a key executive partner to the Executive Director. The main responsibility of the COO will be to ensure that SaferAI remains an excellent, high-performing organization. This broad leadership role encompasses fundraising, management, hiring, strategy, and organization-wide processes.

Senior Recruitment Specialist – AI Safety focus
Impact Ops is looking for an experienced Senior Recruitment Specialist with leadership potential to join our team. The starting salary range is £55,000 to £75,000, depending on prior experience and location. There may be flexibility in salary for exceptional candidates with significant leadership experience.

Winter Research Fellowship (EU Law)
The Institute for Law & AI’s Winter Research Fellowship (EU Law) is a paid program that offers law students, professionals, and academics the opportunity to work at the leading edge of AI, law, and policy, with a particular focus on EU law and the EU AI Act.

Winter Research Fellowship (US Law & Policy)
The Institute for Law & AI’s (LawAI) Winter Research Fellowship (US) is a paid program that offers law students, professionals, and academics the opportunity to work at the leading edge of AI law and policy, with a particular focus on United States Law and Policy.

Winter Research Fellowship (Legal Frontiers)
LawAI’s Winter Research Fellowship (Legal Frontiers) is a paid program that offers law students, professionals, and academics the opportunity to work at the leading edge of AI, law, and policy.

Software Engineer
This position involves working with Bryan Parno, Max Tegmark, and colleagues at the Beneficial AI Foundation on turbocharging formal verification of Rust code with AI tools, as described here and in Towards Guaranteed Safe AI (a high-level introduction is given in the second half of this TED talk).

Machine Learning Infrastructure Engineer
We’re seeking an ML Infra Engineer to build robust, scalable, and high-performance infrastructure for distributed inference and training. You’ll take specialized language models from our ML research team and transform them into fast and reliable services that scale from proof-of-concept to enterprise deployment.

Research Scientist
We are seeking expressions of interest from both experienced and first-time Research Scientists to develop and execute on a safety research agenda and/or accelerate our existing projects.

Research Engineer
We are seeking expressions of interest from research engineers looking to work on AI safety projects and red-teaming.

Research Intern
Contribute to cutting-edge research in scalable, distributed machine learning systems alongside experienced researchers and engineers. Explore new ways of building and verifying neural networks that operate across huge, decentralised topologies of heterogeneous devices.

Communications Director and Staff Director
CARMA is seeking an exceptional candidate to serve as both Communications Director and Staff Director, playing a pivotal role in advancing our mission. This position combines strategic leadership and hands-on execution of communications with high-level organizational support, ensuring CARMA operates effectively while communicating its critical work with clarity and impact.

Senior Economist, Effects of Transformative AI
CARMA seeks an innovative economist to join our Geostrategic Dynamics team, exploring how advanced AI systems will reshape global economic paradigms, incentive structures, and power dynamics. You'll develop models to analyze critical risks including labor obsolescence, economic instability, concentration of power, shocks to the financial system, and degraded living standards as AI capabilities accelerate.

AI Safety Operations Contractor
We're seeking an exceptional AI Safety Operations Contractor to serve as the operational backbone of our organization. This role combines executive assistant duties with strategic operations work, supporting our leadership team and research initiatives in the critical field of AI safety.

Research Engineer - Novel AI Platforms for Multiscale Alignment
The Alignment of Dynamical Cognitive Systems program seeks a Research Engineer to develop novel AI platforms that address critical alignment challenges and support practical LLM agents.

Senior Technical Specialist, AI Risk Assessment
We are seeking a Senior Technical Specialist to join our Comprehensive Risk Assessment team. This role combines original research in AI safety and technical governance with strong emphasis on conceptual depth and quality assurance leadership.

AI Offense-Defense Dynamics Lead Researcher
In this role, you'll lead research to decode offense-defense dynamics, examining how specific attributes of AI technologies influence their propensity to either enhance societal safety or amplify risks.

Senior Economist, AI Geostrategic and Economic Mechanisms
CARMA seeks an innovative economist to join our Geostrategic Dynamics team, exploring how advanced AI systems will reshape global economic paradigms, incentive structures, and power dynamics. You'll develop models to analyze critical risks including labor obsolescence, economic instability, concentration of power, shocks to the financial system, and degraded living standards as AI capabilities accelerate.

Public Security Policy Researcher
CARMA's Public Security Policy (PSP) Program is seeking a Policy Researcher to advance our work on AI governance, risk mitigation, and emergency response frameworks. This role will focus on disaster preparedness planning for AI-related crises across multiple jurisdictional levels (national, state/provincial, multinational, and alliance-based), vulnerability assessments of critical societal systems, and development of robust governance approaches.

Digital Media Accelerator
The FLI Digital Media Accelerator program aims to help creators produce content, grow their channels, and reach new audiences. We're looking to support creators who can explain complex AI issues - such as AGI implications, control problems, misaligned goals, or Big Tech power concentration - in ways that their audiences can understand and relate to.