Managing Director
AI Safety · Effective Institutions Project

The Effective Institutions Project (EIP) is hiring a Managing Director who will serve as a critical member of EIP’s executive team as the organization scales from a small startup to a leading strategic partner for philanthropists seeking to address major global challenges. We’re looking for an entrepreneurial, collaborative, and conscientious leader to help our organization manifest its full potential.

Podcast Host
AI Safety · 80,000 Hours

Help shape the most important conversation of our time. We’re looking for someone exceptional to join 80,000 Hours as a Podcast Host, helping us build a media platform that properly covers recursively self-improving AGI and its implications.

Podcast Chief of Staff
AI Safety · 80,000 Hours

Help shape the most important conversation of our time. We’re looking for someone exceptional to join 80,000 Hours as Chief of Staff for our podcast team, helping us build a media platform that properly covers recursively self-improving AGI and its implications.

(Senior) Program Officer
AI Safety · Effective Institutions Project

The Effective Institutions Project (EIP) is looking to hire at least two (Senior) Program Officers to support our research and grantmaking as the organization scales from a small startup to a leading partner to philanthropists seeking to address major global challenges. The ideal candidate is a strong analytical thinker and excellent communicator who is passionate about navigating complex organizational systems in search of meaningful opportunities for societal benefit.

Senior Researcher, Data Insights
AI Safety · Epoch AI

Epoch AI is seeking a Senior Researcher to inform us and our stakeholders about key trends in machine learning. The role will involve investigating these trends in AI development, rigorously analyzing them, and presenting them as succinct data insights on our website.

Machine Learning Engineer
AI Safety · Gray Swan

We’re looking for top-tier machine learning engineers to help develop cutting-edge AI security solutions. If you have a strong background in building, deploying, and scaling machine learning models and a passion for AI security, we’d love to hear from you.

Chief Technology Officer
AI Safety · Epoch AI

Epoch AI is seeking a Chief Technology Officer (CTO) to lead our engineering team. As CTO, you will help shape our strategic direction and ensure the alignment of our tech stack—including the Epoch AI website, databases, data collection systems, and AI evaluation infrastructure—with the organization's broader mission and priorities.

Fellow
AI Safety · SAIF

SAIF is seeking applications for a 6- or 12-month fellowship developing and executing projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and will have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.

Expression of Interest
AI Safety · Harmony Intelligence

If none of the open roles is a fit for you, but you think you'd be a great future member of the Harmony team, we'd still love to hear from you. In the future, we're likely to have openings spanning Engineering, Research, Sales, Marketing, Operations, People, and more.

Content Writer, Freelance
AI Safety · CivAI

We’re looking for a freelance writer to help us produce educational articles about AI for our varied audiences. Audiences may include any groups who stand to be impacted by AI, from election officials to trade unions to research scientists. Articles will often include interactive demonstrations, built by the CivAI engineering team, that convey some aspect of AI capabilities or risks. We’re aiming for the articles to be short and memorable, presenting unique viewpoints on AI that help people make sense of its far-reaching impacts.

Cyber Researcher
AI Safety · Pattern Labs

As a cyber researcher at Pattern Labs, you will be at the forefront of research on AI models’ cyber capabilities, such as their ability to discover vulnerabilities, develop exploits, and carry out network attacks. This role sits at a unique intersection of cybersecurity expertise and AI capabilities, where your background in vulnerability and cyber research will help shape the future of AI security.

Software Engineer
AI Safety · Harmony Intelligence

As a Software Engineer at Harmony, you’ll have a unique opportunity to make a big impact! Your work will be multi-faceted, spanning software engineering, product ideation, project planning, applying cutting-edge AI research, building customer relationships, and more.

Software Engineer
AI Safety · Gray Swan

We are seeking a versatile Software Engineer to join our team. This role involves building and refining user interfaces, backend APIs, and database systems, with a preference for candidates who can bridge the gap between frontend and backend development.

Formal Verification and AI Research Director
AI Safety · Atlas Computing

Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

Formal Verification and AI Research Lead
AI Safety · Atlas Computing

Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

AI Safety Course Operations Specialist
AI Safety · BlueDot Impact

We’re seeking an operations specialist to run our flagship AI Safety programs from January 15 through May 7, 2025. This is primarily an operations and coordination role – while knowledge of AI Safety concepts is helpful, expertise isn’t required.
