Senior Economist, AI Geostrategic and Economic Mechanisms
AI Safety CARMA

CARMA seeks an innovative economist to join our Geostrategic Dynamics team, exploring how advanced AI systems will reshape global economic paradigms, incentive structures, and power dynamics. You'll develop models to analyze critical risks including labor obsolescence, economic instability, concentration of power, shocks to the financial system, and degraded living standards as AI capabilities accelerate.

Public Security Policy Researcher
AI Safety CARMA

CARMA's Public Security Policy (PSP) Program is seeking a Policy Researcher to advance our work on AI governance, risk mitigation, and emergency response frameworks. This role will focus on disaster preparedness planning for AI-related crises across multiple jurisdictional levels (national, state/provincial, multinational, and alliance-based), vulnerability assessments of critical societal systems, and development of robust governance approaches.

Digital Media Accelerator
AI Safety Future of Life Institute

The FLI Digital Media Accelerator program aims to help creators produce content, grow their channels, and reach new audiences. We're looking to support creators who can explain complex AI issues - such as AGI implications, control problems, misaligned goals, or Big Tech power concentration - in ways that their audiences can understand and relate to.

Business Operations Lead
AI Safety Cosmos Institute

Cosmos Institute is a 501(c)(3) non-profit launched in 2024 to ensure AI promotes human flourishing. We train philosopher-builders—individuals who unite deep reflection with technical mastery. As our first Business Operations Lead, you’ll build the systems, processes, and dashboards that turn great ideas into insight-driven programs.

Head of Communications
AI Safety FAR.AI

FAR.AI seeks a Head of Communications to shape the AI safety narrative, inform top policymakers, and guide an ambitious team. Drive real impact at the forefront of responsible AI development!

Machine Learning Engineer
AI Safety Gray Swan

We’re looking for top-tier machine learning engineers to help develop cutting-edge AI security solutions. If you have a strong background in building, deploying, and scaling machine learning models and a passion for AI security, we’d love to hear from you.

Fellow
AI Safety SAIF

SAIF is seeking applications for a 6- or 12-month fellowship developing and executing projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.

Expression of Interest
AI Safety Harmony Intelligence

If none of the open roles is a fit for you, but you think you'd be a great future member of the Harmony team, we'd still love to hear from you. In the future, we're likely to have openings spanning Engineering, Research, Sales, Marketing, Operations, People, and more.

Content Writer, Freelance
AI Safety CivAI

We’re looking for a freelance writer to help us produce educational articles about AI for our varied audiences. Audiences may include any groups who stand to be impacted by AI, from elections officials to trade unions to research scientists. Articles will often include interactive demonstrations, built by the CivAI engineering team, that convey some aspect of AI capabilities or risks. We’re aiming for the articles to be short and memorable, presenting unique viewpoints on AI that help people make sense of its far-flung impacts.

Cyber Researcher
AI Safety Pattern Labs

As a cyber researcher at Pattern Labs, you will be at the forefront of research on AI models' cyber capabilities (such as AI’s ability to discover vulnerabilities and develop exploits, carry out network attacks, etc.). This role sits at a unique intersection of cybersecurity expertise and AI capabilities, where your background in vulnerability and cyber research will help shape the future of AI security.

Software Engineer
AI Safety Harmony Intelligence

As a Software Engineer at Harmony, you’ll have a unique opportunity to make a big impact! Your work will be multi-faceted, spanning software engineering, product ideation, project planning, applying cutting-edge AI research, building customer relationships, and more.

Software Engineer
AI Safety Gray Swan

We are seeking a versatile Software Engineer to join our team. This role involves building and refining user interfaces, backend APIs, and database systems, with a preference for candidates who can bridge the gap between frontend and backend development.

Research Scientist
AI Safety FutureSearch

FutureSearch is looking for an exceptional Research Scientist to figure out the best way to research and reason about hard, judgment-laden questions.

Senior Research Fellow
AI Safety Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for (Senior) Research Fellows to join its team and conduct legal research at the intersection of law and artificial intelligence.

Senior Associate, AI Governance
AI Safety The Future Society

The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms. These include laws and regulations, voluntary frameworks, standards, and industry practices.

Evaluators, Systematic Reviews
AI Safety Elicit

We’re working to make Elicit more helpful for writing systematic reviews in the life sciences and social sciences. To do that, we need high-quality evaluations of Elicit’s output, so we’re hiring PhDs to evaluate reviews written with Elicit’s help.

Technical Staff, Forecasting Tools
AI Safety Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Member of Technical Staff, AI Digest
AI Safety Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.
