Intern, Machine Learning Engineer
AI Safety | Gray Swan

We are seeking a Machine Learning Engineer Intern to work alongside our research and engineering teams. This role offers hands-on experience in developing, testing, and deploying AI security solutions. You will contribute to machine learning model evaluations, build tools for AI risk assessment, and help integrate ML-driven insights into our products.

Fellow
AI Safety | SAIF

SAIF is seeking applications for a 6- or 12-month fellowship developing and executing projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.

Expression of Interest
AI Safety | Harmony Intelligence

If none of the open roles is a fit for you, but you think you'd be a great future member of the Harmony team, we'd still love to hear from you. In the future, we're likely to have openings spanning Engineering, Research, Sales, Marketing, Operations, People, and more.

Content Writer, Freelance
AI Safety | CivAI

We’re looking for a freelance writer to help us produce educational articles about AI for our varied audiences. Audiences may include any groups who stand to be impacted by AI, from election officials to trade unions to research scientists. Articles will often include interactive demonstrations, built by the CivAI engineering team, that convey some aspect of AI capabilities or risks. We’re aiming for the articles to be short and memorable, presenting unique viewpoints on AI that help people make sense of its far-reaching impacts.

Cyber Researcher
AI Safety | Pattern Labs

As a cyber researcher at Pattern Labs, you will be at the forefront of research on AI models' cyber capabilities, such as their ability to discover vulnerabilities, develop exploits, and carry out network attacks. This role sits at a unique intersection of cybersecurity expertise and AI capabilities, where your background in vulnerability and cyber research will help shape the future of AI security.

Chief of Staff
AI Safety | Harmony Intelligence

As Chief of Staff at Harmony, you’ll have a unique opportunity to have a big impact! Your work will be multi-faceted, spanning software engineering, product ideation, project planning, applying cutting-edge AI research, building customer relationships, and more.

Software Engineer
AI Safety | Harmony Intelligence

As a Software Engineer at Harmony, you’ll have a unique opportunity to have a big impact! Your work will be multi-faceted, spanning software engineering, product ideation, project planning, applying cutting-edge AI research, building customer relationships, and more.

Software Engineer
AI Safety | Gray Swan

We are seeking a versatile Software Engineer to join our team. This role involves building and refining user interfaces, backend APIs, and database systems, with a preference for candidates who can bridge the gap between frontend and backend development.

Formal Verification and AI Research Director
AI Safety | Atlas Computing

Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

Formal Verification and AI Research Lead
AI Safety | Atlas Computing

Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

AI Safety Course Operations Specialist
AI Safety | BlueDot Impact

We’re seeking an operations specialist to run our flagship AI Safety programs from January 15 through May 7, 2025. This is primarily an operations and coordination role – while knowledge of AI Safety concepts is helpful, expertise isn’t required.

Software Engineer
AI Safety | FutureSearch

FutureSearch is looking for exceptional Software Engineers to build features that improve our ability to answer unusually hard questions where other AI systems fall short.

Research Scientist
AI Safety | FutureSearch

FutureSearch is looking for an exceptional Research Scientist to figure out the best way to research and reason about hard, judgment-laden questions.

AI Red Teamer
AI Safety | HiddenLayer

As an AI Red Teamer at HiddenLayer, you will play a pivotal role in the ML Threat Operations group. In this role, you will evaluate the security of AI systems, focusing on both predictive and generative AI models. You will identify vulnerabilities, simulate adversarial attacks, and provide actionable recommendations to improve their security. The ideal candidate is a proactive problem solver with hands-on experience in AI security testing and a deep understanding of machine learning models and adversarial techniques.

Director of Strategic Partnerships
AI Safety | OpenMined

As the Director of Strategic Partnerships, you will take the lead in establishing and overseeing OpenMined’s new fundraising function. This role is pivotal in driving our mission to advance AI safety and Privacy-Enhancing Technologies (PETs).
