
Intern, Machine Learning Engineer
We are seeking a Machine Learning Engineer Intern to work alongside our research and engineering teams. This role offers hands-on experience in developing, testing, and deploying AI security solutions. You will contribute to machine learning model evaluations, build tools for AI risk assessment, and help integrate ML-driven insights into our products.

Fellow
SAIF is seeking applications for a 6- or 12-month fellowship developing and executing on projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.

Expression of Interest
If none of the open roles is a fit for you, but you think you'd be a great future member of the Harmony team, we'd still love to hear from you. In the future, we're likely to have openings spanning Engineering, Research, Sales, Marketing, Operations, People, and more.

Content Writer, Freelance
We’re looking for a freelance writer to help us produce educational articles about AI for our varied audiences. Audiences may include any groups who stand to be impacted by AI, from elections officials to trade unions to research scientists. Articles will often include interactive demonstrations, built by the CivAI engineering team, that convey some aspect of AI capabilities or risks. We’re aiming for the articles to be short and memorable, presenting unique viewpoints on AI that help people make sense of its far-flung impacts.

Teaching Fellow, AI Safety
We’re looking for Teaching Fellows to guide discussions with students on our 5-day AI safety courses.

Cyber Researcher
As a cyber researcher at Pattern Labs, you will be at the forefront of research on AI models' cyber capabilities (such as AI’s ability to discover vulnerabilities and develop exploits, carry out network attacks, etc.). This role sits at a unique intersection of cybersecurity expertise and AI capabilities, where your background in vulnerability and cyber research will help shape the future of AI security.

AI Researcher
We are seeking an AI researcher excited about our AI security mission and eager to be part of our scaling up.

Policy Researcher
We are seeking a policy researcher excited about our AI security mission and eager to be part of our scaling up.

Chief of Staff
As Chief of Staff at Harmony, you’ll have a unique opportunity to have a big impact! Your work will be multi-faceted, spanning project planning, product ideation, operations, applying cutting edge AI research, building customer relationships, and more.

Software Engineer
As a Software Engineer at Harmony, you’ll have a unique opportunity to have a big impact! Your work will be multi-faceted, spanning software engineering, product ideation, project planning, applying cutting edge AI research, building customer relationships, and more.

Software Engineer
We are seeking a versatile Software Engineer to join our team. This role involves building and refining user interfaces, backend APIs, and database systems, with a preference for candidates who can bridge the gap between frontend and backend development.

Formal Verification and AI Research Director
Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

Formal Verification and AI Research Lead
Advancements in AI bring both opportunities and serious risks, which simultaneously enable and necessitate more powerful approaches to high-assurance systems. Atlas Computing is a nonprofit working to ensure robust democratic oversight and control of critical infrastructure and AI. We are building an ecosystem for AI systems with provable properties.

AI Safety Course Operations Specialist
We’re seeking an operations specialist to run our flagship AI Safety programs from January 15th through May 7th 2025. This is primarily an operations and coordination role – while knowledge of AI Safety concepts is helpful, expertise isn’t required.

Software Engineer
FutureSearch is looking for exceptional Software Engineers to build features that improve our ability to answer unusually hard questions that other AI systems fall short on.

Research Scientist
FutureSearch is looking for an exceptional Research Scientist to figure out the best way to research and reason about hard, judgment-laden questions.

AI Red Teamer
As an AI Red Teamer at HiddenLayer, you will play a pivotal role in the ML Threat Operations group. In this role, you will evaluate the security of AI systems, focusing on both predictive and generative AI models. You will identify vulnerabilities, simulate adversarial attacks, and provide actionable recommendations to improve the security of AI systems. The ideal candidate is a proactive problem solver with hands-on experience in AI security testing and a deep understanding of machine learning models and adversarial techniques.

Teaching Fellow, AI Governance
We’re looking for Teaching Fellows to guide weekly discussion calls with students on our AI governance course.

Director of Strategic Partnerships
As the Director of Strategic Partnerships, you will take the lead in establishing and overseeing OpenMined’s new fundraising function. This role is pivotal in driving our mission to advance AI safety and Privacy-Enhancing Technologies (PETs).