
Machine Learning Engineer
We’re looking for top-tier machine learning engineers to help develop cutting-edge AI security solutions. If you have a strong background in building, deploying, and scaling machine learning models and a passion for AI security, we’d love to hear from you.

Expression of Interest
If none of the open roles is a fit for you, but you think you'd be a great future member of the Harmony team, we'd still love to hear from you. In the future, we're likely to have openings spanning Engineering, Research, Sales, Marketing, Operations, People, and more.

Content Writer, Freelance
We’re looking for a freelance writer to help us produce educational articles about AI for our varied audiences. Audiences may include any groups who stand to be impacted by AI, from elections officials to trade unions to research scientists. Articles will often include interactive demonstrations, built by the CivAI engineering team, that convey some aspect of AI capabilities or risks. We’re aiming for the articles to be short and memorable, presenting unique viewpoints on AI that help people make sense of its far-flung impacts.

Cyber Researcher
As a cyber researcher at Pattern Labs, you will be at the forefront of research on AI models' cyber capabilities (such as AI’s ability to discover vulnerabilities and develop exploits, carry out network attacks, etc.). This role sits at a unique intersection of cybersecurity expertise and AI capabilities, where your background in vulnerability and cyber research will help shape the future of AI security.

AI Researcher
We are seeking an AI researcher excited about our AI security mission and eager to be part of our scaling up.

Policy Researcher
We are seeking a policy researcher excited about our AI security mission and eager to be part of our scaling up.

Software Engineer
We are seeking a versatile Software Engineer to join our team. This role involves building and refining user interfaces, backend APIs, and database systems, with a preference for candidates who can bridge the gap between frontend and backend development.

Research Scientist
FutureSearch is looking for an exceptional Research Scientist to figure out the best way to research and reason about hard, judgment-laden questions.

Senior Associate, AI Governance
The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms. These include laws and regulations, voluntary frameworks, standards, and industry practices.

Evaluators, Systematic Reviews
We’re working to make Elicit more helpful for writing systematic reviews in the life sciences and social sciences. To do that, we need high-quality evaluations of Elicit’s output, so we’re hiring PhDs to evaluate reviews written with Elicit’s help.

Member of Technical Staff, AI Digest
Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Machine Learning Engineer
As an ML research engineer at Elicit, you will:
Compose tens to thousands of calls to language models to accomplish tasks that a single call can't
Curate datasets for finetuning models, e.g. for training models to extract policy conclusions from papers
Set up evaluation metrics that tell us whether changes to our models or training setup are improvements
Scale up semantic search from a few thousand documents to 100k+ documents

Interpretability Researcher
EleutherAI is seeking talented and motivated individuals to join our Interpretability team to perform cutting-edge research with large language and vision models. We aim to better understand the features learned by today’s deep neural networks so we can better steer their behavior and inform the public and policymakers about their risks and benefits.

AI Safety and Security Research Engineer
As an AI Safety & Security Research Engineer, you'll advance the state of the art in AI safety and security while developing practical customer-facing tools and products. Your role involves developing novel methods for controlling, monitoring, testing, and analyzing foundation models, as well as building new models with a focus on scalable, real-world deployment. Staying abreast of the latest machine learning advancements is crucial, as you'll contribute to open model innovations and ensure our products remain at the forefront of AI technology. This position blends research with hands-on implementation, requiring both theoretical expertise and practical problem-solving skills to address complex challenges in AI safety and security.