(Senior) Research Scholar, Special Projects Track
AI Safety Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars (Special Projects Track) to join its team to launch or manage new programs or initiatives in the field of AI law & policy. Both positions are one-year, visiting roles. For those interested in a one-year program focused on research, please see our (Senior) Research Scholar (General Track) job description.

Read More
(Senior) Research Scholar, General Track
AI Safety Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars to join its team to conduct legal research and engage with policymakers. Both positions are one-year, visiting roles.

Read More
Senior Research Fellow
AI Safety Institute for Law & AI

The Institute for Law & AI (LawAI) is looking for (Senior) Research Fellows to join its team to conduct legal research at the intersection of law and artificial intelligence.

Read More
Senior Associate, AI Governance
AI Safety The Future Society

The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and implementing international AI policy and governance mechanisms. These include laws and regulations, voluntary frameworks, standards, and industry practices.

Read More
Evaluators, Systematic Reviews
AI Safety Elicit

We’re working to make Elicit more helpful for writing systematic reviews in the life sciences and social sciences. To do that, we need really high-quality evaluations of Elicit’s output. So, we’re hiring PhDs to evaluate reviews written with Elicit’s help.

Read More
GTM (Go-to-Market) Lead
AI Safety Elicit

As the GTM lead, you’ll run full sales cycles end-to-end, with the opportunity to lead a team. It’s great for people who want to own the entire customer relationship, work with engineers and product managers, and wear multiple hats. Because Elicit has broad domain appeal, you’ll also be challenged to ramp up on lots of different research areas, from biomedicine to policy to industrial manufacturing, though we do have an initial vertical focus.

Read More
Technical Staff, Forecasting Tools
AI Safety Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Read More
Member of Technical Staff, AI Digest
AI Safety Sage Future

Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.

Read More
Expression of Interest
AI Safety OpenMined

This is an expression of interest in joining OpenMined, an open source community led by Andrew Trask (author of Grokking Deep Learning). It’s an opportunity to tell us about your background and skills. You can contribute in a way that leverages your industry experience and offers you an opportunity to grow new skills, while helping us solve the remote data access problem for the benefit of all.

Read More
Senior Software Engineer
AI Safety OpenMined

If you want to democratize access to cutting edge AI, while averting potential harmful impacts on society, OpenMined is the place for you. We’re building PySyft, an open source software platform for doing just that, and we need you!

We are looking for a highly talented individual with hard-won industry experience and a track record of exceeding expectations, who is ready to level up and take on the most challenging and rewarding work of their career so far.

Read More
Machine Learning Engineer
AI Safety Elicit

As an ML research engineer at Elicit, you will:

  • Compose together tens to thousands of calls to language models to accomplish tasks that we can't accomplish with a single call.

  • Curate datasets for finetuning models, e.g. for training models to extract policy conclusions from papers.

  • Set up evaluation metrics that tell us which changes to our models or training setup are improvements.

  • Scale up semantic search from a few thousand documents to 100k+ documents.

Read More
Interpretability Researcher
AI Safety EleutherAI

EleutherAI is seeking talented and motivated individuals to join our Interpretability team to perform cutting-edge research with large language and vision models. We aim to better understand the features learned by today’s deep neural networks, so we can better steer their behavior and inform the public and policymakers about their risks and benefits.

Read More
AI Safety and Security Research Engineer
AI Safety Gray Swan

As an AI Safety & Security Research Engineer, you'll advance the state of the art in AI safety and security while developing practical customer-facing tools and products. Your role involves developing novel methods for controlling, monitoring, testing, and analyzing foundation models, as well as building new models with a focus on scalable, real-world deployment. Staying abreast of the latest machine learning advancements is crucial, as you'll contribute to open model innovations and ensure our products remain at the forefront of AI technology. This position blends research with hands-on implementation, requiring both theoretical expertise and practical problem-solving skills to address complex challenges in AI safety and security.

Read More