
(Senior) Research Scholar, Special Projects Track
The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars (Special Projects Track) to join its team to launch or manage new programs or initiatives in the field of AI law & policy. Both positions are one-year, visiting roles. For those interested in a one-year program focused on research, please see our (Senior) Research Scholar (General Track) job description.

(Senior) Research Scholar, General Track
The Institute for Law & AI (LawAI) is looking for Research Scholars and Senior Research Scholars to join its team to conduct legal research and engage with policymakers. Both positions are one-year, visiting roles.

(Senior) Research Fellow
The Institute for Law & AI (LawAI) is looking for (Senior) Research Fellows to join its team to conduct legal research at the intersection of law and artificial intelligence.

Senior Associate, AI Governance
The Future Society (TFS) is seeking a driven and experienced professional to help advance our global AI governance activities. Specifically, we are looking for a Senior Associate with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms, including laws and regulations, voluntary frameworks, standards, and industry practices.

Evaluators, Systematic Reviews
We’re working to make Elicit more helpful for writing systematic reviews in the life and social sciences. To do that, we need high-quality evaluations of Elicit’s output, so we’re hiring PhDs to evaluate reviews written with Elicit’s help.

Teaching Fellow, AI Alignment
We’re looking for Teaching Fellows to guide weekly discussion calls with students on our AI alignment course.

GTM (Go to Market) Lead
As the GTM Lead, you’ll run full sales cycles end-to-end, with the opportunity to lead a team. It’s a great role for people who want to own the entire customer relationship, work with engineers and product managers, and wear multiple hats. Because Elicit has broad domain appeal, you’ll also be challenged to ramp up on many different research areas, from biomedicine to policy to industrial manufacturing, though we do have an initial vertical focus.

Member of Technical Staff, AI Digest
Sage is looking for a member of technical staff to (1) ideate, design, build, and write interactive explainers and demos about AI progress for AI Digest, and (2) build relationships with our audience of policymakers and the public, and grow our readership.


Expression of Interest
This is an expression of interest in joining OpenMined, an open source community led by Andrew Trask (author of Grokking Deep Learning). It’s an opportunity to tell us about your background and skills. You can contribute in a way that leverages your industry experience and offers you the chance to grow new skills, while helping us solve the remote data access problem for the benefit of all.

Pro-Social Agent Engineer
Create a forkable virtual world filled with pro-social AI agents that help humans reach their goals while respecting the basic rights of meat.

Senior Software Engineer
If you want to democratize access to cutting-edge AI while averting potential harmful impacts on society, OpenMined is the place for you. We’re building PySyft, an open source software platform for doing just that, and we need you!
We are looking for a highly talented individual with hard-won industry experience and a track record of exceeding expectations, who is ready to level up and do the most challenging and rewarding work of their career so far.

Machine Learning Engineer
As an ML research engineer at Elicit, you will:
Compose tens to thousands of calls to language models to accomplish tasks that we can’t accomplish with a single call
Curate datasets for finetuning models, e.g. for training models to extract policy conclusions from papers
Set up evaluation metrics that tell us which changes to our models or training setup are improvements
Scale up semantic search from a few thousand documents to 100k+ documents

Interpretability Researcher
EleutherAI is seeking talented and motivated individuals to join our Interpretability team to perform cutting-edge research with large language and vision models. We aim to better understand the features learned by today’s deep neural networks, so we can better steer their behavior and inform the public and policymakers about their risks and benefits.

AI Safety and Security Research Engineer
As an AI Safety & Security Research Engineer, you'll advance the state of the art in AI safety and security while developing practical customer-facing tools and products. Your role involves developing novel methods for controlling, monitoring, testing, and analyzing foundation models, as well as building new models with a focus on scalable, real-world deployment. Staying abreast of the latest machine learning advancements is crucial, as you'll contribute to open model innovations and ensure our products remain at the forefront of AI technology. This position blends research with hands-on implementation, requiring both theoretical expertise and practical problem-solving skills to address complex challenges in AI safety and security.