Fellow

Safe AI Forum

📍 Remote (Global) 🕔 Full Time
💰 $80K-140K USD p.a. 🔄 Rolling Applications

SAIF is seeking applications for a 6- or 12-month fellowship developing and executing projects related to SAIF’s mission. We are open to individuals with a range of backgrounds and levels of experience in AI safety and governance who are eager to conduct high-impact work. Fellows will be welcomed as part of the team and will have access to SAIF’s broad network of experts and collaborators. The fellowship is fully remote.

About the Role

We are looking to expand our existing routes to impact by bolstering our team’s research capacity. We are excited for applicants with AI safety and governance research expertise to join the team as Fellows. Your role will be to design and execute projects we think have great potential for impact within the AI safety and governance space. Exceptional candidates may also propose their own project.

The team and SAIF leadership will provide support in scoping and resourcing your project, as well as oversight and advice throughout your time with us as a Fellow. Fellows choose either a 6- or 12-month duration, during which you will collaborate closely with the SAIF team and may assist on other team initiatives, though you will be primarily accountable for your own core research project. Upon completion of your fellowship, should both parties wish to continue, there may also be the option of permanent employment.

Qualifications & Responsibilities

  • Research Fellow

    • Interest in making an impactful contribution to AI safety and governance.

    • Familiarity with key issues in AI safety and governance.

    • Experience conducting original research on AI safety, governance, or in related fields.

    • Demonstrated ability to produce consistently high-quality work.

    • Effective collaborator and communicator within a remote team.

    • Self-directed and proactive in initiating and conducting research.

  • Senior Research Fellow

    • Advanced academic or professional experience in AI safety and governance.

    • Strong track record of impactful research, publications, policy influence, or events.

    • Demonstrated leadership in advancing AI safety and governance efforts.

    • Ability to independently design and execute complex research projects.

    • Existing familiarity with the broader AI safety and governance community.

    • Understanding of how this Fellowship may enhance your impact potential within AI safety and governance.

  • Special Projects Fellow

    • Strong interest in advancing AI safety and governance through applied projects, policy implementation, or operational initiatives.

    • Experience designing and executing high-impact projects in AI safety, governance, or related fields.

    • Ability to analyze complex AI-related challenges and develop practical interventions.

    • Effective collaborator, comfortable working across teams.

    • Strong project management skills, with a track record of successfully leading initiatives from ideation to execution.

About the Projects

We are excited for Fellows to work on a broad range of topics, from more traditional research to event development and execution, or application-oriented work. Below are some examples of projects we would be eager for Fellows to take on.

  • Research on International Coordination

    We want a better understanding of what types of international coordination (including but not limited to international agreements) may be viable between relevant states in the coming years, and whether they could be enacted quickly if relevant AI-related timelines are short.

  • Research on International AI Safety Standards and Best Practices

    Conduct research on how actors can best engage in standards bodies, or how to develop internal corporate policies (e.g., Frontier Safety Policies) that encourage safer AI development.

  • Research and Development of International Coordination Bodies for AI

    Help us understand what bodies would be most helpful to develop in the coming years. Such bodies could include incident-sharing platforms, trusted information channels, verification regimes, etc.

  • Bolstering Academic Collaboration on AI Safety

    Facilitate international collaboration in a set of key academic fields we think would benefit from it. Some of the fields we are currently most excited to see further cooperation on include AI verification, deception, and risk thresholds.

  • Create Your Own Project

    For exceptional candidates with robust experience in AI safety research who are particularly interested in further exploring a topic that aligns with SAIF’s mission, there is also the option to pitch a project of their own.


About You

You may be a good fit if you: 

  • Are impact-driven and excited about international cooperation on AI safety. We are a small, young, mission-driven team, laser-focused on having as big an impact as we can.

  • Are discerning and ruthless with prioritization. We are looking for someone who can evaluate the importance of different research directions and reprioritize and pivot quickly based on informed judgement.

  • Are independent and able to ideate, scope, and execute on your project. Ideally you’d have high context on AI safety and familiarity with designing and delivering your own research projects autonomously. 

  • Are an effective communicator. We are a remote team, and this role may be fairly autonomous, so it will be important that you can communicate clearly and collaborate with the team throughout.

  • Are a clear self-starter. We want someone who is intrinsically motivated and has demonstrated experience conducting independent projects.

  • Have a strong interest in or understanding of international relations, diplomacy, and their intersections with emerging technology. As global governance and technological developments significantly shape our work, familiarity with these areas provides valuable context.

  • Have strong familiarity with and/or experience in China. A considerable amount of our work involves China, so direct experience with or strong knowledge of the country will bolster your ability to contribute to that work.

We do not have specific requirements on candidates’ educational or professional background; however, those with multiple years of research experience and specific knowledge of AI safety will likely be best situated for the role.

Logistics

You will be hired by SAIF, a US-based 501(c)(3) non-profit, through a global HR platform called Deel. Deel allows us to hire remotely and operates in 160+ countries; further details are available here. We aim to hire Fellows as full-time employees (FTE) with benefits; however, contract specifications are contingent upon employee jurisdiction.

The role is fully remote, though our current staff are based in London, SF, Boston, Toronto, and NYC, and you could likely co-work with them if you are located in one of those cities.

Other Details

  • Duration: 6 or 12 months, with possible contract extension or conversion.

  • Compensation: $80K-140K/year, depending on experience and adjusted by location. The listed range is commensurate with a candidate based in San Francisco, CA or New York City, NY.

  • Application process: Our process will likely include several interviews and one or more paid work tasks. We expect that filling in the application form should take approximately 5-10 minutes.

SAIF is committed to fostering an inclusive and diverse workplace. We welcome applicants from all backgrounds and encourage individuals of all identities and experiences to apply. If you have any questions about the role, or possible application accommodations for a disability, please get in touch at info@saif.org.

SAIF

The Safe AI Forum aims to foster responsible governance to reduce catastrophic risks through shared understanding and collaboration among key global actors.
