AI Red Teamer
Trajectory Labs is a stealth AI security startup working with leading AI companies. We're hiring to red-team products from frontier AI labs.
Member of Technical Staff
We're hiring a Member of Technical Staff to help us build and study the AI Village.
AI Strategy Team Lead
Rethink Priorities (RP) is hiring a Team Lead to establish and lead a new AI Strategy Team focused on navigating transformative AI. The team will identify high-leverage, neglected opportunities to reduce existential risk and help secure beneficial outcomes from transformative AI. Its work will focus on producing decision-relevant strategic analysis and supporting the incubation of new projects and initiatives.
Expression of Interest
The AI Verification and Evaluation Research Institute (AVERI) is seeking engineers and AI policy professionals to chart the course of frontier AI auditing. Senior individual contributors will be considered as policy, engineering, or hybrid candidates (i.e., those with experience in both domains).
AI Policy Fellowship
The IAPS AI Policy Fellowship is a fully-funded, three-month program for professionals from varied backgrounds seeking to strengthen practical policy skills for securing a positive future in a world with powerful AI.
Senior Research Engineer
FAR.AI is seeking a Senior Research Engineer to accelerate and scale up our research. Your focus will be on tackling challenging engineering problems in one of our core safety agendas, mentoring and unblocking other staff members in their technical work, and increasing the depth and scale of our research work overall.
Hackathons Program Manager
We're seeking a Hackathons Program Manager to own and scale Apart Research's flagship hackathon program. Our hackathons bring together 200-1000+ participants globally to produce actionable research on critical AI safety questions, from AI-enabled CBRN risks to defensive technologies against emerging threats.
Research Fellow, Technical Governance Team
MIRI’s Technical Governance Team plans to run a small research fellowship program in early 2026. The program will run for 8 weeks and include a $1,200/week stipend. Fellows are expected to work on their projects 40 hours per week. The program is remote-by-default, with an in-person kickoff week in Berkeley, CA (flights and housing provided). Participants who already live in or near Berkeley are free to use our office for the duration of the program.
Facilitator
Excellent facilitators are critical for delivering a high-quality experience to course participants and, ultimately, for getting more top talent working on the world's most pressing issues.
Mentor
Excellent mentors are key to accelerating people into high-impact AI safety work. With the pace of AI development, we want to help people gain the skills and expertise they need as quickly as possible. As a mentor, you'll be shaping the public artefacts participants use to launch their careers. This requires deep expertise, effective coaching, and commitment to the mission.
Research Scientist – CBRN Risk Modeling
SaferAI is seeking a Research Scientist with a strong ability to perform technical research on risk modeling in AI and CBRN. Ideal candidates will have experience conducting research on AI models, preferably including CBRN risk assessment.
AI Governance Senior Associate / Program Officer
The Effective Institutions Project (EIP) is recruiting an AI Governance Senior Associate or Program Officer who will drive forward tens of millions of dollars in grantmaking across AI safety as well as AGI and democracy, with the potential to influence more. We’re looking for someone to support our Senior Program Officer with evaluating individual opportunities, writing clearly for a donor audience, and big-picture thinking about new large-scale initiatives.
Data Scientist
Epoch AI is seeking part-time data scientists to assist with our AI research efforts. This role involves reviewing technical literature, tracking benchmark data, compiling technical infrastructure details, and analyzing various sources to build comprehensive insights about AI technologies and companies.
AI Governance Fund Lead
We are seeking an exceptional leader to join Astralis Foundation and take our Shared Horizons fund, a philanthropic initiative focused on International AI Governance, from the piloting stage to the scaling stage.
Field Strategist
Atlas Computing is looking for a cohort of Field Strategists to join us in scoping and building new organizations to secure society against the most severe risks it may face as AI becomes more capable.
Societal Defense Researcher
CARMA's Public Security Policy (PSP) Program is seeking a Policy Researcher to advance our work on AI governance, risk mitigation, and emergency response frameworks. This role will focus on disaster preparedness planning for AI-related crises across multiple jurisdictional levels (national, state/provincial, multinational, and alliance-based), vulnerability assessments of critical societal systems, and development of robust governance approaches.
Lab Operations Coordinator
We're seeking a Lab Operations Coordinator to ensure Apart Lab runs smoothly and is able to continue to scale efficiently. You'll manage the operational details that enable our research teams to focus on their work, from onboarding new participants and tracking compliance to coordinating funding and conference logistics.
Founding Generalist
Join us as a Founding Generalist and take real ownership over core programs that shape the AI safety talent pipeline. You'll take on a wide range of responsibilities that combine strategy, relationship-building, and execution. This isn't a typical operations role: you'll be a key builder on our team, taking on high-stakes work with significant autonomy.
SPAR Part-Time Contractor
Help scale SPAR's operational capacity and deliver great programming to our community of AI safety researchers.