Founding Generalist

Kairos

📍 Remote (Global) 🕔 Full Time
💰$90,000–$150,000 USD/year 🔄 Rolling Applications

Join us as a Founding Generalist and take real ownership of the core programs that shape the AI safety talent pipeline. You'll take on a wide range of responsibilities that combine strategy, relationship-building, and execution. This isn't a typical operations role: you'll be a key builder on our team, taking on high-stakes work with significant autonomy.

About Kairos

Kairos is a nonprofit accelerating talent into AI safety and policy. In just one year, we've trained over 600 people through our flagship programs:

  • SPAR – the largest AI safety research fellowship, now with 300+ mentees mentored by researchers from Google DeepMind, the UK AI Security Institute, Open Philanthropy, RAND, METR, and others.

  • Pathfinder Fellowship – growing the global network of AI safety university groups from a few dozen to nearly 100, and distributing $1.4M+ in grants to support their work.

  • OASIS Workshops – Bay Area workshops for top university organizers in AI safety, with guest speakers from Constellation, the AI Futures Project (AI 2027), Redwood Research, ARI, and more.

Our fellows have published at NeurIPS and ICML, been featured in Time Magazine, and gone on to lead impactful work across top research labs, think tanks, and policy orgs. We've helped dozens land AI safety roles and supported hundreds in taking meaningful actions in the space.

We see ourselves as a portfolio of highly impactful projects in a fast-evolving field, and we're just getting started. By 2026, we plan to launch even more ambitious initiatives addressing critical gaps in the ecosystem and continue rapidly scaling up our efforts.

What You'll Do

  • Drive recruitment and outreach

    • Lead outreach for SPAR, Pathfinder, and new programs—recruiting both mentors and mentees through emails, Slack, ads, referrals, and partnerships

    • Scale what's working: applications from both mentees and mentors have doubled in every round we've run

  • Grantmaking and application review

    • Review applications and differentiate between candidates with rigor and nuance

    • Help us conduct due diligence on complex grant proposals and make funding recommendations

  • Proactively ship high-impact projects

    • Identify priorities aligned with our mission and execute independently

    • Come up with new ideas, test them, then scale what works

  • Develop deep field expertise

    • Build understanding of AI safety and policy research agendas to inform program design and funding decisions

    • Map the AI safety ecosystem to identify gaps where Kairos could have an outsized impact

  • Build and grow strategic relationships

    • Cultivate relationships with key stakeholders across the AI safety ecosystem

    • Work closely with mentees and mentors, listen to what they need, and help them thrive

  • Build the team

    • Help us interview and recruit future team members as we scale in 2026 and 2027

    • Work with leadership to figure out which programs Kairos should run

Who You Are

We are looking for someone with high agency, strong judgment, and deep alignment with our mission. You care deeply about impact and are excited to build in a fast-paced, high-trust environment. An ideal candidate will have most or all of the following characteristics:

  • High Agency

    You demonstrate strong internal motivation and ownership. You proactively upskill on complex tasks and reliably drive toward Kairos's goals. Specifically, you can:

    • Triage competing priorities and focus on what matters most

    • See the big picture and execute with minimal guidance

    • Thrive in ambiguous, fast-changing environments

    • Set clear daily goals and execute quickly

  • Impact-Minded

    You're motivated by reducing catastrophic risks from AI and ensuring a positive transition to transformative AI. You take individual responsibility while supporting the team. When challenges arise, you're the person who rolls up their sleeves and contributes wherever you can make a difference.

  • Scout Mindset

    You pursue truth over comfort. You recognize that accurate models of the world enable better decisions. You actively resist motivated reasoning, maintain intellectual humility, and stay open to changing your mind. When discussing ideas, you focus on understanding reality rather than defending your existing views.

  • Agile and Decisive

    You move fast and adapt quickly when new information arises. You develop hypotheses about how to create change and act decisively (even under uncertainty) while staying ready to pivot when needed.

  • Alliance Mentality

    You're collaborative and pro-social. You see other people working on AI safety as allies and support shared goals. You reward reflection on mistakes rather than punishing them, and you handle problems constructively.

While not required, we prefer candidates who also have:

  • Existing context on the AI safety ecosystem, its needs, and its gaps. This could include familiarity with AI safety technical research or policy work, whether through prior work experience, courses, casual reading, or study.

  • Entrepreneurial experience and drive, with a propensity for starting new initiatives or seeding programs.

Why Work With Us

  • You'll be working toward reducing risks from advanced AI, potentially the most important challenge of our time.

  • You will shape major programs that influence hundreds of future AI safety and policy professionals.

  • As Kairos expands, your role will grow with your interests and skills. This is a place to gain experience across multiple domains.

  • We're building a world-class team, so you'll be in good company. Work alongside people from METR, Open Philanthropy, Rethink Priorities, and CEA who share your commitment to impact.

  • Potential to manage others as we double or triple our team size by 2027.

  • Collaborate regularly with leading AI safety researchers, policy professionals, funders, and organizers.

What We Offer

  • Base Salary

    $90,000–$150,000, depending on experience, seniority, and location, with the potential for additional compensation for exceptional candidates. We will also pay for work-related travel and expenses. If you work from an AI safety office in Berkeley, London, or Cambridge, MA, we'll cover meals and office expenses.

  • Retirement

    10% 401(k) contribution or an equivalent pension contribution

  • Location

    Access to office space in Berkeley, London, or Cambridge, MA, or optional coworking access if elsewhere. We host biyearly all-team retreats to connect in person, collaborate, and build team culture.

  • Benefits

    Flexible working hours, competitive health insurance, dental and vision coverage, generous vacation policy, and professional development budget

Logistics

  • The start date for this role would be sometime in December.

  • This is a remote opportunity, but you'll be expected to travel a few times a year for conferences and events, predominantly in the US.

  • We'd also be happy for you to work out of any AI safety office in Berkeley, London, or Cambridge, MA.

  • We prefer candidates who can be available for meetings during Eastern Time working hours.

  • We may be able to sponsor visas (particularly O-1 visas), depending on individual circumstances, but we can't make any guarantees.

Our Culture

Kairos is a small, dynamic, and high-trust team motivated by the urgent challenge of making advanced AI go well for humanity. We also believe meaningful work should be enjoyable. We support each other's well-being, celebrate wins, and maintain a healthy sense of humor even when the work is challenging.

Apply
