Infrastructure Engineer

FAR AI

📍Remote (Global) 🕔 Full Time
💰$100,000-$175,000 /year 🔄 Rolling Applications

FAR.AI is seeking an Infrastructure Engineer to manage our GPU cluster, which supports diverse, impactful research workloads, from exploratory prototypes to multi-node training of frontier open-weight models. You will work on our tight-knit Foundations team, which develops flexible and scalable research infrastructure, and consult across the entire FAR.AI technical team to support bespoke and complex workloads.

About the Role

We’re seeking an Infrastructure Engineer to develop and manage scalable infrastructure that supports our research workloads. You will own our existing Kubernetes cluster, deployed on top of bare-metal H100 cloud instances, and will oversee and enhance it to:

  1. support new workloads, such as multi-node LoRA training;

  2. onboard new users, as we double the size of our research team over the next twelve to eighteen months;

  3. add new features, such as fine-grained experiment compute usage tracking.

You will be the point person for cluster-related work. You will work on the Foundations team alongside experienced engineers, including those who designed and built the cluster, who can provide guidance and backup. However, as our first dedicated infrastructure hire, you will need to work autonomously, design solutions to varied and complex problems, and communicate with researchers who are technically skilled but less knowledgeable about our cluster and infrastructure.

This is an opportunity to build the technical foundations of the largest independent AI safety research institute, with one of the most varied research agendas. You will be working directly with both the Foundations team and researchers across the organization to enable bleeding-edge research workloads across our research portfolio.

Responsibilities

  • Build and Maintain

    You will deliver a scalable, easy-to-use compute cluster that supports impactful research by:

    • Empowering the research team to solve their own day-to-day compute problems, such as debugging simple issues and streamlining recurring tasks (e.g. running batch experiments, launching an interactive devbox, etc.).

    • Maintaining and developing in-cluster services, such as backups, experiment tracking, and our in-house LLM-based cluster support bot.

    • Maintaining adequate cluster stability to avoid interfering with research workloads (currently >95% uptime outside of planned maintenance windows).

    • Maintaining situational awareness of the cloud GPU market and assisting leadership with vendor comparisons to ensure we are using the most effective compute platforms.

  • Support Security

    We often collaborate with partners with stringent security requirements (e.g. governments, frontier developers) and handle sensitive information (e.g. non-public exploits, CBRN datasets). You will implement security measures by:

    • Securing the cluster against insider threats (architecting it with adequate isolation to provide data confidentiality and integrity for sensitive workloads) and external threats (minimizing the attack surface and ensuring security updates are promptly installed).

    • Making secure workflows the default, e.g. streamlining the deployment of internal web dashboards behind an OAuth reverse proxy.

    • Championing security across the FAR.AI team, including maintaining and extending our mobile device management (MDM) system.

  • Bleeding-edge Workloads

    You will work with the Foundations team and specific research teams to support novel ML workloads (e.g. fine-tuning a new open-weight model release) by:

    • Architecting our Kubernetes cluster to flexibly support novel workloads.

    • Assisting projects with bespoke requirements, designing and implementing effective infrastructure solutions, and sharing your infrastructure wisdom with ML researchers.

    • Improving observability over cluster resources and GPU utilization to allow us to rapidly diagnose and work around hardware issues or software bugs that may only arise on novel workloads.

About You

It is essential that you

  • Have Kubernetes or other system administration experience.

  • Are curious and willing to rapidly learn the needs of a new domain.

  • Are self-directed and comfortable with ambiguous or rapidly evolving requirements.

  • Are willing to be on-call during waking hours for cluster issues ahead of major deadlines (for a few weeks a quarter).

  • Are interested in improving our security posture through identifying, implementing and administering security policies.

It is preferable that you

  • Have experience supporting ML/AI workloads.

  • Have previously worked in research environments or startups.

  • Are experienced in administering compute or GPU clusters.

  • Are able to adopt a security mindset.

  • Are willing to be part of an eventual on-call rotation, if required.

Example Projects

  • Configure the cluster and user-space development environments to support InfiniBand nodes for high-performance multi-node training.

  • Improve our default devbox K8s pod template to incorporate best-practice workflows for our researchers.

  • Roll out a new mobile device management system to ensure corporate devices meet our security requirements.

  • Streamline cluster onboarding for new starters (possibly in different timezones) and for candidates on time-limited work trials.

  • Be “holder of the keys”, managing permissions and access control for FAR.AI’s team members to technical systems, including streamlining/automating (e.g. via SAML, SCIM) where appropriate.

  • Analyze storage patterns and propose infrastructure improvements for backups, disaster recovery, and usability.

Logistics

You will be a full-time employee of FAR AI, a 501(c)(3) research non-profit.

  • Location: Both remote and in-person (Berkeley, CA) arrangements are possible, though at least 2 hours of daily overlap with Berkeley working hours is required. We sponsor visas for in-person employees in CA, and can also hire remotely in most countries.

  • Hours: Full-time (40 hours/week).

  • Compensation: $100,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.

  • Application process: a programming assessment, a short screening call, two 1-hour interviews, and a one-week paid work trial.

If you have any questions about the role, please reach out at talent@far.ai. If you don't have questions, the best way to ensure a proper review of your skills and qualifications is by applying directly via the application form. Please don't email us to share your resume (it won't have any impact on our decision). Thank you!

FAR.AI

FAR.AI is a technical AI research and education non-profit, dedicated to ensuring the safe development and deployment of frontier AI systems.
