Teaching Assistant

TARA

📍 Remote (Global) 🕔 0.25 FTE
💰 $80k AUD/year ⏳ 23/01/2026

The Technical Alignment Research Accelerator (TARA) is a 14-week program launching in March 2026, designed to upskill technical APAC talent in AI safety. Using the ARENA curriculum, TARA covers advanced ML topics including transformer architectures, RL, mechanistic interpretability, sparse autoencoders, and model evaluation techniques.

As a Teaching Assistant, you'll work remotely to guide approximately 26 participants across the city cohorts in your assigned timezone cluster. All instruction and support are delivered online - you don't need to be physically present in any of the cities. You can be based anywhere in the world, as long as you're available during your cluster's Saturday session hours. You'll be part of a team of 3-4 TAs, each responsible for one of three clusters:

  • Cluster 1: Singapore + Taipei — Saturdays 10:00 AM - 5:30 PM SGT (UTC+8)

  • Cluster 2: Manila + Tokyo — Saturdays 10:00 AM - 5:30 PM PHT (UTC+8) / 11:00 AM - 6:30 PM JST (UTC+9)

  • Cluster 3: Sydney + Melbourne + Brisbane — Saturdays 9:30 AM - 5:00 PM AEST/AEDT (UTC+10/+11)

All cohorts run on a synchronised schedule (everyone does Week 1 together, Week 5 together, projects together), so you and your fellow TAs can support each other with lecture prep, queue management, and project feedback.

The role involves leading Saturday sessions and providing flexible remote support. Total time commitment is approximately 10 hours weekly: 7.5 hours on Saturdays plus 2.5 hours of weekday support and preparation.

Course Structure & Weekly Schedule

Each Saturday begins with you introducing new technical concepts, followed by pair programming sessions. The standard Saturday schedule (~7.5 hours):

  • Hour 1: Lecture introducing the week's topics

  • Hours 2-3: Pair programming with live remote TA support

  • Midday: Lunch

  • Hours 4-7: Continued pair programming with TA support

  • End of day: Daily download - participants share progress and flag anticipated challenges for the coming week

During the week, participants work independently on the material while you provide asynchronous Slack support (~2.5 hours).

Core Responsibilities

Saturday session delivery

  • Lead 1-hour lectures on the week's curriculum topics, such as RL, transformer architectures, mechanistic interpretability, sparse autoencoders, and model evaluations

  • Proactively check in with pairs via Slack to see if they need help

  • Jump on Zoom calls or Slack huddles to work through problems as they arise

  • Encourage "learning in public" - when you help someone solve a problem, post the resolution so others can benefit

Weekday support

  • Offer asynchronous support via Slack for participants working independently

  • Help resolve environment setup issues, compute access problems, and technical blockers

Assessment and feedback

  • Review participant project proposals during the final curriculum phase (Weeks 8-11)

  • Provide technical feedback on projects (Weeks 12-14)

Team collaboration

  • Participate in weekly TA check-ins

  • Step in for other TAs if needed - our multi-TA model provides backup coverage

  • Share learnings and resources across clusters

Qualifications

Required

  • Completed most or all of the ARENA curriculum

  • Strong Python and PyTorch skills

  • Strong grasp of RL, transformer architectures, mechanistic interpretability, sparse autoencoders, and model evaluation

  • Experience explaining complex technical concepts

  • Ability to mentor in programming/ML

  • Patient and encouraging teaching style

  • Proactive communication habits

  • Genuine interest in AI safety

  • Available for the icebreaker session on Saturday 7 March 2026 (~1.5 hours)

  • Available every Saturday from 14 March - 13 June 2026 (~7.5 hours per session)

Nice-to-have

  • Technical AI alignment research experience

  • Previous experience running technical workshops or bootcamps

  • Experience with distributed/remote teaching

Why join us?

  • Shape APAC AI safety talent development: Help expand the first dedicated technical AI safety program across the Asia-Pacific region

  • Teaching autonomy: Help define how we teach AI safety concepts

  • Career development: Deepen your technical and teaching abilities

  • Community impact: Build technical AI safety communities across multiple cities

  • Collaborative team: Work alongside other TAs

Application Process

Applications close Friday 23 January 2026. We aim to:

  • Begin reviewing and interviewing candidates as applications come in

    • If we find the right people before 23 January, we may hire them ahead of the deadline. Early applications are encouraged!

  • Conduct interviews between 2 January and 19 February 2026

  • Make final decisions by 21 February 2026

Questions? Contact yanni@taraprogram.org or zac@taraprogram.org


The Technical Alignment Research Accelerator is a nonprofit that aims to create and accelerate AI safety careers across Asia-Pacific.
