Interpretability Researcher
EleutherAI is seeking talented and motivated individuals to join our Interpretability team to perform cutting-edge research with large language and vision models. We aim to better understand the features learned by today's deep neural networks so that we can steer their behavior and inform the public and policymakers about their risks and benefits.
AI Safety and Security Research Engineer
As an AI Safety & Security Research Engineer, you'll advance the state of the art in AI safety and security while developing practical customer-facing tools and products. Your role involves developing novel methods for controlling, monitoring, testing, and analyzing foundation models, as well as building new models with a focus on scalable, real-world deployment. Staying abreast of the latest advances in machine learning is crucial, as you'll contribute to innovations in open models and ensure our products remain at the forefront of AI technology. This position blends research with hands-on implementation, requiring both theoretical expertise and practical problem-solving skills to address complex challenges in AI safety and security.