Research Engineer, Infrastructure, Tinker
Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We’re looking for an infrastructure research engineer to design, scale, and harden the systems behind Tinker so our internal teams and external customers can fine-tune models smoothly, reliably, and cost-effectively. You’ll sit at the intersection of large-scale training systems and product infrastructure: building multi-tenant scheduling, storage, observability, and reliability into a developer-friendly API.
Your work will ensure all Tinker users can focus on research and building without having to think about infrastructure.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every 6 months. We may also post individual roles for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
What You’ll Do
- Scale Tinker by designing and implementing distributed job orchestration: placement, preemption, and fair-share scheduling for multi-tenant workloads.
- Optimize GPU utilization, throughput, and reliability across clusters (autoscaling, bin-packing, quotas).
- Develop reusable frameworks and libraries to improve Tinker transparency, reproducibility, and performance.
- Co-design with our researchers and developer experience engineers, turning fine-tuning challenges into product features.
- Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.
Skills and Qualifications
Minimum qualifications:
- Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.
- Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
- Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.
- A bias for action: the initiative to work across different stacks and teams wherever you spot an opportunity, making sure things ship.
- Strong engineering skills: the ability to contribute performant, maintainable code and to debug complex codebases.
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
- Hands-on experience with container orchestration and CI/CD for long-running GPU workloads.
- Background in multi-tenant platform design (quotas, fairness, isolation), storage systems for ML artifacts, and cost governance.
- Contributions to ML systems OSS (e.g., PyTorch/DeepSpeed/XLA, orchestration/tooling), or prior platform work on an ML API.
- Clear, precise communication with product and external users; comfort translating field learnings into platform roadmaps.
Logistics
- Location: This role is based in San Francisco, California.
- Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
- Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
- Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.