
Research Engineer, Infrastructure, Training Systems

San Francisco

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who have created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, and popular open source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We’re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models for deployment and research. Your goal is to make experimentation and training at Thinking Machines fast and reliable, so our research teams can focus on science rather than system bottlenecks.

This role is ideal for someone who blends deep systems and performance expertise with a curiosity for machine learning at scale. You’ll take ownership of the training stack end to end, ensuring every GPU cycle drives scientific progress.

Note: This is an "evergreen role" that we keep open on an on-going basis to express interest. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply if you get more experience, but please avoid applying more than once every 6 months. You may also find that we put up postings for singular roles for separate, project or team specific needs. In those cases, you're welcome to apply directly in addition to an evergreen role.

What You’ll Do

  • Design, implement, and optimize distributed training systems that scale large training workloads across thousands of GPUs and nodes.
  • Develop high-performance optimizations to maximize throughput and efficiency.
  • Develop reusable frameworks and libraries to improve training reproducibility, reliability, and scalability for new model architectures.
  • Establish standards for reliability, maintainability, and security, ensuring systems are robust under rapid iteration.
  • Collaborate with researchers and engineers to build scalable infrastructure.
  • Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.

Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.
  • Strong engineering skills, including the ability to contribute performant, maintainable code and to debug complex codebases.
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
  • Thrive in a highly collaborative environment involving many different cross-functional partners and subject-matter experts.
  • A bias for action: you take the initiative to work across different stacks and teams wherever you spot an opportunity to make sure something ships.

Preferred qualifications — we encourage you to apply if you meet some but not all of these:

  • Experience making distributed training for the world’s largest models stable, reliable, and performant.
  • Track record of improving research productivity through infrastructure design or process improvements.
  • Contributions to open-source ML infrastructure such as PyTorch, XLA, Megatron-LM, or DeepSpeed.

Logistics

  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
