Research, Pre-Training Data
Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
Pre-training data research sits at the core of our roadmap. This work blends research with large-scale data engineering to help assemble the pre-training datasets and data systems that underpin the next generation of AI models. You’ll design and implement methods for sourcing, curating, and analyzing pre-training data for quality and performance.
You’ll work with automated pipelines and human-in-the-loop processes, contributing both scientific insight and production-grade code. It’s ideal for someone who enjoys working at the intersection of data, machine learning, and systems, and who’s excited by the challenge of shaping frontier AI.
This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to this evergreen role.
What You’ll Do
- Design and implement techniques for curating, sourcing, and filtering large-scale text, code, and multimodal data.
- Develop data quality metrics and analysis to measure coverage, diversity, and representativeness across sources.
- Collaborate with research and infrastructure teams to scale data processing systems efficiently and reproducibly.
- Investigate and mitigate data risks, including privacy, safety, and licensing concerns, to ensure responsible and ethical data use.
- Continuously evaluate dataset improvements by analyzing their downstream effects on model learning and behavior.
- Publish and present research that moves the entire community forward. Share code, datasets, and insights that accelerate progress across industry and academia.
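To give a flavor of the curation and quality-filtering work described above, here is a minimal illustrative sketch of the kind of heuristic document filter used in pre-training data pipelines. The specific rules and thresholds below are hypothetical, not a description of our actual pipeline:

```python
# Toy quality filter for pre-training text documents.
# All thresholds are illustrative defaults, not production values.
def passes_quality_filters(text: str,
                           min_words: int = 5,
                           max_symbol_ratio: float = 0.3,
                           min_mean_word_len: float = 3.0) -> bool:
    """Return True if a document passes simple quality heuristics."""
    words = text.split()
    if len(words) < min_words:
        return False  # too short to be a useful training document
    # Fraction of characters that are neither alphanumeric nor whitespace:
    clean = sum(c.isalnum() or c.isspace() for c in text)
    if 1 - clean / len(text) > max_symbol_ratio:
        return False  # likely markup residue or symbol spam
    mean_len = sum(len(w) for w in words) / len(words)
    if mean_len < min_mean_word_len:
        return False  # likely boilerplate or token soup
    return True

docs = [
    "The quick brown fox jumps over the lazy dog near the river.",
    "!!! $$$ ### @@@ %%%",
]
kept = [d for d in docs if passes_quality_filters(d)]
```

In practice, filters like these are only one layer; production pipelines combine heuristics with model-based scoring and ablation studies of downstream effects.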
Skills and Qualifications
Minimum qualifications:
- Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX). Comfortable with debugging distributed training and writing code that scales.
- Bachelor’s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.
- Clarity in communication and the ability to explain complex technical concepts in writing.
Preferred qualifications — we encourage you to apply if you meet at least some, though not necessarily all, of the following:
- A strong grasp of probability, statistics, and ML fundamentals. You can look at experimental data and distinguish between real effects, noise, and bugs.
- Experience with curation, preprocessing, and analysis of large-scale text, code, or multimodal datasets.
- Prior experience in data engineering, dataset construction, or large-scale web data processing for machine learning models.
- Experience evaluating or improving training data quality and knowledge of data ethics, safety, and licensing frameworks relevant to AI dataset creation.
- Contributions to open datasets, research publications, or data tooling.
- PhD in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding; or, equivalent industry research experience.
Logistics
- Location: This role is based in San Francisco, California.
- Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
- Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
- Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.