Research Intern, RL & Post-Training Systems, Turbo (Summer 2026)
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Mamba, FlexGen, SWARM Parallelism, Mixture of Agents, and RedPajama.
Role Overview
The Turbo Research team investigates how to make post-training and reinforcement learning for large language models efficient, scalable, and reliable. Our work sits at the intersection of RL algorithms, inference systems, and large-scale experimentation, where the cost and structure of inference dominate overall training efficiency and shape what learning algorithms are practical.
As a research intern, you will study RL and post-training methods whose performance and scalability are tightly coupled to inference behavior, co-designing algorithms and systems rather than treating them independently. Projects aim to unlock new regimes of experimentation—larger models, longer rollouts, and more complex evaluations—by rethinking how inference, scheduling, and training interact.
Requirements
We’re looking for research interns who want to work on foundational questions in RL and post-training, grounded in realistic inference systems.
You might be a strong fit if you:
- Are pursuing a PhD or MS in Computer Science, EE, or a related field (exceptional undergraduates considered).
- Have research experience in one or more of:
  - RL or post-training for large models (e.g., RLHF, RLAIF, GRPO, preference optimization).
  - ML systems (inference engines, runtimes, distributed systems).
  - Large-scale empirical ML research or evaluation.
- Are comfortable with empirical research:
  - Designing controlled experiments and ablations.
  - Interpreting noisy results and drawing principled conclusions.
- Can work across abstraction layers:
  - Strong Python skills for experimentation.
  - Willingness to modify inference or training systems (experience with C++, CUDA, or similar is a plus).
- Care about research insight, not just benchmarks:
  - You ask why methods work or fail under real system constraints.
  - You think about how infrastructure assumptions shape algorithmic outcomes.
Example Research Directions
Intern projects are tailored to your background and interests, and may include:
- Inference-Aware RL & Post-Training
  - Designing RL or preference-optimization objectives that explicitly account for inference cost and structure (e.g., speculative decoding, partial rollouts, controllable sampling).
  - Studying how inference-time approximations affect learning dynamics in GRPO-, RLHF-, RLAIF-, or DPO-style methods.
  - Analyzing bias, variance, and stability trade-offs introduced by accelerated inference within RL loops.
- RL-Centric Inference Systems
  - Developing inference mechanisms that support deterministic, reproducible RL rollouts at scale.
  - Exploring batching, scheduling, and memory-management strategies optimized for RL and evaluation workloads rather than pure serving.
  - Investigating how KV-cache policies, sampling controls, or runtime abstractions influence learning efficiency.
- Scaling Laws & Cost–Quality Trade-offs
  - Empirically characterizing how reward improvement and generalization scale with rollout cost, latency, and throughput.
  - Quantifying when systems-level optimizations change algorithmic behavior rather than only reducing runtime.
  - Identifying regimes where inference efficiency unlocks qualitatively new learning capabilities.
- Evaluation & Measurement
  - Designing rigorous benchmarks and diagnostics for post-training and RL efficiency.
  - Studying failure modes in long-horizon training and how system constraints shape outcomes.
Preferred Qualifications
- Prior research experience with foundation models or efficient machine learning
- Publications at leading ML and NLP conferences (such as NeurIPS, ICML, ICLR, ACL, or EMNLP)
- Understanding of model optimization techniques and hardware acceleration approaches
- Contributions to open-source machine learning projects
Application Process
Please submit your application with:
- Resume/CV
- A cover letter that includes your preferred research areas, academic transcript (unofficial is acceptable), and links to relevant projects or publications
Internship Program Details
Our summer internship program spans 12 weeks, during which you'll have the opportunity to work with industry-leading engineers building a cloud from the ground up and, potentially, to contribute to influential open-source projects. Our internship dates are May 18 to August 7 or June 15 to September 4.
Compensation
We offer competitive compensation, housing stipends, and other benefits. The estimated US hourly rate for this role is $58–63/hr. Hourly rates are determined by location, level, and role; individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy