
AI Researcher, Core ML (Turbo)

San Francisco

About the Role

The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.

Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
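
For context on the GRPO‑style objectives mentioned above: the core idea is to replace a learned value baseline with a group‑relative one, normalizing each sampled completion's reward against the group of completions drawn for the same prompt. Below is a minimal, illustrative sketch of that advantage computation in Python; the function name and toy rewards are hypothetical stand‑ins, not code from our stack.

    import numpy as np

    def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Group-relative advantages: normalize each completion's reward by the
        mean and std of the group sampled for the same prompt (illustrative only).

        rewards: shape (num_prompts, group_size), one scalar reward per completion.
        """
        mean = rewards.mean(axis=1, keepdims=True)
        std = rewards.std(axis=1, keepdims=True)
        return (rewards - mean) / (std + eps)

    # Toy example: 2 prompts, 4 sampled completions each.
    rewards = np.array([[1.0, 0.0, 0.5, 0.5],
                        [0.2, 0.8, 0.1, 0.9]])
    print(grpo_advantages(rewards))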

You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas, plus the appetite to collaborate across the stack (and grow toward more full‑stack ownership over time), is ideal.

Requirements

We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.

You might be a good fit if you:

  • Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
    • Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
    • RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
    • Model architecture design for Transformers or other large neural nets.
    • Distributed systems / high‑performance computing for ML.
  • Are comfortable working from algorithms to engines:
    • Strong coding ability in Python.
    • Experience profiling and optimizing performance across GPU, networking, and memory layers.
    • Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
  • Have a solid research foundation in your area(s) of depth:
    • Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
    • Can read new RL / post‑training papers, understand their implications for the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
  • Operate well as a full‑stack problem solver:
    • You naturally ask: “Where in the stack is this really bottlenecked?”
    • You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.

Minimum qualifications

  • 3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
  • Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
  • Demonstrated experience owning complex technical projects end‑to‑end.

If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.

Responsibilities

  • Advance inference efficiency end‑to‑end
    • Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
    • Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
    • Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
  • Unify inference with RL / post‑training
    • Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
    • Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper (a minimal sketch of this async pattern appears after this list).
    • Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
    • Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
    • Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
  • Own critical systems at production scale
    • Profile, debug, and optimize inference and post‑training services under real production workloads.
    • Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
    • Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
  • Provide technical leadership (Staff level)
    • Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
    • Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
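
To make the async‑RL point above concrete, here is a minimal, illustrative Python sketch of an inference‑aware training loop in which rollout collection overlaps with policy updates. Every name in it (engine.generate, trainer.step, prompts.sample, engine.load_weights) is a hypothetical stand‑in rather than an API from our stack; in a real system, the serving engine behind the generate call would use techniques such as speculative decoding to make rollouts cheaper.

    import queue
    import threading

    # Illustrative async-RL sketch only: `engine`, `trainer`, and `prompts` are
    # hypothetical stand-ins for an inference-engine client, a training engine,
    # and a prompt sampler.

    def rollout_worker(engine, prompts, out_q, stop):
        """Continuously collect rollouts from the policy currently served by the
        inference engine, which may lag the trainer by a few updates."""
        while not stop.is_set():
            batch = prompts.sample()              # sample a batch of prompts
            completions = engine.generate(batch)  # served generation (e.g. speculative decoding)
            out_q.put((batch, completions))

    def train_loop(engine, trainer, prompts, max_steps=1000, max_ahead=2):
        """Overlap rollout generation with optimization; the bounded queue limits
        how far generation can run ahead of the trainer."""
        out_q = queue.Queue(maxsize=max_ahead)
        stop = threading.Event()
        worker = threading.Thread(
            target=rollout_worker, args=(engine, prompts, out_q, stop), daemon=True
        )
        worker.start()
        for step in range(max_steps):
            batch, completions = out_q.get()          # rollouts from a recent policy
            trainer.step(batch, completions)          # e.g. a GRPO-style update
            if step % 10 == 0:
                engine.load_weights(trainer.state())  # periodically refresh served weights
        stop.set()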

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

 
