Member of Technical Staff - Pretraining / Inference Optimization

San Francisco (USA)

What if the difference between a research breakthrough and something people can actually use is squeezing 10x more performance out of the same hardware?

We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1—models with 400M+ downloads. But here's the reality: frontier models are computationally expensive. Training runs that take weeks could take days with better optimization. Inference that takes seconds could be near-instantaneous. The gap between theoretical performance and what we're achieving is your opportunity. Your job is to push our models closer to the physical limits of GPUs.

What You'll Pioneer

You'll optimize training and inference for models at the cutting edge of what's possible—not by applying standard techniques from documentation, but by profiling deeply, understanding bottlenecks at the hardware level, and writing custom kernels when existing solutions aren't fast enough. This is low-level optimization work where every percentage point of improvement compounds across billions of operations.
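
To make that concrete: much of this work starts with back-of-envelope roofline reasoning about whether an operation is memory-bound or compute-bound. A minimal sketch is below; the peak throughput and bandwidth figures are illustrative assumptions, not measured numbers for any particular GPU.

    # Hypothetical roofline check for a bf16 GEMM: compare its arithmetic
    # intensity (FLOPs per byte moved) against an assumed machine balance.
    PEAK_FLOPS = 990e12    # assumed bf16 tensor-core peak, FLOP/s (illustrative)
    PEAK_BW = 3.35e12      # assumed HBM bandwidth, bytes/s (illustrative)
    MACHINE_BALANCE = PEAK_FLOPS / PEAK_BW   # FLOPs per byte needed to saturate compute

    def gemm_intensity(m, k, n, bytes_per_elem=2):
        flops = 2 * m * k * n                                    # multiply-accumulates
        bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A, B; write C
        return flops / bytes_moved

    for shape in [(4096, 4096, 4096), (1, 4096, 4096)]:   # large GEMM vs. skinny, GEMV-like case
        ai = gemm_intensity(*shape)
        verdict = "compute-bound" if ai > MACHINE_BALANCE else "memory-bound"
        print(f"{shape}: {ai:.1f} FLOP/B vs. balance {MACHINE_BALANCE:.1f} -> {verdict}")

Profiler counters (Nsight Compute, torch.profiler) replace the assumed peaks once real kernels are on the table; the sketch only frames which optimization lever matters.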

You'll be the person who:

  • Finds ideal training strategies (parallelism approaches, precision trade-offs) for a variety of model sizes and compute loads—because one-size-fits-all doesn't work at frontier scale
  • Profiles, debugs, and optimizes single and multi-GPU operations using tools like Nsight and stack trace viewers to understand what's actually happening at the hardware level
  • Reasons about the speed-quality trade-offs of quantization for model inference—knowing when reduced precision helps and when it hurts
  • Develops and improves low-level kernel optimizations for state-of-the-art inference and training, writing custom implementations when off-the-shelf solutions leave performance on the table (see the sketch after this list)
  • Innovates new ideas that bring us closer to the theoretical limits of GPU performance—exploring techniques that haven't been documented yet
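
As a flavor of the custom-kernel point above, here is a minimal Triton sketch that fuses a bias add and a ReLU into one memory pass instead of two. The block size, shapes, and the choice of fusion are illustrative assumptions, not a description of our production kernels.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def bias_relu_kernel(x_ptr, bias_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        pid = tl.program_id(axis=0)
        offs = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n_elements
        x = tl.load(x_ptr + offs, mask=mask)
        b = tl.load(bias_ptr + offs, mask=mask)   # bias assumed pre-broadcast to x's shape for simplicity
        tl.store(out_ptr + offs, tl.maximum(x + b, 0.0), mask=mask)   # one fused read/compute/write pass

    def bias_relu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        assert x.is_contiguous() and bias.is_contiguous() and x.shape == bias.shape
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)
        bias_relu_kernel[grid](x, bias, out, n, BLOCK=1024)
        return out

Whether a fusion like this is worth hand-writing, versus trusting torch.compile, is exactly the development-time-versus-speedup judgment the role calls for.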

Questions We're Wrestling With

  • What's the optimal parallelism strategy for training transformer models at different scales, and how does it change with model architecture?
  • Where are we memory-bound versus compute-bound, and what optimizations matter for each?
  • How do you quantize diffusion models for inference without degrading generation quality? (see the sketch after this list)
  • Which attention algorithms work best for our specific model architectures and sequence lengths?
  • When should we write custom CUDA versus Triton kernels versus using existing implementations?
  • How do we ensure kernel correctness while dealing with floating point errors that compound across billions of operations?
  • What's the gap between our current performance and the theoretical limit of the hardware—and what's preventing us from closing it?
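
On the quantization question, the reasoning usually starts with something as simple as measuring round-trip error for a candidate scheme before worrying about kernels. The sketch below assumes symmetric per-tensor int8 weight quantization purely for illustration.

    import torch

    def quantize_int8(w: torch.Tensor):
        scale = w.abs().max() / 127.0                              # symmetric per-tensor scale
        q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
        return q, scale

    def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        return q.float() * scale

    w = torch.randn(4096, 4096)                                    # stand-in for a projection weight
    q, scale = quantize_int8(w)
    err = (dequantize(q, scale) - w).abs()
    print(f"max abs error {err.max():.4e}, mean abs error {err.mean():.4e}")

Whether that error is acceptable is model- and layer-dependent, which is why the question is framed around generation quality rather than a single numeric threshold.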

These aren't abstract questions—they're optimizations that determine whether training takes weeks or days, whether inference is interactive or frustrating.

Who Thrives Here

You understand GPUs at a deep level—memory hierarchy, computation capabilities, the gap between theoretical and achieved performance. You've written custom kernels and debugged why they're slower than expected. You know the difference between optimizations that work in microbenchmarks and optimizations that matter for real workloads. You get excited by profiler outputs and disappointed by wasted compute cycles.

You likely have:

  • Familiarity with the latest and most effective techniques in optimizing inference and training workloads—not from reading papers, but from implementing them
  • Experience optimizing for both memory-bound and compute-bound operations and understanding when each constraint matters
  • Deep understanding of GPU memory hierarchy and computation capabilities—knowing what the hardware can do theoretically and what prevents us from achieving it
  • Expertise with efficient attention algorithms and their performance characteristics at different scales
  • Experience implementing both forward and backward Triton kernels and ensuring their correctness in the presence of floating point error (see the sketch after this list)
  • Proficiency with tools like pybind11 for integrating custom-written kernels into PyTorch
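
For the forward/backward point, the sketch below shows the usual wiring of a hand-written kernel pair into PyTorch autograd, with a float64 gradcheck to separate logic errors from rounding. The plain torch ops stand in for custom Triton or CUDA kernels; the structure is what matters here, not the math.

    import torch

    class FusedSiLU(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x * torch.sigmoid(x)                    # would call the custom forward kernel

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            s = torch.sigmoid(x)
            return grad_out * (s + x * s * (1 - s))        # would call the custom backward kernel

    # float64 gradcheck isolates logic errors from rounding; a separate low-precision
    # comparison against a reference, with loosened tolerances, catches the float
    # error that compounds across billions of operations.
    x = torch.randn(8, 8, dtype=torch.float64, requires_grad=True)
    assert torch.autograd.gradcheck(FusedSiLU.apply, (x,))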

We'd be especially excited if you:

  • Have experience with diffusion and autoregressive models and understand their specific optimization challenges
  • Bring deep experience in low-level CUDA kernel optimizations beyond what Triton provides
  • Have shipped optimizations that materially improved training or inference speed for production models
  • Understand the tradeoffs between development time and performance gains

What We're Building Toward

We're not just optimizing models—we're pushing toward the physical limits of what's possible with current hardware. Every optimization you ship makes training faster and cheaper. Every kernel you write makes inference more responsive. Every technique you develop becomes part of how frontier models get built. If that sounds more compelling than applying existing optimizations, we should talk.

Base Annual Salary: $180,000–$300,000 USD


We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
