Member of Technical Staff - Pretraining / Inference Optimization
What if the difference between a research breakthrough and something people can actually use is squeezing 10x more performance out of the same hardware?
We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1—models with 400M+ downloads. But here's the reality: frontier models are computationally expensive. Training runs that take weeks could take days with better optimization. Inference that takes seconds could be near-instantaneous. The gap between theoretical performance and what we're achieving is your opportunity. Your job is to push our models closer to the physical limits of GPUs.
What You'll Pioneer
You'll optimize training and inference for models at the cutting edge of what's possible—not by applying standard techniques from documentation, but by profiling deeply, understanding bottlenecks at the hardware level, and writing custom kernels when existing solutions aren't fast enough. This is low-level optimization work where every percentage point of improvement compounds across billions of operations.
You'll be the person who:
- Finds ideal training strategies (parallelism approaches, precision trade-offs) for a variety of model sizes and compute loads—because one-size-fits-all doesn't work at frontier scale
- Profiles, debugs, and optimizes single- and multi-GPU operations using tools like Nsight and stack trace viewers to understand what's actually happening at the hardware level (a first-pass profiling sketch follows this list)
- Reasons about the speed-quality trade-offs of quantization for model inference—knowing when reduced precision helps and when it hurts
- Develops and improves low-level kernel optimizations for state-of-the-art inference and training, writing custom implementations when off-the-shelf solutions leave performance on the table
- Innovates new ideas that bring us closer to the theoretical limits of GPU performance—exploring techniques that haven't been documented yet
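To make the profiling point concrete, here is a minimal sketch of the kind of first-pass measurement this work starts from, using torch.profiler to rank kernels by GPU time before dropping down to Nsight on the hottest ones. The model and shapes are placeholders, not our actual workloads:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model and shapes; a real run would profile an actual training or
# inference step, not a toy encoder layer.
model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True).cuda().half()
x = torch.randn(8, 512, 1024, device="cuda", dtype=torch.float16)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(x)
    torch.cuda.synchronize()

# Rank ops by GPU time to see where the wall-clock actually goes before
# reaching for Nsight Systems/Compute on the hottest kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```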
Questions We're Wrestling With
- What's the optimal parallelism strategy for training transformer models at different scales, and how does it change with model architecture?
- Where are we memory-bound versus compute-bound, and what optimizations matter for each? (A back-of-the-envelope roofline sketch follows this list.)
- How do you quantize diffusion models for inference without degrading generation quality?
- Which attention algorithms work best for our specific model architectures and sequence lengths?
- When should we write custom CUDA versus Triton kernels versus using existing implementations?
- How do we ensure kernel correctness while dealing with floating point errors that compound across billions of operations?
- What's the gap between our current performance and the theoretical limit of the hardware—and what's preventing us from closing it?
These aren't abstract questions. Their answers determine whether training takes weeks or days, and whether inference feels interactive or frustrating.
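As a flavor of the memory-bound versus compute-bound question, here is a back-of-the-envelope roofline check for a GEMM. The hardware numbers are illustrative (roughly an H100-class part in BF16), not a statement about our fleet:

```python
# Hardware numbers are illustrative (roughly an H100-class part in BF16);
# substitute the peak numbers for whatever you're actually running on.
PEAK_FLOPS = 989e12        # dense BF16 tensor-core throughput, FLOP/s (approx.)
PEAK_BW = 3.35e12          # HBM bandwidth, bytes/s (approx.)
RIDGE = PEAK_FLOPS / PEAK_BW   # FLOP/byte above which a kernel stops being bandwidth-limited

def gemm_arithmetic_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """FLOPs per byte of traffic for C[m,n] = A[m,k] @ B[k,n], ignoring caching."""
    flops = 2 * m * n * k
    traffic = bytes_per_elem * (m * k + k * n + m * n)  # read A and B, write C
    return flops / traffic

for m, n, k in [(16, 4096, 4096), (4096, 4096, 4096)]:
    ai = gemm_arithmetic_intensity(m, n, k)
    regime = "compute-bound" if ai > RIDGE else "memory-bound"
    print(f"{m}x{n}x{k}: {ai:,.0f} FLOP/byte vs ridge {RIDGE:,.0f} -> {regime}")
```

The skinny GEMM lands far below the ridge point, so extra FLOPs can't help it; only reducing or overlapping memory traffic will. The square GEMM is the opposite case.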
Who Thrives Here
You understand GPUs at a deep level—memory hierarchy, computation capabilities, the gap between theoretical and achieved performance. You've written custom kernels and debugged why they're slower than expected. You know the difference between optimizations that work in microbenchmarks and optimizations that matter for real workloads. You get excited by profiler outputs and disappointed by wasted compute cycles.
You likely have:
- Familiarity with the latest and most effective techniques in optimizing inference and training workloads—not from reading papers, but from implementing them
- Experience optimizing for both memory-bound and compute-bound operations and understanding when each constraint matters
- Deep understanding of GPU memory hierarchy and computation capabilities—knowing what the hardware can do theoretically and what prevents us from achieving it
- Expertise with efficient attention algorithms and their performance characteristics at different scales
- Experience implementing both forward and backward Triton kernels and ensuring their correctness while accounting for floating point error (a minimal correctness-check sketch follows these lists)
- Proficiency with tools like pybind11 to integrate custom kernels into PyTorch
We'd be especially excited if you:
- Have experience with diffusion and autoregressive models and understand their specific optimization challenges
- Bring deep experience in low-level CUDA kernel optimizations beyond what Triton provides
- Have shipped optimizations that materially improved training or inference speed for production models
- Understand the tradeoffs between development time and performance gains
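For the Triton and correctness points above, here is a minimal sketch of the pattern we mean: a trivial fused elementwise kernel (forward only) validated against an eager PyTorch reference, with tolerances loose enough for fp16 rounding but tight enough to catch real indexing bugs. Names and tolerances are illustrative:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def scale_add_kernel(x_ptr, y_ptr, out_ptr, scale, n_elements, BLOCK_SIZE: tl.constexpr):
    # One program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x * scale + y, mask=mask)

def scale_add(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    scale_add_kernel[grid](x, y, out, scale, n, BLOCK_SIZE=1024)
    return out

# Validate against the eager PyTorch reference with fp16-appropriate tolerances.
x = torch.randn(1 << 20, device="cuda", dtype=torch.float16)
y = torch.randn_like(x)
torch.testing.assert_close(scale_add(x, y, 0.5), x * 0.5 + y, rtol=1e-3, atol=1e-3)
```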
What We're Building Toward
We're not just optimizing models—we're pushing toward the physical limits of what's possible with current hardware. Every optimization you ship makes training faster and cheaper. Every kernel you write makes inference more responsive. Every technique you develop becomes part of how frontier models get built. If that sounds more compelling than applying existing optimizations, we should talk.
Base Annual Salary: $180,000–$300,000 USD
We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.