
Senior Research Engineer - Inference ML
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role
As a Senior Research Engineer on the Inference ML team at Cerebras Systems, you will adapt today's most advanced language and vision models to run efficiently on our flagship Cerebras architecture. You'll work alongside ML researchers and engineers to design, prototype, validate, and optimize models, gaining end-to-end exposure to cutting-edge inference research on the world's fastest AI accelerator.
You will focus on pushing the frontier of speculative decoding, large-model pruning and compression, sparse attention, and sparsity-driven techniques to deliver low-latency, high-throughput inference at scale.
Responsibilities
- Design, implement, and optimize state-of-the-art transformer architectures for NLP and computer vision on Cerebras hardware.
- Research and prototype novel inference algorithms and model architectures that exploit the unique capabilities of Cerebras hardware, with emphasis on speculative decoding, pruning/compression, sparse attention, and sparsity.
- Train models to convergence, perform hyperparameter sweeps, and analyze results to inform next steps.
- Bring up new models on the Cerebras system, validate functional correctness, and troubleshoot any integration issues.
- Profile and optimize model code using Cerebras tools to maximize throughput and minimize latency.
- Develop diagnostic tooling or scripts to surface performance bottlenecks and guide optimization strategies for inference workloads.
- Collaborate across teams, including software, hardware, and product, to drive projects from inception through delivery.
Minimum Qualifications
- One of the following education and experience combinations:
  - Bachelor's degree in Computer Science, Software Engineering, Computer Engineering, Electrical Engineering, or a related technical field AND 7+ years of ML software development experience, OR
  - Master's degree in Computer Science or a related technical field AND 4+ years of software development experience, OR
  - PhD in Computer Science or a related technical field AND 2+ years of relevant research or industry experience, OR
  - Equivalent practical experience.
- 4+ years of experience testing, maintaining, or launching software products, including 2+ years of experience with software design and architecture.
- 3+ years of experience in software development focused on machine learning (e.g., deep learning, large language models, or computer vision).
- Strong programming skills in Python and/or C++.
- Experience with Generative AI and Machine Learning systems.
Preferred Qualifications
- Master’s degree or PhD in Computer Science, Computer Engineering, or a related technical field.
- Experience independently driving complex ML or inference projects from prototype to production-quality implementations.
- Hands-on experience with relevant ML frameworks such as PyTorch, Transformers, vLLM, or SGLang.
- Experience with large language models, mixture-of-experts models, multimodal learning, or AI agents.
- Experience with speculative decoding, neural network pruning and compression, sparse attention, quantization, sparsity, post-training techniques, and inference-focused evaluations.
- Familiarity with large-scale model training and deployment, including performance and cost trade-offs in production systems.
- Experience with Triton or CUDA.
Required Skills & Attributes
- Proficiency with at least one major ML framework (PyTorch, Transformers, vLLM, or SGLang).
- Deep understanding of transformer-based models in language and/or vision domains, with demonstrated experience implementing and optimizing them.
- Proven ability to implement custom layers, operators, and backpropagation logic.
- Strong foundation in performance optimization on specialized hardware (e.g., GPUs, TPUs, or HPC interconnects).
- Deep understanding of modern ML architectures and strong intuition for optimizing their performance, particularly for inference workloads using sparse attention, pruning/compression, and speculative decoding.
- Track record of owning problems end-to-end and autonomously acquiring whatever knowledge is needed to deliver results.
- Self-directed mindset with a demonstrated ability to identify and tackle the most impactful problems.
- Collaborative approach with humility, eagerness to help colleagues, and commitment to team success.
- Genuine passion for AI and a drive to push the limits of inference performance.
- This is a hybrid role based in Toronto, ON, Canada or Sunnyvale, CA, USA.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Work in a simple, non-corporate culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025.
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.