
Applied AI/ML Scientist
Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role
As an Applied AI Scientist on the FieldML team, you will be responsible for developing and customizing large language models and, more broadly, large-scale deep learning models to solve specific customer problems. You won't just advise; you will build. You will bridge the gap between state-of-the-art research and real-world applications by helping customers harness the power of the Cerebras Wafer-Scale Engine (WSE) for their AI initiatives.
We are looking for experienced AI Scientists who are passionate about the "applied" side of machine learning: those who enjoy not just reading papers, but implementing, training, and scaling models to solve complex business and scientific problems. You will work on a diverse range of projects, from training bespoke models from scratch, to fine-tuning and optimizing the latest Large Language Models (LLMs) for specific industry verticals, to designing and building components for custom agentic systems.
The ideal candidate has experience in large model training and/or post-training, a deep understanding of training dynamics and model convergence, and expertise in data curation, combined with strong communication skills.
Key Responsibilities
- Customer Use Case Discovery & Project Scoping
- Collaborate with customer stakeholders to identify the best ways to apply AI to their business problems.
- Contribute to the technical scoping of engagements, including feasibility analysis, data quality/availability/readiness assessments, and the selection of optimal model architectures.
- Define project milestones, success metrics, and rigorous evaluation benchmarks to ensure the solution delivers measurable value to the customer’s business.
- Custom SOTA Models and AI Systems Development
- Architect and execute end-to-end training recipes for custom models, tailoring model architectures and training strategies to meet customer-specific performance and accuracy requirements.
- Design and implement sophisticated adaptation strategies, including continuous pre-training on private datasets, supervised fine-tuning (SFT), and post-training alignment via RLHF or DPO.
- Take full ownership of the training pipeline, from high-performance data preprocessing and tokenization to hyperparameter tuning and loss-curve analysis.
- Navigate the nuances of model convergence on specialized hardware, performing deep-dive analysis into loss dynamics and gradient stability.
- Scale training workloads across Cerebras clusters, ensuring efficient utilization of the hardware for multi-billion parameter models.
- Build and optimize the core components of agentic systems, focusing on tool-use capabilities, long-context reasoning, and multi-step planning.
- Technical Customer Leadership
- Serve as an AI/ML subject matter expert during technical deep-dives, translating customer requirements into precise training recipes.
- Build and maintain strong customer relationships to become their go-to AI/ML expert.
- Internal Research and Engineering Collaboration
- Act as the "voice of the customer" for internal R&D and engineering teams to drive improvements in our software stack and hardware utilization.
- Partner with internal ML and product teams to prioritize support for novel model architectures in the Cerebras software stack and to develop training recipes and internal case studies.
- Distill successful customer-facing projects into internal playbooks, helping scale the FieldML team’s ability to deliver specialized models.
Skills And Qualifications
- Education: Master’s or PhD in Computer Science, Machine Learning, or related fields.
- Broad Deep Learning Expertise: Expert-level understanding of modern model architectures, including dense transformers, mixture-of-experts (MoE), multimodal, and sequence models, as well as scaling laws and training dynamics.
- Hands-on Training Experience: Proven track record of training and/or fine-tuning large models (1B+ parameters) and direct experience with the challenges of large-scale model training.
- Engineering Proficiency: Mastery of Python and PyTorch, plus experience with distributed training frameworks and large-scale distributed data processing pipelines and tools.
- Strong Interpersonal and Communication Skills: Effective in collaborative, fast-paced settings; able to work autonomously in a dynamic environment, managing multiple projects and pivoting as customer needs evolve. Able to present complex technical results to diverse audiences, from C-level executives to research scientists, and to work collaboratively to solve customers’ unique challenges.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Thrive in a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025.
Apply today and join the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.