
Forward Deployed Product Manager
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
Why Cerebras?
Here at Cerebras, we have built the world’s first wafer-scale compute platform and software stack, purpose-designed to accelerate generative AI 10-20x beyond what is possible on legacy processors today. AI developers are limited today by constant tradeoffs between model quality, speed, and cost, and Cerebras’ mission is to remove these limitations to unlock AI creativity and potential.
- Unmatched speed. Our third-generation Wafer-Scale Engine (WSE-3) delivers sub-millisecond inference latencies and training throughput that eclipses GPU clusters by over 10x. Think instant code generation, instant design creation, and agents that interact seamlessly and responsively with their users and environment.
- Full-stack innovation. From custom silicon to compilers, model research, and turnkey cloud inference, we are innovating and integrating at every layer so customers can focus on breakthroughs, not bottlenecks.
- Real-world impact. Cerebras customers are transforming industries across healthcare, energy, science, government, startup ecosystems, and more. We’re proud to be serving customers spanning the Fortune 500, government labs, and AI-native unicorns.
- Backed to win. Cerebras is supported by top investors, like Benchmark, Altimeter, Eclipse, and Coatue.
- Fearless and fun culture. We’re a close-knit, creative team that tackles all challenges with optimism and collaboration. We’ve already productized the world’s largest chip, over 50x larger than any other. How hard can the next problem be? :)
About The Role
As a Forward Deployed Product Manager at Cerebras, you are the tip of the spear for our company. You’ll embed with our most strategic customers, from AI-native startups shipping 0-to-1 products to Fortune 500 enterprises transforming their industries, to translate and guide their ambitions into blazing-fast, production-ready AI solutions.
Think of yourself as part product leader, part technical expert, and part GTM strategist:
- Own the outcome – From first whiteboard session to scaled deployment, you are directly accountable for customer success, adoption, and expansion.
- Design for speed – Craft PoCs that showcase Cerebras’ latency super-powers, advise on model selection / fine-tuning, and benchmark end-to-end performance.
- Navigate complexities – Pitch new ideas, align internal and customer stakeholders, unblock hurdles, and convert customer interest into long-term, thriving partnerships.
- Shape the roadmap – Distill customer insights into structured product requirements and feedback, influencing future software features and chip and cluster designs.
Successful candidates will be passionate about creative problem solving and idea generation, learning and embedding into new domains, building relationships, and delighting customers.
You’ll have the opportunity to learn about and enable some of the most impactful AI products in the world, with industry-leading organizations across each vertical. You will get to work closely with a tight-knit product team, in a fast-moving but supportive environment. Your scope and career here will be driven by your passion, ability, and impact – not by your seniority or prior experience.
Key Responsibilities
You will:
- Be the product leader on our most critical lighthouse accounts, each pushing the limits of what’s possible with GenAI.
- Engage directly with companies from AI Natives at the cutting edge to large enterprises transforming their industries, to deeply understand their needs, goals, and requirements.
- Co-architect solutions – Partner with Solutions Architects, Account Managers, and our Engineering and Product teams to design tailored solutions that leverage our 10x speed advantage to transform customer applications.
- Directly advise on customers’ long-term AI strategies.
- Become a go-to-market ninja. You will co-own the end-to-end customer journey, working across Sales, Solutions Architects, Marketing, Engineering, and Product teams to convert interest into long-term usage and expansion. As part of this, you will also continuously help improve and optimize our processes.
- Identify new collaboration opportunities and use cases within accounts to expand Cerebras’ partnership with them.
- Drive the product roadmap, working closely with engineering, ML, and other product teams across the company, bringing your deep understanding of customer requirements to future feature development.
Skills & Qualifications
Minimum Requirements
- Deep passion for creative problem solving and customer success.
- Strong technical background (CS/EE background, or prior experience as a SWE), and familiarity with LLMs, inference needs, agents, etc.
- 5+ years of experience as a product manager, currently at or above the level of Senior PM, ideally on a developer-facing product.
- Excellent ability to communicate with customers and navigate complex, high-stakes scenarios.
- Ability to thrive in a fast-paced, dynamic environment.
- An entrepreneurial sense of ownership of overall team and product success, and the ability to make things happen around you. A bias towards getting things done, owning the solution, and driving problems to resolution.
Preferred Requirements
- Experience with LLM serving stacks (vLLM, TensorRT-LLM, TGI), agent frameworks, etc.
- Interest in developer platforms and tooling.
- MBA or equivalent professional experience.
Location
- Hybrid at our Sunnyvale, CA or Toronto, ON, Canada offices preferred.
- Remote possible for candidates willing to travel 1-2x per quarter.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025.
Apply today and become part of the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.