Software Engineer, AI Training Infrastructure
About Us:
At Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers the generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked as having the fastest LLM inference, and our innovative research projects, such as our own function-calling and multimodal models, are gaining strong traction. Fireworks is funded by top investors, including Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.
The Role:
As a Software Engineer on our AI Training Infrastructure team, you'll design, build, and optimize the infrastructure that powers our large-scale model training operations. You'll collaborate with AI researchers and engineers to create robust training pipelines, optimize distributed training workloads, and ensure reliable, high-performance model development.
Key Responsibilities:
- Design and implement scalable infrastructure for large-scale model training workloads
- Develop and maintain distributed training pipelines for LLMs and multimodal models
- Optimize training performance across multiple GPUs, nodes, and data centers
- Implement monitoring, logging, and debugging tools for training operations
- Architect and maintain data storage solutions for large-scale training datasets
- Automate infrastructure provisioning, scaling, and orchestration for model training
- Collaborate with researchers to implement and optimize training methodologies
- Analyze and improve efficiency, scalability, and cost-effectiveness of training systems
- Troubleshoot complex performance issues in distributed training environments
Minimum Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or related field, or equivalent practical experience
- 3+ years of experience with distributed systems and ML infrastructure
- Experience with PyTorch
- Proficiency in cloud platforms (AWS, GCP, Azure)
- Experience with containerization and orchestration (Docker, Kubernetes)
- Knowledge of distributed training techniques (data parallelism, model parallelism, FSDP)
Preferred Qualifications:
- Master's or PhD in Computer Science or related field
- Experience training large language models or multimodal AI systems
- Experience with ML workflow orchestration tools
- Background in optimizing high-performance distributed computing systems
- Familiarity with ML DevOps practices
- Contributions to open-source ML infrastructure or related projects
Compensation is determined by various factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range for this role is a guideline and may be modified.
Redwood City Pay Range
$175,000 - $190,000 USD
New York Pay Range
$175,000 - $190,000 USD
Why Fireworks AI?
- Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
- Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
- Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
- Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.