Member of Technical Staff — Training
About the Role
RadixArk is seeking a Member of Technical Staff — Training to build and scale the systems that train frontier AI models.
You will work on large-scale distributed training infrastructure for LLMs and generative models, pushing the limits of scale, efficiency, and reliability across thousands of GPUs. This role sits at the intersection of ML, systems, and performance engineering.
Your work will directly shape how next-generation AI models are trained and scaled.
This is a deeply technical, high-impact role for engineers who enjoy solving hard systems problems at extreme scale.
Requirements
- 5+ years of experience in ML systems, distributed systems, or large-scale training infrastructure
- Strong experience with large-scale distributed training (data, tensor, and pipeline parallelism)
- Deep understanding of GPU/TPU architecture and performance trade-offs
- Strong knowledge of PyTorch or JAX distributed training stacks
- Experience debugging performance and stability issues in large training jobs
- Solid distributed systems fundamentals (networking, consensus, fault tolerance)
- Proficiency in Python plus a systems language (C++, Go, or Rust)
- Experience operating production ML systems at scale
Strong Plus
- Experience training multi-billion-parameter models
- Familiarity with DeepSpeed, Megatron-LM, FSDP, or custom training stacks
- Experience with RDMA, InfiniBand, or high-speed interconnects
- Background in HPC or performance-critical computing
- Contributions to ML systems open-source projects
- Experience with checkpointing, fault recovery, and elastic training
- Experience optimizing training cost efficiency at scale
Responsibilities
- Design and operate large-scale distributed training systems
- Optimize throughput, scalability, and hardware efficiency
- Improve reliability and fault tolerance for long-running training jobs
- Develop training frameworks and infrastructure tooling
- Collaborate with model researchers to support frontier experiments
- Debug and resolve cross-layer performance bottlenecks
- Build observability systems for training performance and reliability
- Drive capacity planning and cluster utilization strategies
- Contribute to long-term training infrastructure architecture
About RadixArk
RadixArk is an infrastructure-first company built by engineers who've shipped production AI systems, created SGLang (20K+ GitHub stars, the fastest open LLM serving engine), and developed Miles (our large-scale RL framework).
We're on a mission to democratize frontier-level AI infrastructure by building world-class open systems for inference and training.
Our team has optimized kernels serving billions of tokens daily, designed distributed training systems coordinating 10,000+ GPUs, and contributed to infrastructure that powers leading AI companies and research labs.
We're backed by well-known infrastructure investors and partner with Nvidia, Google, AWS, and frontier AI labs.
Join us in building infrastructure that gives real leverage back to the AI community.
Compensation
We offer competitive compensation with meaningful equity, comprehensive benefits, and flexible work arrangements. Compensation depends on location, experience, and level.
Equal Opportunity
RadixArk is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or any other legally protected characteristic.