Member of Technical Staff -- Cluster / Platform
About the Role
RadixArk is looking for a Member of Technical Staff (Cluster / Platform) to architect and scale the core compute platform that powers frontier-level AI training and inference.
You will design and operate highly reliable, high-performance GPU/TPU clusters, build next-generation scheduling and resource management systems, and push the limits of large-scale distributed infrastructure for AI workloads.
This role focuses on deep systems engineering across cluster architecture, networking, scheduling, and performance optimization. Your work will directly impact how efficiently frontier AI models are trained and served.
Requirements
- 5+ years of experience in distributed systems, infrastructure, or large-scale compute platforms
- Strong background in distributed systems design and systems architecture
- Deep experience with cluster management systems (Kubernetes, Slurm, Ray, or custom schedulers)
- Hands-on experience with GPU/TPU infrastructure in production environments
- Strong Linux systems and networking fundamentals
- Proficiency in Go, Rust, C++, or Python for production systems
- Experience debugging complex multi-layer issues across hardware, OS, networking, and distributed services
- Proven ability to design reliable, scalable systems in production
Strong Plus:
- Experience with large-scale ML/AI workloads
- Familiarity with RDMA, InfiniBand, or high-performance networking
- Experience operating clusters at 1000+ GPU scale
- Background in HPC or performance-critical systems
- Open-source contributions in systems or infrastructure
Responsibilities
- Architect and scale large AI compute clusters for training and inference
- Design cluster management, scheduling, and resource allocation systems
- Optimize performance, utilization, and reliability of GPU/TPU clusters
- Improve fault tolerance and system resilience at scale
- Drive observability, monitoring, and performance profiling for cluster infrastructure
- Collaborate with ML and systems engineers to support frontier AI workloads
- Lead capacity planning and infrastructure scaling strategies
- Build internal platforms and tooling to improve developer productivity
- Document architecture, operational practices, and reliability strategies
- Contribute to long-term platform vision and technical direction
About RadixArk
RadixArk is an infrastructure-first company built by engineers who've shipped production AI systems, created SGLang (20K+ GitHub stars, the fastest open LLM serving engine), and developed Miles (our large-scale RL framework).
We're on a mission to democratize frontier-level AI infrastructure by building world-class open systems for inference and training.
Our team has optimized kernels serving billions of tokens daily, designed distributed training systems coordinating 10,000+ GPUs, and contributed to infrastructure that powers leading AI companies and research labs.
We're backed by well-known infrastructure investors and partner with Google, AWS, and frontier AI labs.
Join us in building infrastructure that gives real leverage back to the AI community.
Compensation
We offer competitive compensation with equity, comprehensive health benefits, and flexible work arrangements. Compensation is determined by location, level, and experience.
Equal Opportunity
RadixArk is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or any other legally protected characteristic.
Apply for this job