Member of Technical Staff — Kernel / Compiler / Communication
About the Role
RadixArk is seeking a Member of Technical Staff — Kernel / Compiler / Communication to push the limits of performance for frontier AI systems.
You will work at the lowest layers of the stack — kernels, runtimes, compilers, and communication libraries — to unlock maximum efficiency from modern accelerators and interconnects.
This role is critical to scaling training and inference across thousands of GPUs, where microseconds and memory bandwidth matter. Your work will directly shape the performance envelope of next-generation AI systems.
This is a deeply technical role for engineers who enjoy working close to hardware and solving performance problems that most engineers never encounter.
Requirements
- 5+ years of experience in systems, compiler, or performance engineering
- Strong expertise in CUDA or accelerator programming
- Deep understanding of GPU architecture and memory hierarchy
- Experience writing or optimizing high-performance kernels
- Strong background in compilers, runtimes, or code generation
- Experience with distributed communication libraries (NCCL, MPI, RCCL, etc.)
- Solid knowledge of networking and interconnect technologies
- Proficiency in C++ and Python
- Strong debugging and profiling skills at the system level
Strong Plus
- Experience with Triton, TVM, XLA, or MLIR
- Experience building compiler passes or IR transformations
- Familiarity with NVLink, InfiniBand, or RDMA
- Experience optimizing collective communication at scale
- Background in HPC or performance-critical systems
- Contributions to open-source kernel, compiler, or ML systems projects
- Experience scaling workloads to 1000+ GPUs
- Experience with mixed-precision or quantized kernels
Responsibilities
- Design and implement high-performance kernels for AI workloads
- Optimize compiler and runtime stacks for ML systems
- Improve communication efficiency across large GPU clusters
- Reduce latency and increase throughput for distributed workloads
- Profile and eliminate system bottlenecks across the stack
- Collaborate with training and inference teams on performance optimization
- Develop tooling for profiling and performance analysis
- Contribute to long-term architecture for performance-critical systems
- Push the limits of hardware–software co-design
About RadixArk
RadixArk is an infrastructure-first company built by engineers who've shipped production AI systems, created SGLang (20K+ GitHub stars, the fastest open LLM serving engine), and developed Miles (our large-scale RL framework).
We're on a mission to democratize frontier-level AI infrastructure by building world-class open systems for inference and training.
Our team has optimized kernels serving billions of tokens daily, designed distributed training systems coordinating 10,000+ GPUs, and contributed to infrastructure that powers leading AI companies and research labs.
We're backed by well-known infrastructure investors and partner with Nvidia, Google, AWS, and frontier AI labs.
Join us in building infrastructure that gives real leverage back to the AI community.
Compensation
We offer competitive compensation with meaningful equity, comprehensive benefits, and flexible work arrangements. Compensation depends on location, experience, and level.
Equal Opportunity
RadixArk is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or any other legally protected characteristic.