Inference Software Engineer
About Etched
Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.
Key responsibilities
- Contribute to the architecture and design of the Sohu host software stack
- Implement high-performance, modular code across the complete Etched software stack, written in a mix of Rust, C++, and Python
- Interface with the firmware and driver teams to deliver the highest-performance HW/SW stack
- Work with AI model researchers and product-facing teams to build out the Etched serving front-end
Representative projects
- Build scheduling logic for continuous batching and real-time inference
- Implement inference-time acceleration techniques such as speculative decoding, tree search, KV cache sharing, etc.
- Implement distributed networking primitives for efficient multi-server inference
You may be a good fit if you have
- Experience with C++ and Python
- Familiarity with transformer model architectures and inference serving stacks (vLLM, SGLang, etc.) or experience working in distributed inference/training environments
- Experience working cross-functionally in large software and hardware organizations
Strong candidates may also have
- Experience with Rust
- Familiarity with GPU kernels, the CUDA compilation stack and related tools, or other hardware accelerators
- Understanding of distributed systems, networking, and parallel programming
Benefits
- Full medical, dental, and vision packages, with 100% of premiums covered
- Housing subsidy of $2,000/month for those living within walking distance of the office
- Daily lunch and dinner in our office
- Relocation support for those moving to Cupertino
How we’re different
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.