SDE III - GPU Engineer
Glance AI is an AI commerce platform shaping the next wave of e-commerce with inspiration-led shopping that is less about searching for what you want and more about discovering who you could be. Operating in 140 countries, Glance AI transforms every screen into a stage for instant, personal, and joyful discovery, where inspiration becomes something you can explore, feel, and shop in the moment.
Its proprietary models, seamlessly integrated with Google’s most advanced AI platforms, Gemini and Imagen on Vertex AI, deliver hyper-realistic, deeply personal shopping experiences across fashion, beauty, travel, accessories, home décor, pets, and more. With an open architecture designed for effortless adoption across hardware and software ecosystems, Glance AI is building a platform that can become a staple in everyday consumer technology.
Glance AI partners with the world’s leading smartphone makers, connected TV manufacturers, telecom providers, and global brands, meeting people where they are: on mobile, smart TVs, and brand websites. Part of the InMobi Group, a global technology and advertising leader reaching over 2 billion devices and serving more than 30,000 enterprise brands worldwide, Glance AI is backed by Google, Jio Platforms, and Mithril Capital.
About the Role
We are looking for a Senior Software Engineer (SDE III) who will build, profile, and optimize GPU workloads powering next-generation generative AI experiences — from Stable Diffusion image generation to transformer-based multimodal models.
You’ll work closely with research and infrastructure teams to make model inference faster, more cost-efficient, and production-ready.
This role is ideal for engineers passionate about pushing GPUs to their limits, writing high-performance kernels, and turning cutting-edge research into scalable systems.
Key Responsibilities
- Develop, optimize, and maintain GPU kernels (CUDA, Triton, ROCm) for diffusion, attention, and convolution operators.
- Profile end-to-end inference pipelines (data movement, kernel scheduling, memory transfers) to identify and resolve bottlenecks.
- Apply techniques like operator fusion, tiling, caching, and mixed-precision compute to maximize GPU throughput.
- Collaborate with researchers to productionize experimental layers or model architectures.
- Build benchmarking tools and micro-tests for latency, memory, and throughput regressions.
- Integrate kernel improvements into serving stacks, ensuring reliability and repeatable performance.
- Work with platform teams to tune runtime configurations and job scheduling to improve GPU utilization.
Required Qualifications
- 4+ years of experience in systems or ML engineering, with 2+ years working on GPU or accelerator optimization.
- Strong hands-on CUDA programming skills, including memory hierarchies, warps, threads, and shared memory.
- Familiarity with profiling tools (Nsight, nvprof, CUPTI) and performance analysis.
- Working knowledge of PyTorch, JAX, or TensorFlow internals.
- Proficiency in C++ and Python.
- Experience with mixed-precision compute (FP16/BF16) or quantization.
- Deep curiosity about system bottlenecks and numerical correctness.
Preferred Qualifications
- Experience building fused operators or integrating custom kernels with PyTorch extensions.
- Understanding of NCCL / distributed inference frameworks.
- Contributions to open-source GPU or compiler projects (Triton, TVM, XLA, TensorRT).
- Familiarity with multi-GPU / multi-node training and inference setups.
"Glance collects and processes personal data such as your name, contact details, resume and other information that may contain personal data for the purpose of processing your application. Glance utilizes Greenhouse, a third-party platform. Please review Greenhouse's Privacy Policy to understand how the data collected from you is processed and managed. By clicking on 'Submit Application', you acknowledge and agree to the above privacy terms. Should you have any privacy concerns, you may contact us through the details mentioned in your application confirmation email."