
Staff AI Engineer, Inference & Optimization
Sonatus is a well-funded, fast-paced, and rapidly growing company whose software products and solutions help automakers build dynamic software-defined vehicles. With over four million vehicles already on the road with top global OEM brands, our vehicle and cloud software solutions are at the forefront of automotive digital transformation. The Sonatus team is a talented and diverse collection of technology and automotive specialists hailing from many of the most prominent companies in their respective industries.
The Opportunity:
We're looking for a highly skilled and experienced Staff AI Engineer with domain expertise in optimizing AI models for production Edge environments. You'll own the full lifecycle of model inference and hardware acceleration, from initial optimization to large-scale deployment. In this role, you will be a key contributor to our team, ensuring our AI solutions are not just functional but also fast, efficient, and reliable across a variety of inference hardware platforms.
Role and Responsibilities:
- Design, build, and maintain robust pipelines and runtime environments for deploying and serving machine learning models at the Edge. Ensure high availability, low latency, and efficient resource utilization for inference at scale.
- Collaborate with researchers and hardware engineers to optimize models for performance, latency, and power consumption on specific hardware, including GPUs, TPUs, NPUs, and FPGAs. This includes a strong focus on inference optimization techniques like quantization, pruning, and knowledge distillation.
- Use AI compilers and specialized software stacks (e.g., TensorRT, OpenVINO, TVM) to accelerate model execution, ensuring models are compiled and optimized for peak performance on target hardware.
- Design, build, and maintain MLOps pipelines for deploying models to various edge devices (e.g., highly integrated vehicle compute), with a specific focus on performance and efficiency constraints.
- Implement and maintain monitoring and alerting systems to track model performance, data drift, and overall model health in production.
- Work with cloud platforms and on-device environments to provision and manage the necessary infrastructure for scalable and reliable model serving.
- Proactively identify and resolve issues related to model performance, deployment failures, and data discrepancies, with a specific focus on inference bottlenecks.
- Work closely with Machine Learning Engineers, Software Engineers, and Product Managers to bring models from design to high-performance production systems.
Qualifications:
- Minimum 7 years of work experience in MLOps or a similar role with a strong focus on high-performance machine learning systems.
- Proven experience with inference optimization techniques such as quantization (INT8, FP16), pruning, and model distillation.
- Deep hands-on experience with hardware acceleration for machine learning, including GPUs, TPUs, NPUs, and their related software ecosystems.
- Strong experience with AI compilers and runtime environments like TensorRT, OpenVINO, and TVM.
- Proven experience deploying and managing ML models on edge devices (e.g., NVIDIA Jetson, Raspberry Pi, NXP, Renesas).
- Strong experience in designing and building distributed systems. Proficiency with inter-process communication protocols like gRPC, message queuing systems like MQTT, and efficient data handling techniques such as buffering and callbacks.
- Hands-on experience with popular ML frameworks such as PyTorch, TensorFlow, TFLite, and ONNX.
- Proficiency in programming languages, including Python and C++.
- Solid understanding of machine learning concepts, the ML development lifecycle, and the challenges of deploying models at scale.
- Proficiency with containerization technologies (Docker, Kubernetes) and cloud platforms (AWS, Azure).
- Expertise in CI/CD principles and tools applied to machine learning workflows.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related quantitative field.
Benefits:
Sonatus is a tight-knit team aligned around a unified vision. You can expect a strong engineering-oriented culture that focuses on building the best products and solutions for our customers. We embrace equality and diversity in all regards; respect is ingrained in everything we do. Other benefits Sonatus offers include:
- Stock option plan
- Health care plan (Medical, Dental & Vision)
- Retirement plan (401k, IRA)
- Life Insurance (Basic, Voluntary & AD&D)
- Unlimited paid time off (Vacation, Sick & Public Holidays)
- Family leave (Maternity, Paternity)
- Flexible work arrangements
- Free food & snacks in office
The posted salary range is a general guideline and represents a good faith estimate of what Sonatus ("Company") could reasonably expect to pay for a base salary for this position. The pay offered to a selected candidate will be determined based on factors such as (but not limited to) the scope and responsibilities of the position, the qualifications of the selected candidate, departmental budget availability, geographic location and external market pay for comparable jobs. The Company reserves the right to modify this range in the future, as needed, as market conditions change.
Pay range for this role
$197,500 - $260,000 USD