Staff ML Ops Engineer
About Engine
At Engine, we’re transforming business travel into something personalized, rewarding, and simple. For too long, managing travel and spend has been overwhelming and fragmented — we’re here to change that. We believe the future of travel should be seamless and powered by technology that delights customers at every step. That’s why we’re building a platform that brings together corporate travel, a powerful charge card, and modern spend management in one place.
To make this vision real, we’re looking for exceptional, mission-driven people to help redefine how businesses manage and experience travel.
More than 20,000 companies already rely on Engine to support over 1 million travelers and billions of dollars in bookings each year. Cash flow positive and growing rapidly, we pair exclusive Engine-only rates, industry-leading rewards, and intelligent automation to help businesses save money while delivering world-class personalization and convenience.
Backed by Telescope Partners, Blackstone, and Permira, Engine has been recognized as one of the fastest-growing travel and fintech platforms in North America, with honors including the Deloitte Fast 500 and Built In’s Best Places to Work.
Your Mission:
The Anti-Fraud & AI team builds cutting-edge ML and LLM-powered features that transform how businesses experience travel while stopping fraud and malicious use of our systems. As a Staff ML Ops Engineer, you'll architect and scale the infrastructure that powers our AI systems and drive the development of intelligent features that help millions of travelers. You'll be the bridge between advanced ML research and production-ready systems, ensuring our AI capabilities are both innovative and reliably delivered at scale.
Here’s what you’ll take charge of:
Infrastructure & Platform Engineering:
- Model Serving Excellence: Deploy and operate ML models optimized for low-latency, high-throughput inference in production environments to power fraud detection, personalized user experiences, and more
- API Development: Build and maintain clean gRPC and REST interfaces to expose model predictions and AI features to upstream services
- Platform Architecture: Design and build the infrastructure that powers RAG pipelines, vector databases, and real-time inference systems
- Performance Optimization: Tackle cold-start issues, implement intelligent batching strategies, optimize serialization, and manage memory efficiently for LLM workloads
 
AI/ML Feature Development:
- LLM Systems Architecture: Own and optimize our LLM-based systems through systematic experimentation, prompt engineering, and infrastructure optimization
- RAG Pipeline Engineering: Architect production-ready RAG systems including data ingestion pipelines, chunking strategies, vector database management, and retrieval optimization
- Feature Delivery: Lead development of new AI-powered features from proof-of-concept through production deployment
- System Optimization: Design and implement evaluation frameworks using metrics like latency, throughput, accuracy, and user engagement
 
Platform Operations & Scaling:
- Observability: Instrument comprehensive metrics (latency, throughput, error rates, model drift) and build dashboards for real-time monitoring
- CI/CD & Versioning: Containerize ML workloads, automate model promotions across environments, and manage blue-green deployments with rollback capabilities
- Infrastructure as Code: Manage deployments via GitHub Actions and Terraform, ensuring reproducible and scalable infrastructure
 
Cross-Functional Leadership:
- Stakeholder Collaboration: Partner with product, data science, and engineering teams to translate business requirements into scalable AI solutions
- Innovation: Research and evaluate emerging technologies to keep Engine at the forefront of AI capabilities
 
What You’ll Bring to Engine:
We’re looking for someone who’s ready to make an impact and grow alongside us:
- Experience: 6+ years of industry experience building and scaling ML infrastructure, with hands-on experience deploying LLM-powered applications to production
- ML Infrastructure Expertise: Deep experience with model serving frameworks (TensorFlow Serving, TorchServe, Triton) and with building low-latency inference pipelines
- LLM & RAG Systems: Proven experience building production RAG systems, including vector database management (Pinecone, Weaviate, Qdrant), embedding strategies, and retrieval optimization
- Technical Excellence: Expert-level proficiency in Python, with strong skills in modern ML frameworks (PyTorch, TensorFlow) and LLM tooling (LangChain, LlamaIndex)
- Production Systems: Experience with Docker, Kubernetes, and orchestrating ML workloads at scale
- API Development: Strong experience building production-grade gRPC and REST APIs that handle millions of requests
- Cloud & DevOps: Hands-on experience with AWS/GCP/Azure, Infrastructure as Code (Terraform), and CI/CD pipelines
- Monitoring & Observability: Experience with Datadog, Prometheus, OpenTelemetry, or similar tools for production monitoring
 
Bonus Points:
- Experience with Argo CD, MLflow, or other ML lifecycle management tools
- Background in travel, e-commerce, or marketplace platforms
- Experience with multi-armed bandits or online learning systems
- Familiarity with feature stores and real-time feature serving (Redis, Kafka)
- Contributions to open-source ML/LLM projects
- Publications or presentations in the ML/MLOps community
 
Applications for this role will be accepted through March 1, 2026 or until the role is filled. We encourage you to apply early, as we may begin reviewing applications before the deadline.
Compensation
Our compensation packages are based on several factors, including your experience, expertise, and location. In addition to a competitive base salary, total compensation may include equity and/or variable pay (OTE). Your recruiter will share your complete compensation package as you move through the process.
Base Pay Range
$148,200 - $205,000 USD
The Engine Edge: Perks & Compensation
We believe in rewarding great work with great benefits:
- Compensation: Competitive base pay tied to role and experience, with opportunities for bonuses, commissions, and equity
- Benefits: Check out our full list at engine.com/culture
- Environments for Success: Different roles need different environments to drive success, which is why we have a hybrid-hub model. Whether you're in one of our amazing offices or fully remote, we'll make sure you have what you need to succeed
 
Perks and benefits may vary based on employment type, location, and more.
Ready to Build the Future of Work Travel?
Join us on our mission to transform how work travel works—for businesses, for travelers, and for the industry. Apply now and let’s make travel simpler, smarter, and more enjoyable—together.