
Senior ML Engineer – ML/Inference

Remote

MARA is redefining the future of sovereign, energy-aware AI infrastructure. We’re building a modular platform that unifies IaaS, PaaS, and SaaS, enabling governments, enterprises, and AI innovators to deploy, scale, and govern workloads across data centers, edge environments, and sovereign clouds.

MARA is seeking a Senior Machine Learning Engineer to lead the deployment, optimization, and lifecycle management of the AI models powering our inference and agentic platforms. This role sits at the intersection of ML research, infrastructure, and systems engineering, responsible for taking foundation and custom models from prototype to production with efficiency, observability, and scalability. The ideal candidate combines deep knowledge of inference optimization, orchestration frameworks, and RAG pipelines with a strong hands-on background in MLOps and distributed systems.

 

ESSENTIAL DUTIES AND RESPONSIBILITIES

  • Own the end-to-end lifecycle of ML model deployment—from training artifacts to production inference services.
  • Design, build, and maintain scalable inference pipelines using modern orchestration frameworks (e.g., Kubeflow, Airflow, Ray, MLflow).
  • Implement and optimize model serving infrastructure for latency, throughput, and cost efficiency across GPU and CPU clusters.
  • Develop and tune Retrieval-Augmented Generation (RAG) systems, including vector database configuration, embedding optimization, and retriever–generator orchestration.
  • Collaborate with product and platform teams to integrate model APIs and agentic workflows into customer-facing systems.
  • Evaluate, benchmark, and optimize large language and multimodal models using quantization, pruning, and distillation techniques.
  • Design CI/CD workflows for ML systems, ensuring reproducibility, observability, and continuous delivery of model updates.
  • Contribute to the development of internal tools for dataset management, feature stores, and evaluation pipelines.
  • Monitor production model performance, detect drift, and drive improvements to reliability and explainability.
  • Explore and integrate emerging agentic and orchestration frameworks (LangChain, LangGraph, CrewAI, etc.) to accelerate development of intelligent systems.

 

QUALIFICATIONS

  • 5+ years of experience in applied ML or ML infrastructure engineering.
  • Proven expertise in model serving and inference optimization (TensorRT, ONNX, vLLM, Triton, DeepSpeed, or similar).
  • Strong proficiency in Python, with experience building APIs and pipelines using FastAPI, PyTorch, and Hugging Face tooling.
  • Experience configuring and tuning RAG systems (vector databases such as Milvus, Weaviate, LanceDB, or pgvector).
  • Solid foundation in MLOps practices: versioning (MLflow, DVC), orchestration (Airflow, Kubeflow), and monitoring (Prometheus, Grafana, Sentry).
  • Familiarity with distributed compute systems (Kubernetes, Ray, Slurm) and cloud ML stacks (AWS SageMaker, GCP Vertex AI, Azure ML).
  • Understanding of prompt engineering, agentic frameworks, and LLM evaluation.
  • Strong collaboration and documentation skills, with the ability to bridge ML research, DevOps, and product development.

 

PREFERRED EXPERIENCE

  • Background in HPC, ML infrastructure, or sovereign/regulated environments.
  • Familiarity with energy-aware computing, modular data centers, or ESG-driven infrastructure design.
  • Experience collaborating with European and global engineering partners.
  • Strong communicator who can bridge engineering, business, and vendor ecosystems.
