
Senior Machine Learning Engineer

US

EVOCS OVERVIEW

EVOCS’s journey began with a mission to empower businesses with advisory expertise, paired with the right technologies, to provide them with comprehensive solutions to grow and prosper.

Founded by a team of passionate experts, EVOCS has grown into a trusted partner to a growing number of leaders across their respective industries. Our roots in employee-managed operations reflect our commitment to quality, consistency, and client success.

If you enjoy working in a hyper-fast-growing company, are eager to be part of an agile team, and want to be part of our success story, then let’s talk!

🎯 Role Overview

As a Senior Machine Learning Engineer, you will be the person we trust with the training side of our AI work. You’ll decide what to build, how to build it, and whether to build it at all. You will be responsible for the quality of the models we ship: the data they learn from, the pipelines that produce them, and the judgment calls that separate a useful model from an expensive one. You’ll be mentoring engineers who’ve never watched a loss curve diverge and felt something. 

 

🧩 What you will do

In this role, you will:

  • Be the primary person responsible for the data side of our AI initiatives. Some would call this the unglamorous work that decides whether the model works; for you, it’s the interesting part. You put the enthusiast in MLE.
  • Build and maintain training and retraining pipelines in Azure AI Foundry: the model catalog, fine-tuning workflows, deployment, drift monitoring, and closing the loop when production data reveals the eval set was lying to you.
  • Make the real model design calls: full fine-tune vs. LoRA/QLoRA vs. DPO vs. “better prompting would save us three weeks.” Know when not to train.
  • Run hyperparameter work that isn’t a grid search copied from a 2021 Medium post.
  • Operate distributed training setups and know what breaks at scale. Pick your poison: FSDP, DeepSpeed, Megatron, accelerate, etc.
  • Design eval harnesses that catch what’s actually wrong, with a skeptical eye on benchmark contamination.
  • Ship models into production as the load-bearing piece of the product, not a feature slapped on the side.
  • Mentor engineers who can call an inference endpoint but have never trained one themselves.

🧠 What you will bring

The top candidate will have the following skills:

  • 5+ years of ML engineering experience, with meaningful time spent fine-tuning transformer models end-to-end. We’re not talking notebook demos; we mean real runs with real eval harnesses, where you found the problems and fixed them.
  • Strong Python and PyTorch, plus fluency with the Hugging Face stack (transformers, datasets, accelerate, peft, trl). Bonus for JAX; extra bonus for having read a CUDA kernel and not flinched.
  • Hands-on experience building or seriously operating a distributed training setup, including standing one up from scratch.
  • Azure AI Foundry experience (or strong adjacent Azure ML experience and a willingness to get deep), plus SQL and at least one data pipeline tool (dbt, Airflow, Dagster, or Spark; we’re not religious).
  • Experiment tracking discipline (W&B, MLflow, or a spreadsheet you defend philosophically), plus the usual engineering staples: Git, Docker, and the ability to actually ship.
  • Fluent in English (written and spoken) – bilingual or near-native level
  • Strong interpersonal and communication skills – this is a client-facing role that involves frequent interaction via email, calls, and meetings

 

Ideally you have…

  • Run quantized models locally — GGUF, GPTQ, AWQ, MLX — and know what K-quants are and why Q4_K_M is usually the sweet spot.
  • Familiarity with the whisper.cpp / flash-moe universe — efficient inference on hardware that shouldn’t be able to do that. (Spoiler: our projects will go here.)
  • A strong take on MoE routing, speculative decoding, or why KV-cache management is more interesting than it has any right to be.
  • RLHF, DPO, or preference data curation experience.

👥 Our Values

We are privileged to serve our loyal customer base in our mission to build lasting relationships with our clients based on trust and mutual success. We strive to deliver exceptional quality and consistency through a white-glove approach. By empowering businesses with tailored solutions and insights, we help them achieve their goals and navigate the ever-evolving tech landscape.

The values we live by:

  • Customer-centric Solutions
  • Innovation & Excellence
  • Integrity & Transparency
  • Data-driven Decision Making

Apply for this job
