
Machine Learning Engineer - Full Stack ML Pipelines
EVOCS OVERVIEW
EVOCS’s journey began with a mission to empower businesses with advisory expertise, combined with the right technologies, to provide comprehensive solutions that help them grow and prosper.
Founded by a team of passionate experts, EVOCS has grown into a trusted partner for a growing number of leaders across their respective industries. Our roots in employee-managed operations reflect our commitment to quality, consistency, and client success.
If you enjoy working in a hyper-fast-growing company, are eager to be part of an agile team, and want to be part of our success story, then let’s talk!
🎯 Role Overview
We are seeking an experienced Machine Learning Engineer to design, build, and deploy end-to-end ML pipelines across multi-cloud environments. This role sits at the intersection of data engineering, machine learning, and software development — requiring a rare blend of deep ML expertise and production-grade engineering skills. You will own the full lifecycle of ML systems, from data ingestion and feature engineering through model training, deployment, and monitoring at scale.
🧩 What you will do
In this role, you will:
- Architect and implement end-to-end machine learning pipelines spanning data collection, preprocessing, feature engineering, model training, evaluation, deployment, and monitoring.
- Design and deploy ML workloads across AWS (SageMaker, Lambda, EMR), Google Cloud Platform (Vertex AI, BigQuery ML, Dataflow), and Microsoft Azure (Azure ML, Databricks, Synapse Analytics).
- Build and optimize models using a broad range of methodologies including transformer-based architectures (BERT, GPT, RoBERTa), classical NLP techniques, gradient boosting frameworks (XGBoost, LightGBM, CatBoost), deep learning (CNNs, RNNs, LSTMs), and ensemble methods.
- Develop NLP solutions for text classification, named entity recognition, sentiment analysis, semantic search, summarization, and question answering.
- Implement robust feature stores, data versioning, and experiment tracking using tools such as MLflow, Weights & Biases, DVC, and Feature Store platforms.
- Build scalable data pipelines using Apache Spark, Apache Kafka, Apache Airflow, and cloud-native orchestration tools.
- Containerize and orchestrate ML services using Docker, Kubernetes, and serverless architectures for high-availability inference endpoints.
- Establish CI/CD pipelines for ML (MLOps) to automate model retraining, validation, A/B testing, and canary deployments.
- Monitor model performance in production, detect data drift and concept drift, and implement automated retraining triggers.
- Collaborate with data scientists, product managers, and software engineers to translate business requirements into scalable ML solutions.
- Maintain thorough documentation, conduct code reviews, and contribute to internal ML best practices and standards.
🧠 What you will bring
The ideal candidate will bring the following skills:
- Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, Statistics, or a related quantitative field. PhD is a plus.
- Experience: 5+ years of professional experience building and deploying ML models in production environments.
- Programming: Advanced proficiency in Python; strong familiarity with Java, Scala, or Go is a plus.
- ML Frameworks: Hands-on experience with PyTorch, TensorFlow, Hugging Face Transformers, scikit-learn, XGBoost, LightGBM, and spaCy.
- NLP Expertise: Demonstrated experience fine-tuning transformer models (BERT, DistilBERT, GPT variants), building NLP pipelines, and working with text embeddings and vector databases.
- Cloud Platforms: Production experience with at least two of AWS, GCP, and Azure, including their respective ML and data services.
- Data Engineering: Proficiency with SQL, Spark, and distributed data processing frameworks; experience with both batch and real-time streaming pipelines.
- MLOps & Infrastructure: Experience with Docker, Kubernetes, Terraform or CloudFormation, and CI/CD tools (GitHub Actions, Jenkins, GitLab CI).
- Experiment Tracking: Familiarity with MLflow, Weights & Biases, or equivalent platforms for reproducibility and model governance.
Ideally you have…
- Experience with large language models (LLMs), retrieval-augmented generation (RAG), and prompt engineering.
- Familiarity with graph neural networks, reinforcement learning, or time-series forecasting methods.
- Experience building real-time inference systems with sub-100ms latency requirements.
- Contributions to open-source ML projects or published research in ML/NLP.
- Experience with data mesh or data lakehouse architectures.
- Knowledge of responsible AI practices including fairness, explainability (SHAP, LIME), and bias mitigation.
- Professional cloud certifications (AWS ML Specialty, GCP Professional ML Engineer, Azure AI Engineer).
⚙️ Technical Environment
You'll work with a modern stack that includes but is not limited to: Python, PyTorch, TensorFlow, Hugging Face, XGBoost, Apache Spark, Airflow, Kafka, Docker, Kubernetes, Terraform, MLflow, AWS SageMaker, GCP Vertex AI, Azure ML, PostgreSQL, Redis, Elasticsearch, and vector databases (Pinecone, Weaviate, or Milvus).
👥 Our Values
We are privileged to serve a loyal customer base, and our mission is to build lasting relationships with our clients based on trust and mutual success. We strive to deliver exceptional quality and consistency through a white-glove approach. By empowering businesses with tailored solutions and insights, we help them achieve their goals and navigate the ever-evolving tech landscape.
The values we live by:
- Customer-centric Solutions
- Innovation & Excellence
- Integrity & Transparency
- Data-driven Decision Making
Apply for this job
