● You’ve deployed AI/ML models in production and understand the nuances of performance, drift, and observability
● You have hands-on experience integrating models into APIs, microservices, or enterprise workflows
● You know how to manage the lifecycle of LLM applications, from prompt design and embedding generation to context tuning and monitoring
● You’re fluent in Python and comfortable working with AI/ML libraries (e.g., Hugging Face Transformers, PyTorch, TensorFlow)
● You’ve worked with LangChain, LangGraph, Databricks notebooks, and Spark pipelines
● You’re curious about GenAI trends but also pragmatic: you know what belongs in production and what doesn’t
● You’re familiar with the principles of responsible AI, including fairness, explainability, and compliance in regulated industries
● You collaborate well with cross-functional teams and have experience working in consulting or enterprise delivery environments
Requirements
● 6+ years in AI/ML or applied data science, including 2–3+ years building production-grade systems
● Strong proficiency in Python and frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers
● Practical experience with LLM applications: prompt engineering, embedding pipelines, and retrieval-augmented generation (RAG) workflows
● Familiarity with vector databases and similarity-search libraries (e.g., FAISS, Weaviate, Chroma, Milvus)
● Hands-on experience with LangChain and LangGraph for agent orchestration, and Langfuse for observability
● Experience working in Databricks, using Delta Lake and Spark for data processing
● Exposure to MLOps/LLMOps workflows: prompt versioning, CI/CD pipelines, trace logging, and automated evaluations
● Demonstrated success supporting technical delivery in consulting, integration, or platform engineering teams
● Strong documentation habits and clear communication skills for working in distributed, cross-functional teams