Become a Machine Learning Engineer, Generative AI & LLMs Expert
Factored was conceived in Palo Alto, California by Andrew Ng and a team of highly experienced AI researchers, educators, and engineers to help address the significant global shortage of qualified AI & Machine Learning engineers. We know that exceptional technical aptitude, intelligence, communication skills, and passion are equally distributed around the world, and we are committed to testing, vetting, and nurturing the most talented engineers for our program and on behalf of our clients.
We are seeking a Machine Learning Engineer who is passionate about building state-of-the-art Generative AI solutions, particularly Large Language Models (LLMs) and Multi-Agent Systems. You will strengthen both your technical and soft skills through a combination of online learning, collaborative sessions, and hands-on practice, with guidance from our Training and Talent teams along the way. At Factored, you will be rewarded with an amazing team that supports you, a rich culture, shared success, and the flexibility to work from the comfort of your home.
BOOSTING PROGRAM!
We are excited to launch a six-week Boosting Program, a full-time intensive experience designed to transform strong quantitative and coding skills into cutting-edge expertise. The program focuses on building advanced capabilities: exploring key LLM concepts, experimenting with frameworks, and understanding the components of modern GenAI systems, including the next evolution in AI systems: Agentic Architectures. You’ll learn how to build multi-agent workflows and design intelligent systems capable of reasoning and action, while deepening your knowledge of and exposure to RAG and LLMOps.
As a Machine Learning Engineer (LLMs), you’ll design, implement, and optimize advanced NLP solutions to solve complex, real-world challenges across diverse domains. Our goal is to help you sharpen your LLM expertise, elevating you from an expert to a true master. You won’t just use the tools; you’ll understand their inner workings, the fundamentals behind them, and how to push their boundaries.
Functional Responsibilities:
- Engage in a combination of asynchronous learning (self-paced online content) and synchronous sessions led by our Training and Talent teams, providing opportunities to share insights, discuss challenges, and receive expert guidance.
- Lead End-to-End ML Development: Design, develop, and deploy advanced machine learning models, with a strong focus on Retrieval-Augmented Generation (RAG), LLMs, and Generative AI solutions.
- Build Intelligent Systems: Create and implement intelligent systems that effectively integrate retrieval and generation techniques, significantly enhancing model performance and real-world usability.
- Model Optimization & Deployment: Implement and optimize complex NLP and deep learning models using frameworks like PyTorch/TensorFlow for robust, scalable production environments.
- Drive Innovation: Explore and apply cutting-edge methodologies, such as advanced prompt engineering, model fine-tuning, and AI agent automation, to push the boundaries of our solutions.
- Ensure Scalability: Work with cloud computing platforms (AWS, GCP) or equivalent on-premise solutions to guarantee reliable and scalable deployment of all AI systems.
- Cross-Functional Collaboration: Work closely with software engineers, data scientists, and product teams to seamlessly integrate AI solutions into real-world applications.
- Effective Communication: Clearly and effectively communicate complex engineering challenges and innovative solutions to both fellow engineers and business partner teams.
Qualifications:
- Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, or a related quantitative field.
- Experience: 5+ years of hands-on experience in developing and deploying machine learning models, with 2+ years of direct experience working with production NLP and deep learning models.
- English Proficiency: Excellent English communication skills.
- Technical Stack: Strong Python skills and adherence to coding best practices.
- Framework Proficiency: Deep proficiency with PyTorch/TensorFlow for deep learning and NLP applications.
- Generative AI Exposure:
  - Expertise in Retrieval-Augmented Generation (RAG) and other techniques to enhance AI model capabilities.
  - Hands-on experience with LLMs and Generative AI frameworks (e.g., Hugging Face, OpenAI API, LangChain).
  - Strong understanding of prompting techniques, including the trade-offs between prompt engineering and fine-tuning.
  - Proficiency in developing AI agents and automation workflows.
- Infrastructure: Experience with cloud computing (AWS, GCP) or equivalent on-premise platforms for model deployment.
- Communication: Excellent communication skills, with the ability to articulate complex engineering concepts clearly and effectively to diverse audiences.