
Machine Learning Systems Engineer

Remote, Remote Non-EU

Who We Are

At RelationalAI, we are building the future of intelligent data systems through our cloud-native relational knowledge graph management system—a platform designed for learning, reasoning, and prediction.

We are a remote-first, globally distributed team with colleagues across six continents. From day one, we’ve embraced asynchronous collaboration and flexible schedules, recognizing that innovation doesn’t follow a 9-to-5.

We are committed to an open, transparent, and inclusive workplace. We value the unique backgrounds of every team member and believe in fostering a culture of respect, curiosity, and innovation. We support each other’s growth and success—and take the well-being of our colleagues seriously. We encourage everyone to find a healthy balance that affords them a productive, happy life, wherever they choose to live.

We bring together engineers who love building core infrastructure, obsess over developer experience, and want to make complex systems scalable, observable, and reliable.

Machine Learning Systems Engineer 

Location: Remote (San Francisco Bay Area / North America / South America)

Experience Level: 3+ years of experience in machine learning engineering or research

About ScalarLM

This role involves working heavily with the ScalarLM framework and team.

ScalarLM unifies vLLM, Megatron-LM, and HuggingFace for fast LLM training, inference, and self-improving agents, all through an OpenAI-compatible interface. It builds on the vLLM inference engine, the Megatron-LM training framework, and the HuggingFace model hub, combining their capabilities into a single platform. This lets users easily perform LLM inference and training and build higher-level applications such as agents with a twist: they can teach themselves new abilities via backpropagation.
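Because ScalarLM exposes an OpenAI-compatible interface, talking to a deployment can look like a standard OpenAI client call. The following is a minimal sketch assuming a locally running server; the endpoint URL, model name, and API key are illustrative placeholders rather than details from this posting.

# Minimal sketch: querying a ScalarLM deployment via its OpenAI-compatible API.
# The base_url, api_key, and model name below are assumed placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical ScalarLM server address
    api_key="placeholder-key",            # local deployments may not require a real key
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize what ScalarLM does."}],
)
print(response.choices[0].message.content)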

ScalarLM is inspired by the work of Seymour Roger Cray (September 28, 1925 – October 5, 1996), an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades, and founded Cray Research, which built many of these machines. Called "the father of supercomputing", Cray has been credited with creating the supercomputer industry.

It is a fully open source project (CC0-licensed) focused on democratizing access to cutting-edge LLM infrastructure: a unified platform that combines training and inference, enabling the development of self-improving AI agents similar to DeepSeek R1.

ScalarLM is supported and maintained by TensorWave in addition to RelationalAI.

The Role:

As a Machine Learning Systems Engineer, you will contribute directly to our machine learning infrastructure and to the ScalarLM open source codebase, and you will build large-scale language model applications on top of it. You'll operate at the intersection of high-performance computing, distributed systems, and cutting-edge machine learning research, developing the fundamental infrastructure that enables researchers and organizations worldwide to train and deploy large language models at scale.

This is an opportunity to take on technically demanding projects, contribute to foundational systems, and help shape the next generation of intelligent computing.

You Will: 

  • Contribute code and performance improvements to the open source project.
  • Develop and optimize distributed training algorithms for large language models.
  • Implement high-performance inference engines and optimization techniques.
  • Work on integration between vLLM, Megatron-LM, and HuggingFace ecosystems.
  • Build tools for seamless model training, fine-tuning, and deployment.
  • Optimize performance on advanced GPU architectures.
  • Collaborate with the open source community on feature development and bug fixes.
  • Research and implement new techniques for self-improving AI agents.

Who You Are

Technical Skills:

  • Programming Languages: Proficiency in both C/C++ and Python
  • High Performance Computing: Deep understanding of HPC concepts, including:
    • MPI (Message Passing Interface) programming and optimization
    • Bulk Synchronous Parallel (BSP) computing models
    • Multi-GPU and multi-node distributed computing
    • CUDA/ROCm programming experience preferred
  • Machine Learning Foundations:
    • Solid understanding of gradient descent and backpropagation algorithms
    • Experience with transformer architectures and the ability to explain their mechanics
    • Knowledge of deep learning training and its applications
    • Understanding of distributed training techniques (data parallelism, model parallelism, pipeline parallelism, large batch training, optimization)

Research and Development 

  • Publications: Experience with machine learning research and publications preferred
  • Research Skills: Ability to read, understand, and implement techniques from recent ML research papers
  • Open Source: Demonstrated commitment to open source development and community collaboration

Experience

  • 3+ years of experience in machine learning engineering or research.
  • Experience with large-scale distributed training frameworks (Megatron-LM, DeepSpeed, FairScale, etc.).
  • Familiarity with inference optimization frameworks (vLLM, TensorRT, etc.).
  • Experience with containerization (Docker, Kubernetes) and cluster management.
  • Background in systems programming and performance optimization.

Bonus points if:

  • PhD or MS in Computer Science, Computer Engineering, Machine Learning, or related field.
  • Experience with SLURM, Kubernetes, or other cluster orchestration systems.
  • Knowledge of mixed precision training, data parallel training, and scaling laws.
  • Experience with transformer architectures, PyTorch, and decoding algorithms.
  • Familiarity with the high-performance GPU programming ecosystem.
  • Previous contributions to major open source ML projects.
  • Experience with MLOps and model deployment at scale.
  • Understanding of modern attention mechanisms (multi-head attention, grouped query attention, etc.).

Why RelationalAI

RelationalAI is committed to an open, transparent, and inclusive workplace. We value the unique backgrounds of our team. We are driven by curiosity, value innovation, and help each other to succeed and to grow. We take the well-being of our colleagues seriously, and offer flexible working hours so each individual can find a healthy balance that affords them a productive, happy life wherever they choose to live.

🌎 Global Benefits at RelationalAI

At RelationalAI, we believe that people do their best work when they feel supported, empowered, and balanced. Our benefits prioritize well-being, flexibility, and growth, ensuring you have the resources to thrive both professionally and personally.

  • We are all owners in the company and reward you with a competitive salary and equity.
  • Work from anywhere in the world.
  • Comprehensive benefits coverage, including global mental health support.
  • Open PTO – Take the time you need, when you need it.
  • Company Holidays, Your Regional Holidays, and RAI Holidays—where we take one Monday off each month, followed by a week without recurring meetings, giving you the time and space to recharge.
  • Paid parental leave – Supporting new parents as they grow their families.
  • We invest in your learning & development.
  • Regular team offsites and global events – Building strong connections while working remotely through team offsites and global events that bring everyone together.
  • A culture of transparency & knowledge-sharing – Open communication through team standups, fireside chats, and open meetings.

Country Hiring Guidelines:

RelationalAI hires around the world. All of our roles are remote; however, some locations might carry specific eligibility requirements.

Because of this, understanding your location and any visa support needs helps us better prepare to onboard our colleagues.

Our People Operations team can help answer any questions about location after starting the recruitment process.

 

Privacy Policy: EU residents applying for positions at RelationalAI can see our Privacy Policy here.

California residents applying for positions at RelationalAI can see our Privacy Policy here.

 

RelationalAI is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, gender identity or expression, marital status, national origin, disability, protected veteran status, race, religion, pregnancy, sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

