
ML Infrastructure Engineer

San Francisco, US

About Us
Symbolica is an AI research lab pioneering the application of category theory to enable logical reasoning in machines.

We’re a well-resourced, nimble team of experts on a mission to bridge the gap between theoretical mathematics and cutting-edge technologies, creating symbolic reasoning models that think like humans – precise, logical, and interpretable. While others focus on scaling data-hungry neural networks, we’re building AI that understands the structures of thought, not just patterns in data.

Our approach combines rigorous research with fast-paced, results-driven execution. We’re reimagining the very foundations of intelligence while simultaneously developing product-focused machine learning models in a tight feedback loop, where research fuels application.

Founded in 2022, we’ve raised over $30M from leading Silicon Valley investors, including Khosla Ventures, General Catalyst, Abstract Ventures, and Day One Ventures, to push the boundaries of applying formal mathematics and logic to machine learning.

Our vision is to create AI systems that transform industries, empowering machines to solve humanity’s most complex challenges with precision and insight. Join us to redefine the future of AI by turning groundbreaking ideas into reality.

About the Role

As ML Infrastructure Engineer, working closely with our ML Infrastructure Lead, you will design, build, and optimize the infrastructure and tools that enable our research and development efforts. You'll accelerate the development of scalable infrastructure that powers our machine learning experiments, model training, and deployment.

Your work will sit at the intersection of research and engineering, ensuring our R&D team has the robust platform they need to push the boundaries of AI. You will work with our GPU vendors, cloud providers, and on-prem servers.

📍 This is an onsite role based in our SF office.

Key Responsibilities

  • Expanding and improving our infrastructure for large-scale machine learning workflows, including training systems and model deployment.
  • Developing tools and frameworks to support the global team’s experiments, ensuring reproducibility and scalability.
  • Optimizing compute resources and ensuring efficient use of cloud and on-prem hardware for training and inference.
  • Building and maintaining CI/CD pipelines tailored for machine learning development.
  • Collaborating closely with machine learning scientists, researchers, and engineers to identify and address infrastructure needs.

About You

  • 5+ years of experience in software engineering or infrastructure roles, with at least 2 years in machine learning infrastructure or MLOps.
  • Proficiency in scaling DevOps pipelines for traditional software (based on ArgoCD) as well as MLOps pipelines using orchestration tools like ZenML and Kubernetes.
  • Experience with Linux, containers, Nix, and Kubernetes, and an interest in making sure the infrastructure behind our models is secure by design.
  • Exceptional problem-solving skills, with the ability to resolve edge cases nimbly and with minimal disruption.

What We Offer

  • Competitive salary and early-stage equity package.
  • A high-trust, execution-first culture with minimal bureaucracy.
  • Direct ownership of meaningful projects with real business impact.
  • A rare opportunity to sit at the interface between deep research and real-world productisation.


Symbolica is an equal opportunities employer. We celebrate diversity and are committed to creating an inclusive environment for all employees, regardless of race, gender, age, religion, disability, or sexual orientation.
