Software Engineer - Distributed ML Training
The world will be unrecognisable in 5 years.
Machine learning models are driving our cars, testing our eyesight, detecting our cancer, giving sight to the blind, giving speech to the mute, and dictating what we consume, enjoy, and think. These AI systems are already an integral part of our lives and will shape our future as a species.
Soon, we'll conjure unlimited content: from never-ending TV series (where we’re the main character) to personalised tutors that are infinitely patient and leave no student behind. We’ll augment our memories with foundation models—individually tailored to us through RLHF and connected directly to our thoughts via Brain-Machine Interfaces—blurring the lines between organic and machine intelligence and ushering in the next generation of human development.
This future demands immense, globally accessible, uncensorable computational power. Gensyn is the machine learning compute protocol: it translates compute into an always-on commodity resource—outside of centralised control and as ubiquitous as electricity—accelerating AI progress and ensuring that this revolutionary technology is accessible to all of humanity through a free market.
Our Principles:
AUTONOMY
- Don’t ask for permission - we have a constraint culture, not a permission culture.
- Claim ownership of any work stream and set its goals/deadlines, rather than waiting to be assigned work or relying on job specs.
- Push & pull context on your work rather than waiting for information from others and assuming people know what you’re doing.
- No middle managers - we don’t (and will likely never) have middle managers.
FOCUS
- Small team - misalignment and politics scale super-linearly with team size. Small protocol teams rival much larger traditional teams.
- Thin protocol - build and design thinly.
- Reject waste - guard the company’s time, rather than wasting it in meetings without clear purpose/focus, or bikeshedding.
REJECT MEDIOCRITY
- Give direct feedback to everyone immediately rather than avoiding unpopularity, expecting things to improve naturally, or avoiding short-term pain at the cost of extreme long-term pain.
- Embrace an extreme learning rate rather than assuming limits to your ability/knowledge.
- No quit - push to the final outcome, despite any barriers.
Responsibilities:
- System Design: develop an elegant system for orchestrating machine learning execution to enable training across our uniquely decentralised and heterogeneous infrastructure.
- Performance Optimisation: continuously profile and optimise training algorithms to ensure peak performance.
- Research and Development: implement novel research and build out new mechanisms and algorithms to solve open distributed ML training problems.
- Engineering Support: collaborate with the team on broader ML-related issues, such as reproducible training.
- Documentation and Communication: contribute to technical reports and papers for publication within the wider ML community.
Minimum Requirements:
- Parallel Training Experience: proven experience in parallelising training for a wide range of models on different types of hardware and network topologies.
- Distributed Training Expertise: hands-on experience designing training systems on large clusters, using various parallelisation models (pipeline, data, tensor parallelism), training modes (local optimisation, global optimisation), and optimisation techniques (quantisation, gradient compression, etc.).
- Networking Knowledge: deep understanding and experience with common networking protocols (IP, TCP, UDP, HTTP) and communication backends (NCCL, GLOO, MPI).
- Computer Science Background: strong understanding of computational complexity (time, space) and broad knowledge of algorithms and data structures.
- Applied Research Experience: comfortable working in an applied research environment with high autonomy and unpredictable timelines, requiring a high degree of collaboration and communication.
Nice-to-haves:
- Rust: strong experience with systems programming in Rust (you know what a 'lifetime' is and understand the purpose of Pin); see the illustrative sketch after this list.
- Open source work: experience working with large open source codebases - either as a maintainer or as a trusted contributor.
- Research background: published research in the distributed systems or ML domains.
- Blockchain: understanding of blockchain fundamentals.
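To calibrate the Rust point above, here is a minimal, purely illustrative sketch of the two concepts named in that bullet: an explicit lifetime annotation and a pinned future. The names (first_half, inspect, gradients) are arbitrary examples chosen for this sketch, not anything from our codebase.

```rust
use std::future::Future;
use std::pin::{pin, Pin};

// Lifetime: the returned slice borrows from `values`, so 'a ties the
// output's validity to the lifetime of the input's owner.
fn first_half<'a>(values: &'a [f32]) -> &'a [f32] {
    &values[..values.len() / 2]
}

// Pin: the caller guarantees the future will not be moved again, which is
// what self-referential futures (and async executors) rely on.
fn inspect(fut: Pin<&mut impl Future<Output = ()>>) {
    let _ = fut; // a real executor would poll this to completion
}

fn main() {
    let gradients = vec![0.1_f32, 0.2, 0.3, 0.4];
    println!("{:?}", first_half(&gradients));

    let mut fut = pin!(async {});
    inspect(fut.as_mut());
}
```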
Compensation / Benefits:
- Competitive salary + share of equity and token pool
- Fully remote work - we hire between the West Coast (PT) and Central Europe (CET) time zones
- Relocation Assistance - available for those who would like to relocate after being hired (anywhere from PT through CET time zones)
- 4x all-expenses-paid company retreats around the world per year
- Whatever equipment you need
- Paid sick leave
- Private health, vision, and dental insurance - including spouse/dependents [🇺🇸 only]
Apply for this job