Founding GPU Software Engineer
About us
Symbolica is an AI research lab pioneering the application of category theory to enable logical reasoning in machines.
We’re a well-resourced, nimble team of experts on a mission to bridge the gap between theoretical mathematics and cutting-edge technologies, creating symbolic reasoning models that think like humans – precise, logical, and interpretable. While others focus on scaling data-hungry neural networks, we’re building AI that understands the structures of thought, not just patterns in data.
Our approach combines rigorous research with fast-paced, results-driven execution. We’re reimagining the very foundations of intelligence while simultaneously developing product-focused machine learning models in a tight feedback loop, where research fuels application.
Founded in 2022, we’ve raised over $30M from leading Silicon Valley investors, including Khosla Ventures, General Catalyst, Abstract Ventures, and Day One Ventures, to push the boundaries of applying formal mathematics and logic to machine learning.
Our vision is to create AI systems that transform industries, empowering machines to solve humanity’s most complex challenges with precision and insight. Join us to redefine the future of AI by turning groundbreaking ideas into reality.
About the role
As a Founding GPU Software Engineer at Symbolica, you will specialize in the design, development, and optimization of GPU kernels and algorithms to support the training and inference of symbolic reasoning models. You will leverage frameworks like CUDA and CUTLASS, along with compiler optimization techniques, to push the boundaries of performance for high-dimensional computation.
Your focus
- Developing and optimizing GPU kernels for high-performance symbolic reasoning and numerical algorithms using CUDA.
- Designing and implementing domain-specific compiler optimizations for GPU acceleration, ensuring efficient transformation and execution of symbolic computation workloads.
- Collaborating with mathematicians and researchers to design highly efficient implementations of complex algorithms.
- Analyzing and optimizing GPU performance, focusing on memory management, thread utilization, compiler-generated optimizations, and computation throughput.
- Building and maintaining scalable, reusable GPU-accelerated libraries tailored for symbolic reasoning workloads.
- Profiling and benchmarking kernel performance, identifying compiler inefficiencies, and implementing solutions for maximum efficiency.
About you
- Strong proficiency in at least one high-performance programming language (C, C++, Rust, Haskell, or Julia) and familiarity with Python.
- Proficiency in GPU programming with CUDA, including experience with kernel development, compiler optimizations, and performance tuning.
- Experience with CUTLASS and familiarity with tensor operations and matrix multiplication are a plus.
- In-depth knowledge of GPU architecture, including memory hierarchies, thread blocks, warps, and scheduling.
- Experience with compiler development, LLVM, or domain-specific language (DSL) optimizations.
- A proven track record of optimizing numerical algorithms for high-performance computing environments.
- Familiarity with LSP (Language Server Protocol) and a background in linear algebra, symbolic computation, or related mathematical fields are strong pluses.
We offer competitive compensation, including an attractive equity package, with salary and equity levels aligned to your experience and expertise.
📍 This is an onsite role based in our London office (66 City Rd).
Symbolica is an equal opportunities employer. We celebrate diversity and are committed to creating an inclusive environment for all employees, regardless of race, gender, age, religion, disability, or sexual orientation.
Apply for this job