Infrastructure Engineer, Security
Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We’re looking for an infrastructure engineer to own and evolve the security infrastructure that underpins our foundation models. In this role, you’ll work across compute, storage, networking, and data platforms, making sure our systems are secure, reliable, and built to scale. You’ll shape controls, architecture, and tooling so that security is part of how the platform works by default. You’ll partner closely with research and product teams, enabling them to move quickly while keeping our models, data, and environments protected.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
What You’ll Do
- Architect security patterns for platforms and services, including network segmentation, service-to-service authentication, RBAC, and policy enforcement in Kubernetes and cloud environments.
- Manage identity, access, and secrets for humans and services: workload and cross-cloud identity, least-privilege IAM, and secrets management.
- Build secure platforms for data ingestion, processing, and curation: classification, encryption, access controls, and safe sharing patterns across teams.
- Write threat models and review designs with researchers and engineers to help them ship features and experiments in a safe, scalable way.
- Automate security checks and build guardrails: policy-as-code, secure infrastructure baselines, validation in CI/CD, and tools that make the secure path the easiest one.
Skills and Qualifications
Minimum qualifications:
- Bachelor’s degree in engineering or a related field, or equivalent experience.
- Strong background in containers and orchestration (e.g., Kubernetes) and how to secure them (namespaces, network policies, pod security, admission controls, etc.).
- Practical experience with Infrastructure as Code (Terraform or similar), including secure patterns for provisioning networks, IAM, and shared services.
- Solid understanding of cloud networking and security: VPCs, load balancers, service discovery, mTLS, firewalls, and zero-trust-style architectures.
- Proficiency with a systems language such as Rust and scripting in Python for building platform components and internal tools.
- Evidence of owning complex, production-critical systems, including debugging issues that span infra, security, and application layers.
Preferred qualifications (we encourage you to apply even if you don't meet all of these):
- Experience with ML infrastructure, GPU clusters, or large-scale training environments (schedulers, job queues, shared storage, multi-tenant clusters).
- Background in AI labs, HPC environments, or ML-heavy organizations where both security and performance are first-class concerns.
- Experience profiling and tuning high-throughput systems, and an ability to reason about the cost of additional security layers.
- Talks, blogs, or publications on infrastructure security, distributed systems, or performance engineering.
- Open-source contributions to security, orchestration, observability, or infrastructure tooling.
- Familiarity with securing specialized hardware (GPUs, TPUs) and their integrations into training and inference pipelines.
Logistics
- Location: This role is based in San Francisco, California.
- Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 to $475,000 USD.
- Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
- Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Apply for this job