Software Engineer, Cloud Inference Safeguards
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
We are seeking a Software Engineer to build and operate the safety, oversight, and intervention mechanisms that protect Claude on third-party cloud service provider (CSP) platforms. As the engineer responsible for Safeguards on those surfaces, you will ensure that every request served through our CSP partners is monitored for misuse, subject to policy enforcement, and compliant with the data residency and privacy commitments that enterprise CSP customers expect.
You will sit at the seam between the Safeguards organization and the Cloud Inference team: taking classifiers, detection signals, and enforcement policies developed by Safeguards and making them run reliably inside a CSP partner’s infrastructure at serving-path latency and scale. You will own the architecture that lets our safeguards operate within those constraints without gaps. You will build, deploy, and operate the multi-layered defenses that catch unwanted model behavior in real time, the telemetry pipelines that give us situational awareness over CSP traffic, and the enforcement hooks that let us act quickly when something goes wrong. Your work will directly determine whether Anthropic can ship frontier models on CSP platforms at the same safety bar we hold ourselves to on our first-party API.
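To give a concrete flavor of the serving-path side of this work, here is a minimal sketch of what an inline safeguard hook can look like. It is illustrative only: the names (check_request, score_request, Verdict), thresholds, and latency budget are invented for this posting rather than a description of our actual implementation. The point it makes is that detection, enforcement, and fail-safe behavior all have to fit inside a tight per-request latency budget.

```python
# Illustrative sketch only: names, thresholds, and the latency budget are
# hypothetical. A real serving-path hook would call production classifiers
# and enforcement services.
import asyncio
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # serve the request, but surface a signal for review
    BLOCK = "block"

@dataclass
class SafeguardResult:
    verdict: Verdict
    reason: str = ""

LATENCY_BUDGET_S = 0.050  # hypothetical per-request safeguards budget

async def score_request(prompt: str) -> float:
    # Stand-in for a real-time misuse classifier; a production hook would
    # call a model or rules engine here.
    return 0.0

async def check_request(prompt: str) -> SafeguardResult:
    try:
        # Detection has to fit inside the serving path's latency budget.
        score = await asyncio.wait_for(score_request(prompt), LATENCY_BUDGET_S)
    except asyncio.TimeoutError:
        # Fail-safe choice: serve the request but flag the gap, so missed
        # coverage shows up in telemetry instead of disappearing silently.
        return SafeguardResult(Verdict.FLAG, "classifier_timeout")
    if score >= 0.95:
        return SafeguardResult(Verdict.BLOCK, "high_confidence_misuse")
    if score >= 0.70:
        return SafeguardResult(Verdict.FLAG, "needs_human_review")
    return SafeguardResult(Verdict.ALLOW)
```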
Responsibilities:
- Build, deploy, and operate real-time safeguards infrastructure (classifiers, rate limits, enforcement actions, and intervention hooks) embedded directly in the third-party CSP inference serving path
- Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring we can detect abuse and monitor model behavior while honoring regionalization boundaries and enterprise contractual commitments (see the sketch after this list)
- Develop telemetry, logging, and evaluation pipelines that give Safeguards, Policy, and T&S operational teams situational awareness over CSP traffic and close the visibility gap between third-party and first-party serving
- Dive into the CSP serving stack to identify the lowest-impact points at which to gather signals or introduce interventions, without degrading latency or stability or compromising the overall architecture
- Hold a high operational bar: own on-call, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep Claude safe
- Work closely with Safeguards research, Policy & Enforcement, the Cloud Inference team, and CSP partner contacts to turn detection research and policy decisions into production enforcement that works inside a partner’s cloud
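As a rough illustration of the data residency responsibility above, the sketch below shows one hypothetical way to keep content-bearing safeguards signals pinned to their region of origin while still giving central teams aggregate visibility. All names here (SafeguardSignal, RegionalSink, emit_signal) are assumptions for illustration, not a description of our actual architecture.

```python
# Hypothetical sketch of residency-aware safeguards telemetry: raw signals
# stay in their home region; only coarse, content-free aggregates leave it.
from dataclasses import dataclass

@dataclass
class SafeguardSignal:
    request_id: str
    region: str           # CSP region the request was served from
    classifier: str
    score: float
    prompt_excerpt: str   # content-bearing field: must never leave its region

class RegionalSink:
    """Stand-in for a per-region log store inside the CSP boundary."""
    def __init__(self, region: str):
        self.region = region
        self.records: list[SafeguardSignal] = []

    def write(self, signal: SafeguardSignal) -> None:
        # Guardrail: refuse content-bearing records from other regions.
        if signal.region != self.region:
            raise ValueError("data residency violation")
        self.records.append(signal)

def emit_signal(signal: SafeguardSignal,
                sinks: dict[str, RegionalSink],
                global_counters: dict[str, int]) -> None:
    # The full signal, including content, stays pinned to its home region...
    sinks[signal.region].write(signal)
    # ...while only a content-free counter crosses the boundary, giving
    # central Safeguards teams situational awareness without moving data.
    key = f"{signal.classifier}:{signal.region}"
    global_counters[key] = global_counters.get(key, 0) + 1
```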
You may be a good fit if you:
- Have a Bachelor’s degree in Computer Science or Software Engineering, or comparable experience
- Have 4–10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust & safety, anti-abuse, fraud, or integrity systems
- Are proficient in Python and comfortable working across the stack—from request-path services to data pipelines to internal tooling
- Think adversarially: you can see a system from a bad actor’s perspective, anticipate how they will respond to countermeasures, and design defenses in depth rather than single points of enforcement
- Have experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets
- Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development
- Have strong communication skills and can explain complex technical and risk tradeoffs to non-technical stakeholders across Policy, Legal, and partner organizations
- Enjoy working in a fast-paced, early-stage environment and are comfortable adapting priorities as the AI landscape rapidly evolves
Strong candidates may also have experience with:
- Building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale
- Machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production
- Major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring)
- Data residency, privacy engineering, or compliance-constrained architectures, particularly where telemetry has to stay within regional or contractual boundaries
- Working closely with operational and human-review teams to build custom internal tooling, admin UX, and alerting
- Shipping defenses against motivated attackers, anticipating how they adapt, and sprinting to close gaps before they become incidents
- Operating at the intersection of platform/infrastructure engineering and trust & safety, credibly doing both rather than specializing in one
- Shipping software that runs inside someone else’s infrastructure (partner cloud, embedded deployment, or similar) and getting things done without controlling the whole stack
- Owning a cross-team seam independently, driving consensus across organizations, and making latency/safety tradeoff calls without escalation
- TypeScript or Rust, and agentic coding tools such as Claude Code
The annual compensation range for this role is listed below.
Annual Salary:
$405,000 - $485,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
