Model Quality Software Engineer, Claude Code
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
We're looking for a Staff Software Engineer to set technical direction at the intersection of engineering and research on the Claude Code team. In this role, you'll partner directly with Anthropic's researchers and engineering leadership to shape how we measure, understand, and improve Claude's coding capabilities. You'll architect the systems, tooling, and evaluation infrastructure that determine how quickly our research can move—and you'll be accountable for the technical decisions that ripple across the team and beyond. This is a senior individual contributor role for someone who has already built and owned systems at significant scale, and who is ready to operate as a technical leader: driving architecture, mentoring engineers, and influencing the direction of Claude Code itself.
Responsibilities
- Set technical direction for evaluation systems, research infrastructure, and internal tooling across the Claude Code team
- Architect eval frameworks that measure model capabilities across diverse coding tasks and scale with our research roadmap (a minimal illustrative sketch follows this list)
- Lead the design of infrastructure that enables researchers to run experiments at scale, and make the foundational tradeoffs that shape how the team operates for years
- Identify the highest-leverage engineering investments—often before anyone has asked for them—and drive them to completion
- Serve as a senior technical bridge between product and research, using strong product intuition to influence which capabilities we prioritize and how we measure progress against them
- Mentor and raise the bar for other engineers on the team; review designs, unblock peers, and model the engineering standards we want to scale
- Partner with research leads to translate ambiguous research questions into durable engineering solutions
- Own critical systems end-to-end, from architecture through production reliability, and take responsibility for their long-term health
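To make the eval-framework responsibility above concrete, here is a minimal, illustrative sketch of the kind of harness this work involves: run a model-proposed solution against a coding task's tests and aggregate pass rates across a suite. This is not Anthropic's actual infrastructure; every name here (CodingTask, run_task, pass_rate) is hypothetical, and it assumes Python with pytest available on the machine running the tasks.

```python
"""Illustrative sketch of a tiny coding-task eval harness (hypothetical, not Anthropic tooling)."""
from __future__ import annotations

import subprocess
import tempfile
from dataclasses import dataclass
from pathlib import Path


@dataclass
class CodingTask:
    """One evaluation item: a prompt, the tests a solution must satisfy, and a time limit."""
    task_id: str
    prompt: str
    test_code: str          # pytest-style tests for the solution
    timeout_seconds: int = 30


def run_task(task: CodingTask, solution_code: str) -> bool:
    """Write the model's solution plus the task's tests to a temp dir and run pytest.

    Returns True if all tests pass. A real harness would sandbox this step.
    """
    with tempfile.TemporaryDirectory() as workdir:
        work = Path(workdir)
        (work / "solution.py").write_text(solution_code)
        (work / "test_solution.py").write_text(task.test_code)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_solution.py"],
                cwd=work,
                capture_output=True,
                timeout=task.timeout_seconds,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0


def pass_rate(tasks: list[CodingTask], solutions: dict[str, str]) -> float:
    """Fraction of tasks whose model-provided solution passes its tests."""
    if not tasks:
        return 0.0
    passed = sum(run_task(t, solutions.get(t.task_id, "")) for t in tasks)
    return passed / len(tasks)
```

A production system would add sandboxing, parallel execution, retries, and far richer metrics than a single pass rate; the sketch only shows the basic shape of turning coding tasks into a measurable signal.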
You may be a good fit if you:
- Have 10+ years of software engineering experience, with a track record of operating as a Staff or Principal engineer (or equivalent) at a high-caliber organization
- Have architected and owned complex, high-stakes systems—pipelines, infrastructure, or platforms that orchestrate many components, handle significant state and logic, and serve multiple teams
- Have a history of setting technical direction that others follow—through design docs, architectural decisions, or technical strategy that shaped how a team or org operates
- Thrive in high-intensity environments with fast iteration cycles, and have the judgment to know when to move fast and when to invest in durability
- Take full ownership of ambiguous, open-ended problems and drive them to completion with minimal direction
- Are a power user of agentic coding tools with deep intuition about model capabilities and limitations
- Can dive into unfamiliar technical domains—ML systems, research workflows, novel infrastructure—and get to the frontier quickly
- Care deeply about correctness and reliability, and have raised engineering standards on teams you've been part of
- Are energized by working at the boundary between engineering and AI research, and by the prospect of influencing both
Strong candidates may also have experience with:
- Designing or scaling evaluation frameworks for ML systems
- Reinforcement learning infrastructure or training systems
- Leading technical initiatives in high-performance, demanding environments—trading firms, quant funds, frontier research labs, or fast-moving startups where intensity and technical excellence are the norm
- Research computing, scientific infrastructure, or developer platforms at scale
- A strong quantitative foundation (math, physics, or related fields)
- Expertise in Python and TypeScript
The annual compensation range for this role is listed below.
Annual Salary:
$405,000 - $485,000 USD
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process