Technical Program Manager, Security - Coordinated Vulnerability Disclosure
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
As a Technical Program Manager for Security, Coordinated Vulnerability Disclosure (CVD), you will build and lead the programs that govern how Anthropic responsibly discloses software vulnerabilities discovered by our AI-powered tools, including Claude, Patchy, and Claude Code. These tools have already found real zero-days in Firefox, the Linux kernel, and other critical software. The challenge is no longer just finding vulnerabilities; it is managing the consequences of finding them at unprecedented scale and speed.
Traditional coordinated disclosure frameworks were designed for a world where a researcher might find one serious vulnerability every few weeks. AI-powered discovery has changed that equation entirely; Claude can surface hundreds of findings in a single codebase in a single day. This role exists to ensure that every finding reaches the right maintainer, at the right pace, with the right context, and that Anthropic meets its Responsible Scaling Policy (RSP) commitments in the process.
You will own the end-to-end CVD lifecycle: from internal triage and human validation of AI-generated findings, through tiered disclosure timelines, to external coordination with vendors, open-source maintainers, and other affected organizations. This role requires deep collaboration across Security Engineering, Legal, Communications, Product, and Frontier Red Team to ensure Anthropic operates as a responsible steward of the vulnerabilities its tools discover.
Responsibilities:
- Own end-to-end CVD program strategy and execution: Define and drive the roadmap for coordinated vulnerability disclosure, from AI-generated finding through maintainer notification, remediation tracking, and public disclosure. Ensure alignment with Anthropic’s security posture and RSP compliance requirements.
- Lead internal triage and quality assurance: Establish and manage the human review process that validates all AI-generated findings before external disclosure. Set minimum confidence thresholds, deduplicate against known CVEs, and ensure every report sent to a maintainer meets Anthropic’s quality bar.
- Design and operate tiered disclosure timelines: Implement severity-based disclosure windows with appropriate extension policies.
- Build and manage pacing and submission models: Develop rate-limiting frameworks that govern how many findings are submitted to each project, scaled to maintainer capacity and project size.
- Lead external coordination and partner engagement: Manage relationships with open-source maintainers and closed-source vendors. Serve as the primary point of contact for vulnerability coordination, including escalation when maintainers are unresponsive. Drive the phased rollout from initial trusted partners through broader open-source engagement.
- Establish program metrics and reporting: Define and track the metrics that determine program health, including fix rates, false-positive rates, median time-to-patch, and qualitative maintainer feedback. Use these metrics to inform decisions about program expansion, pacing adjustments, and policy updates.
- Drive response category classification: Manage the process for classifying findings into response categories (latent vulnerability, active exploitation, ecosystem-level pattern) and ensure the appropriate response protocol is triggered for each category.
- Lead cross-functional coordination: Manage stakeholder relationships across Security Engineering, Legal, Communications, Product, and Frontier Red Team to drive alignment and execution on disclosure initiatives. Ensure legal review of disclosure timelines and coordinate public communications around significant findings.
- Collaborate with senior leadership and executives: Communicate program vision, risks, and progress with executive presence. Influence strategic priorities and secure alignment across leadership teams, including the CISO and CTO organizations.
You May Be a Good Fit If You Have:
- 10+ years of experience in cybersecurity, vulnerability management, or security operations, including at least 4 years leading vulnerability disclosure, vulnerability management, or coordinated response programs
- Deep understanding of coordinated vulnerability disclosure processes, including experience working with CERT/CC, MITRE CVE, or similar coordination bodies
- Technical familiarity with vulnerability discovery tooling, static analysis, fuzzing infrastructure (e.g., OSS-Fuzz, CodeQL), and the triage workflows that turn raw findings into actionable reports
- Experience engaging directly with open-source maintainers and understanding the dynamics of open-source project governance, contributor capacity, and maintainer burnout
- Proven experience as a Technical Program Manager or similar role in a cybersecurity or technology-focused environment, with a track record of leading complex, cross-organizational programs to successful completion
- Executive communication skills with demonstrated ability to influence decisions at the senior leadership and C-suite level
- Ability to manage highly ambiguous problems and navigate challenges to achieve program objectives in a fast-paced, evolving environment
- Strong collaboration skills with proven ability to partner across diverse technical and non-technical stakeholders including Security Engineering, Legal, Communications, and Product teams
Strong Candidates May Also Have:
- Experience building vulnerability disclosure or coordinated response programs from the ground up in high-growth technology companies
- Background as a CVE Numbering Authority (CNA) operator, or experience managing the operational requirements of CVE issuance, embargo coordination, and formal vulnerability tracking
- Familiarity with AI/ML-powered security tooling and the unique challenges of managing AI-generated vulnerability reports at scale, including false-positive filtering and quality assurance
- Experience with vulnerability management platforms and tracking systems (e.g., HackerOne, Bugcrowd, or custom internal tooling)
- Prior work in security research, penetration testing, or red teaming that provides firsthand understanding of the vulnerability lifecycle from discovery through remediation
- Familiarity with compliance frameworks (SOC 2, ISO 27001, FedRAMP) and their intersection with vulnerability disclosure requirements
- Experience managing multi-stakeholder disclosure scenarios involving ecosystem-level vulnerabilities that affect multiple projects simultaneously
Deadline to Apply: None; applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
Annual Salary:
$290,000 - $405,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
