Senior Technical Program Manager, Security & Privacy
Snapshot
At Google DeepMind, we are building the future of artificial intelligence, and safeguarding our research, people, and products is a foundational mission. The Security & Privacy organization enables the entire company to pursue its ambitious goals responsibly.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
Google DeepMind is seeking a highly experienced and technical Senior Technical Program Manager to drive impact across research efforts in our security and privacy team. In this role, you will partner closely with our world-class researchers and engineers to translate research on distillation defenses into robust, scalable production realities for our industry-leading frontier models. Your work will be the driving force behind ensuring our most powerful AI innovations remain secure from unauthorized cloning while remaining accessible to legitimate users and developers.
Key responsibilities:
- Safeguard Model Intellectual Property: Support the end-to-end roadmap for anti-distillation defenses, partnering with product managers and researchers to operationalize novel techniques that detect and mitigate model extraction attacks at scale.
- Strategize and Drive Programs: Scope and drive complex, ambitious programs that span multiple teams, managing simultaneous projects while balancing immediate delivery needs with long-term strategic success.
- Orchestrate Execution & Visibility: Unite diverse teams for fast-paced execution while serving as the trusted owner of project status, proactively identifying dependencies and risks and ensuring clear progress visibility for all stakeholders.
- Bridge Technical and Strategic Worlds: Rapidly grasp complex AI modeling concepts and translate them into actionable program strategies, ensuring alignment between research breakthroughs and production engineering realities.
- Build and Influence: Cultivate strong relationships with key stakeholders, influencing actions and outcomes without direct authority to keep teams aligned.
- Communicate Effectively: Articulate complex technical concepts clearly and concisely, in writing and in person, to diverse audiences including executive leadership.
- Navigate Ambiguity & Drive Efficiency: Thrive in rapidly evolving research environments by adeptly adjusting plans as conditions change, while consistently driving process improvements and tooling efficiencies to streamline execution.
Qualifications
Minimum Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 8+ years of experience leading large-scale, highly complex technical programs, preferably across multiple geographies and time zones.
- Strong foundational understanding of cybersecurity and privacy concepts, such as attack surfaces, threat modeling, rate limiting, and abuse detection.
- Excellent communication skills, with experience managing complex stakeholder relationships and translating technical concepts for executive leadership.
- Demonstrated technical acumen with the ability to quickly grasp and deeply understand new technical domains.
Preferred Qualifications
- In-depth knowledge of AI/ML fundamentals (training, inference, fine-tuning) and specific familiarity with distillation prevention techniques (watermarking, adversarial defenses, API security).
- Track record of transitioning research prototypes into robust, globally scaled production defenses, ideally working directly with PhD-level researchers and engineering teams.
- Hands-on technical experience with large-scale data systems, and the ability to dive into data (e.g., SQL, Python) to independently validate research findings.
- Experience collaborating with cross-functional legal, policy, and trust & safety teams to land complex security initiatives.
- Familiarity with industry standards relevant to AI and security (e.g., OWASP Top 10 for LLMs, ISO/IEC 42001).
The US base salary range for this full-time position is $183,000 - $271,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Application deadline: November 24, 2025
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.