Research Scientist/Engineer, Counter Abuse Specialist, Model Threat Defense
Snapshot
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
Model distillation is a key technique for accelerating AI: it turns large, general-purpose models into the small, specialized models used across the industry. However, the same techniques can be used to steal critical model capabilities, representing a significant threat to the intellectual property and integrity of our foundation models.
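For context, distillation typically trains a small "student" model to match the temperature-softened output distribution of a larger "teacher"; that same signal is what an attacker can harvest by querying a model's API at scale. The sketch below is a minimal illustration of the standard distillation objective, using NumPy and made-up logits, not a description of any production system:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    Minimizing this loss trains the student to imitate the teacher's full
    output distribution -- the capability-extraction risk described above.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Hypothetical logits over a four-token vocabulary.
teacher = np.array([4.0, 1.5, 0.2, -1.0])
student = np.array([2.0, 2.0, 0.5, -0.5])
print(f"distillation loss: {distillation_loss(teacher, student):.4f}")
```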
The Role
As a Counter Abuse Specialist on the Model Threat Defense team, you will be at the forefront of defending our models against sophisticated abuse. You will combine expertise in abuse data science and adversarial ML to identify threats and operationalize defenses. Working closely with cross-functional partners, you will drive the response to large-scale actors and refine our security defenses.
Key Responsibilities
- Work with cross-functional teams to build and improve approaches to distillation attack detection and mitigation.
- Leverage data science to identify emerging attacks and trends, operationalize new detection strategies, and inform the development of robust countermeasures.
- Coordinate responses to attackers and abuse campaigns.
About You
We are looking for a Data Scientist, Software Engineer, or Research Scientist who is passionate about protecting the future of AI. You have a strong background in counter-abuse systems, adversarial machine learning, and data science. You thrive in ambiguity and are skilled at bridging the gap between technical analysis and operational response.
Minimum qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related technical field, or equivalent practical experience.
- Experience in counter-abuse data science, adversarial machine learning, or building counter-abuse systems.
- Experience working with cross-functional teams (e.g., Trust & Safety) to implement security or abuse solutions.
Preferred qualifications:
- Master's or PhD in a related quantitative field.
- Deep expertise in analyzing and mitigating abuse in large-scale systems.
- Industry experience in threat intelligence, OSINT, mission assurance, and/or related fields.
- Strong understanding of model distillation, model stealing, and other capability extraction techniques.
- Experience with ML frameworks and data analysis tools.
- A track record of coordinating responses to active security incidents or large-scale attacks.
The US base salary range for this full-time position is between $166,000 and $244,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.