Research Engineering Manager, Gemini Security & Privacy, UK
Snapshot
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
Our mission is to leverage our best-in-industry auto-red-teaming capabilities to unblock the strongest, most helpful agentic GenAI capabilities in the real world. Success here will make Gemini and other GenAI models as capable as highly experienced privacy and security engineers at handling sensitive user data and permissions. We have already delivered a substantial improvement in prompt-injection resilience in Gemini 2.5, and we continue to partner closely with other GDM teams to bring security and privacy into all aspects of Gemini post-training.
Key responsibilities:
As a Research Engineering Manager in the GDM Security & Privacy Research team, you will be responsible for:
- Identifying unsolved, impactful privacy & security problems present in generative models through auto-red teaming, with priorities guided by frontier agentic capabilities being developed in Gemini and other GenAI models.
- Building post-training data and tools hypothesised to improve model capabilities in the problem areas, testing the hypotheses through evaluations and auto-red teaming, and contributing successful solutions into Gemini and other models.
- Amplifying the impact by generalising solutions into reusable libraries and frameworks for protecting agents and models across Google, and by sharing knowledge through publications, open source, and education.
About you:
In order to set you up for success as a Research Engineering Manager at Google DeepMind, we look for the following skills and experience:
- B.S./M.S. in Computer Science or related quantitative field with 5+ years of relevant experience.
In addition, any of the following would be an advantage:
- Experience with JAX, PyTorch, or similar machine learning platforms.
- Demonstrated experience in Python through strong artifacts in building readable, scalable, reusable ML software.
- Demonstrated experience in adapting research outputs into impactful model improvements, in a rapidly shifting landscape and with a strong sense of ownership.
- Research experience and publications in ML security, privacy, safety, or alignment.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.