Research Scientist, Gemini Safety
Snapshot
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Gemini Safety team is responsible for the safety and fairness behavior of GDM’s latest Gemini models. As a Research Scientist / Research Engineer, you will develop and apply cutting-edge data and algorithmic solutions to advance GDM’s latest user-facing models. The work is fast-paced and highly collaborative, and the team has a strong culture of support and dedication.
The Role
We’re looking for a versatile Research Scientist who is equally at ease figuring out how to approach new research questions and implementing research ideas. Our team focuses on advancing the safety and fairness behavior of state-of-the-art AI models. We drive the development of the foundational technology adopted by numerous product areas, including the Gemini App, the Cloud API, and Search.
Key responsibilities:
- Post-training / instruction tuning of state-of-the-art LLMs, focusing on text-to-text and image/video/audio-to-text modalities and on agentic capabilities
- Exploring data, reasoning, and algorithmic solutions to ensure Gemini models are safe, maximally helpful, and work for everyone
- Improving Gemini’s adversarial robustness, with a focus on high-stakes abuse risks
- Designing and maintaining high-quality evaluation protocols to assess model behavior gaps and headroom related to safety and fairness
- Developing and executing experimental plans to address known gaps, or constructing entirely new capabilities
- Driving innovation and deepening understanding of supervised fine-tuning and reinforcement learning fine-tuning at scale
About You
To set you up for success as a Research Scientist on the Gemini Safety team, we look for the following skills and experience:
- PhD in Computer Science, a related field, or equivalent practical experience.
- Significant LLM post-training experience
In addition, the following would be an advantage:
- Experience with reward modeling and reinforcement learning for LLM instruction tuning
- Experience with long-range reinforcement learning
- Experience in areas such as Safety, Fairness and Alignment
- Track record of publications at NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI, UAI
- Experience taking research from concept to product
- Experience collaborating on or leading an applied research project
- Experience with JAX
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.