Research Engineer, Agentic Safety
Snapshot
Accelerate research on strategic projects that enable trustworthy, robust and reliable agentic systems, alongside a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems.
About Us
We’re a dedicated scientific community, committed to “solving intelligence” and ensuring our technology is used for widespread public benefit.
We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.
The Role
As a Research Engineer in Strategic Initiatives, you will use your AI and software engineering expertise to collaborate with domain experts and other machine learning scientists within our strategic initiatives programs. Your primary focus will be on building technologies that make AI agents safer. AI agents are increasingly used in sensitive contexts with powerful capabilities: they can access personal data, confidential enterprise data and code, interact with third-party applications or websites, and write and execute code to fulfil user tasks. Ensuring that such agents are reliable, secure and trustworthy is a major scientific and engineering challenge with huge potential impact. In this role, you will serve this mission by building infrastructure, researching new approaches to agentic safety, building prototypes and demos, working with partner and client teams, and, most importantly, landing transformative impact for GDM, our product partners, and the AI ecosystem more broadly.
Key responsibilities:
- Develop frameworks to evaluate the safety, security and privacy of agentic AI systems at scale across key use cases at Google and GDM
- Work on agent orchestration prototypes combining multiple AI components to reliably solve complex tasks in nuanced scenarios
- Build leaderboards and evaluation metrics for the project to hill-climb on
- Integrate novel agentic technologies into research prototypes
- Work with product teams to gather research requirements and consult on the deployment of research-based solutions to help deliver value incrementally
- Amplify the impact by generalizing solutions into reusable libraries and frameworks for privacy-preserving AI agents across Google, and by sharing knowledge through design docs, open source, or external blog posts
About You
In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:
- Bachelor's degree in computer science, security or a related field, or equivalent practical experience.
- Passion for accelerating the development of secure agents using innovative technologies.
- Strong programming experience.
- Demonstrated record of implementing LLM pipelines in Python.
- Quantitative skills in maths and statistics.
- Experience with common scripting languages and pipelining tools.
In addition, the following would be an advantage:
- Experience in applying machine learning techniques to problems surrounding scalable, robust and trustworthy deployments of models.
- Experience with GenAI language models, programming languages, compilers, formal methods, and/or private storage solutions.
- Demonstrated success in creative problem solving for scalable teams and systems.
- A real passion for AI!