AI Ethics and Safety Policy Researcher
Snapshot
We are looking for an AI Ethics and Safety Policy Researcher to join our Responsible Development & Innovation (ReDI) team at Google DeepMind (GDM). In this role, you will be responsible for proactively identifying, researching, and addressing emerging AI ethics and safety challenges. Such risks relate to new AI capabilities and modalities, including but not limited to persuasion, social intelligence, personalisation, agentics, and robotics. You will conduct novel research and partner with internal and external experts to develop, adapt and implement practical guidelines and policies that mitigate emerging risks. These guidelines and policies will ensure that GDM develops and deploys its technology in a way that is aligned with the company's AI Principles.
About us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
As an AI Ethics and Safety Policy Researcher, your focus will be identifying, deeply understanding and mitigating emerging AI risks. You should expect your outputs to take various forms, depending on the topic or need. This may include: original research papers or other publications on emerging AI ethics and safety issues; ideal model behaviour policies that inform model development and steer evaluations; guidelines for research or governance teams to follow when developing or deploying technology; and artefacts, processes, or coordination mechanisms needed to best support the creation and implementation of those guidelines and policies at GDM and beyond.
Key responsibilities
- Systematically identify risks associated with emerging and proliferating AI capabilities
- Conduct original research on identified challenges, gathering information from a variety of sources, including external and internal experts, academic literature, and industry reports
- Design and build operational frameworks for mitigating model risks, converting them into standardized artefacts such as universal training datasets and evaluation protocols
- Collaborate with model development teams to help them adopt and apply these frameworks, guiding them in defining project-specific metrics and criteria for significant results
- Communicate findings and recommendations to stakeholders, including researchers, engineers, product managers, and executives
- Support teams across GDM in interpreting the frameworks and ensuring that training and evaluation data are applied as appropriate
- Work closely with relevant teams across the organisation to align and update the frameworks, ensuring their continued relevance in a rapidly changing environment
About you
In order to set you up for success in this role, we look for the following skills and experience:
- A PhD, or equivalent experience, in a relevant field, such as AI ethics or safety, computer science, social sciences, or public policy
- Proven expertise in AI ethics, AI policy or a related field
- Demonstrable track record of implementing policies
- Strong research and writing skills, evidenced by publications in top journals and conference proceedings
- Experience working within interdisciplinary teams
- Ability to communicate complex concepts and ideas simply for a range of collaborators
- Ability to think critically and creatively about complex ethical issues
The US base salary range for this full-time position is between $147,000 and $216,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.