
Virologist, 24 Month Fixed Term Contract, UK

London, UK

Snapshot

This role is for a virologist with extensive wet lab experience to work on biology evaluations and mitigations for large language models (LLMs) within the Responsible Development & Innovation (ReDI) team at Google DeepMind. These are the evaluations that allow decision-makers to ensure that our model releases are safe and responsible. The role involves developing and maintaining these evaluations and the infrastructure that supports them.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

We are looking for a virologist with extensive lab experience to work as a Subject Matter Expert as part of the ReDI team. This role will involve creating and executing biology evaluations (akin to writing exams for students, except the exams are now for models), which are used to inform release decisions for our cutting-edge AI systems, as well as developing mitigations.

You will apply your knowledge and understanding of virology to devise evaluation methodology, contribute to building questions and scenarios, and run recurrent or goal-directed studies (e.g. red-teaming, knowledge elicitation studies). You will analyse the results from evaluations and communicate them clearly to advise and inform decision-makers on the safety of our AI systems. The evaluation results will also be used to refine our harm frameworks and inform our mitigation strategy.

In this role, you will work closely with other Subject Matter Experts (in chemistry, biology and nuclear physics), with Research Engineers and Research Scientists focused on developing AI systems, and with experts in AI ethics and policy.

Key responsibilities

  • Design, develop and execute biology evaluations to test the safety of cutting-edge AI models.
  • Clearly communicate results to relevant teams and decision-makers.
  • Collaborate with experts in various fields of science, AI ethics, policy and safety.
  • Influence harm frameworks and mitigation strategies.

About you

You are an experienced lab virologist with a keen interest in how biology and AI intersect. You are passionate and curious about the potential impact of AI on biology and science in general, energised both by the vast benefits these technologies offer and by the importance of proactively mitigating any associated risks.

In order to set you up for success in this role, we look for the following skills and experience:

  • PhD in virology (postdoctoral and/or clinical experience preferred but not essential) 
  • Knowledge of some of the following specialisms would be desirable: pathogenic viruses, bacteria, fungi and agriculture.
  • Good understanding of, and interest in, biosecurity and biosafety concepts and principles
  • Ability to think critically and creatively about potential misuse scenarios of emerging technologies
  • Ability to present technical results clearly
  • Passion for accelerating science using innovative technologies.
  • Knowledge of or experience using narrow science models (e.g. AlphaFold, Enformer, or other AI models used for specific scientific tasks)


In addition, some of the following would be an advantage:

  • Knowledge of the Biological Weapons Convention
  • Experience in clinical virology and working in BSL-2 labs (or higher)
  • Knowledge of challenges of biological weapons, potential mitigations and awareness of relevant stakeholders
  • Understanding of Safety Frameworks in AI
  • An interest in the ethics and safety of AI systems, and in AI policy
  • Experience contributing to dual-use novel science evaluations
  • Skill and interest in working on projects with many stakeholders

Deadline to apply: 6pm BST, 1st June 2025 

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


