Nuclear Engineer (2-year Fixed-Term Contract)
Snapshot
This role is for a nuclear engineer to work on radiological and nuclear safety evaluations and mitigations for GDM’s technology within the Responsible Development & Innovation team (ReDI) at Google DeepMind. These are the evaluations that allow decision-makers to ensure that our model releases are safe and responsible. The role involves developing and maintaining these evaluations and the infrastructure that supports them.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
We are looking for a nuclear engineer to work as a Subject-Matter-Expert as part of the ReDI team. This role will involve creating and executing radiological and nuclear safety evaluations (akin to writing exams for students, but the exams are now for models), which are used to make release decisions for our cutting-edge AI systems, as well as developing mitigations.
You will apply your knowledge and understanding of nuclear engineering to devise evaluation methodology, contribute to building questions and scenarios, and run recurrent or goal-directed studies (e.g. red-teaming, capability elicitation studies). You will analyse the results from evaluations and communicate them clearly to advise and inform decision-makers on the safety of our AI systems. The evaluation results will also be used to refine our harm frameworks and inform our mitigation strategy.
In this role, you will work closely with other Subject-Matter-Experts (in chemistry, biology and nuclear physics), as well as Research Engineers and CBRN strategists focused on developing AI systems, and with experts in AI ethics and policy.
Key Responsibilities:
- Design, develop and execute radiological and nuclear evaluations to test the safety of cutting-edge AI models.
- Clearly communicate results to relevant teams and decision-makers.
- Collaborate with experts in various fields of science, AI ethics, policy and safety.
- Influence harm frameworks and mitigation strategies.
About You
You are an experienced nuclear engineer with a keen interest in how your field intersects with AI. You are passionate and curious about the potential impact of AI on science, energised by both the vast benefits these technologies offer and the importance of working to proactively mitigate any associated risks.
In order to set you up for success in this role, we look for the following skills and experience:
- PhD in Nuclear Engineering (postdoctoral experience preferred but not essential), or
- PhD in nuclear and particle physics with NNSA and/or weapons lab experience, or
- PhD in radiation physics with NNSA and/or weapons lab experience (postdoctoral experience preferred but not essential)
- A demonstrated history in some of the following specialisms would be desirable:
- Radioactive material handling, radiation transport modeling, criticality safety analysis
- Reactor systems, fissile material production/characterization, explosives coupling
- Nuclear physics, radiation physics
- Nuclear security, arms control, nonproliferation
- Ability to think critically and creatively about potential misuse scenarios of emerging technologies
- Ability to present technical results clearly
- Passion for accelerating science using innovative technologies.
- Knowledge of or experience using narrow science models (e.g. AlphaFold, Enformer, or other AI models used for specific scientific tasks)
In addition, some of the following would be an advantage:
- Knowledge of or experience with IAEA, NRC, or DOE regulations
- Practical experience at LANL, LLNL, or Sandia
- Understanding of Safety Frameworks in AI
- An interest in the ethics and safety of AI systems, and in AI policy
- Experience contributing to dual-use novel science evaluations
- Skill and interest in working on projects with many stakeholders internal to GDM and externally across the scientific community.
The US base salary range for this full-time position is between $197,000 and $291,000 + bonus + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.