AI Safety Research Lead
We are seeking an AI Safety Research Lead to join our team working on a novel AI safety research agenda. In this role, you will contribute to and help drive this agenda, from theoretical proposals through to prototypes validated by practical safety evaluations.
Key responsibilities
- Play an active role in advancing the Scientist AI research agenda by leading key research projects on short- and long-term objectives.
- Help shape the research agenda to maximise its impact on reducing catastrophic AI risks, such as loss of control, scheming and deception.
- Identify safety weaknesses in the existing agenda, propose improvements, and communicate them effectively to other team members.
- Help set research priorities for both conceptual and empirical work.
- Foster a shared understanding of core AI safety problems and objectives across the team.
Skills and competencies
- PhD in Computer Science or a relevant field.
- 4+ years of experience leading AI safety research projects involving frontier machine learning models, with a focus on alignment, practical evaluations, or theoretical guarantees.
- The ability to think critically about AI safety research agendas in general and how they address safety problems identified in the literature.
- Experience with ML frameworks like PyTorch or TensorFlow.
- Strong communication skills, both written and verbal, with the ability to explain complex ideas to diverse audiences.
- Track record of contributing to high-quality research in AI safety and machine learning.
- Ability to work collaboratively in a team environment.
What we offer
- The opportunity to contribute to a unique mission with a major impact.
- Comprehensive health benefits.
- A minimum of 20 days of vacation per year, available from your start date.
- A minimum retirement savings employer contribution of 4%.
- Generous flexible benefits designed to contribute to your well-being.
- A team of passionate experts in their field.
- A collaborative and inclusive work environment with offices in the heart of Little Italy, in the trendy Mile-Ex district, close to public transportation.
About LawZero
LawZero is a non-profit organization committed to advancing research and creating technical solutions that enable safe-by-design AI systems. Its scientific direction is based on new research and methods proposed by Professor Yoshua Bengio, the most cited AI researcher in the world. Based in Montreal, LawZero’s research aims to build non-agentic AI that could be used to accelerate scientific discovery, to provide oversight for agentic AI systems, and to advance the understanding of AI risks and how to avoid them. LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing. For more information, visit www.lawzero.org
You belong here
At LawZero, diversity is important to us. We value a work environment that is fair, open and respectful of differences. We welcome applications from highly qualified individuals interested in working towards our mission in a respectful, inclusive and collaborative setting.
Your personal information will be collected and processed by LawZero to evaluate your application for employment in compliance with our Privacy Policy. Under privacy laws in force in your country of residence, you may have several privacy rights, such as to request access to your personal information or to request that your personal information be rectified or erased. Details on how you can exercise your rights can be found in our Privacy Policy.
Apply for this job