Research Scientist, Generative Worlds
Snapshot
Join an ambitious project to build generative models of the 3D world. World models power a wide range of domains, including creative applications, visual reasoning, simulation, planning for embodied agents, and real-time interactive experiences. The team is tightly integrated with Gemini, Genie, and Veo, and builds on those models while exploring new spatial modalities beyond images and video.
The Role
Key responsibilities:
- Conduct research to build generative multimodal models of the 3D world.
- Solve essential problems in training world models at massive scale: develop metrics for spatial intelligence, curate and annotate training data, enable real-time interactive experiences, explore downstream applications, and study the integration of spatial modalities with multimodal language models.
- Build and maintain large model systems and infrastructure to support research exploration.
- Embrace the bitter lesson and seek simple, effective methods that scale.
Areas of focus:
- 3D computer vision, spatial annotation systems
- Representations for spatial information
- Infrastructure for large-scale data pipelines and annotation
- Quantitative evals for spatial accuracy and intelligence
- Model scaling, efficiency, distillation, training infrastructure
About you
We seek individuals who are passionate about the intersection of large-scale generative models and spatial or 3D signals, and who believe that learning large-scale spatial information is a necessary part of the path to intelligence. We strive for simple methods that scale and look for candidates excited to improve models through infrastructure, data, evals, and compute.
To set you up for success as a Research Scientist/Engineer at Google DeepMind, we look for the following skills and experience:
- MSc or PhD in computer science or machine learning, or equivalent industry experience.
- Experience with large-scale transformer models and/or large-scale data pipelines.
- Track record of releases, publications, and/or open source projects relating to video generation, world models, multimodal language models, or transformer architectures.
- Exceptional engineering skills in Python and deep learning frameworks (e.g., JAX, TensorFlow, PyTorch), with a track record of building high-quality research prototypes and systems.
- Demonstrated experience in large-scale training of multimodal generative models.
- A keen eye for visual aesthetics and detail, coupled with a passion for creating high-quality, visually compelling generative content.
In addition, the following would be an advantage:
- Experience building training codebases for large-scale video or multimodal transformers.
- Expertise optimizing efficiency of distributed training systems and/or inference systems.
- Strong background in 3D representations or 3D computer vision.
- Strong publication record at top-tier machine learning, computer vision, and graphics conferences (e.g., NeurIPS, ICLR, ICML, SIGGRAPH, CVPR, ICCV).