AI Engineer & Researcher - Pre-training (Multi-modal)
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.
Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity.
We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important.
All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
About the Role
The pre-training team at xAI aims to build an omni model that can understand the universe through text, images, video, and audio. To accomplish this, we are looking for expert researchers and engineers in multi-modal pre-training.
Tech Stack
- Python
- JAX/XLA
- Rust / C++
- Spark
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to live in the Bay Area or be open to relocation.
Focus
- Driving research and engineering to advance pretraining models that can understand multi-modal data.
- Driving research on models across scales, from billion-parameter to trillion-parameter neural networks.
- Designing evaluations that capture a model's capabilities across multiple domains.
Ideal Experience
- A track record of leading research that significantly improves the multi-modal capabilities of neural networks, whether through better data or better modeling.
- Familiarity with state-of-the-art techniques for preparing multi-modal training data.
- An understanding of the scaling paradigm of pretraining and the ability to innovate new paradigms.
xAI is an equal opportunity employer.