Member of Technical Staff - Data Platform
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who enjoy challenging themselves and thrive on curiosity. We operate with a flat organizational structure: all employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and to share knowledge with their teammates concisely and accurately.
ABOUT THE ROLE:
The Data Platform team builds and operates the infrastructure responsible for all large-scale data transport and processing across the company. We own and manage core systems including Apache Kafka, HDFS, Spark, Flink, and Trino, enabling real-time ML pipelines, feed ranking, experimentation, analytics, and observability at petabyte scale. Our team deals with latency-critical workloads, high-throughput streaming, and distributed compute systems that require fault tolerance, performance, and absolute reliability.
As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X’s data movement and compute. You will take ownership of infrastructure components that process trillions of events daily, driving the scalability, performance, and reliability of the systems that power product and ML workloads across the company.
RESPONSIBILITIES:
- Design and implement high-throughput, low-latency data ingestion and transport systems.
- Scale and optimize multi-tenant Kafka infrastructure supporting real-time workloads.
- Extend and tune Spark, Flink, and Trino for demanding production pipelines.
- Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.
- Debug and optimize distributed systems, with a focus on reliability and performance under load.
- Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.
BASIC QUALIFICATIONS:
- Proven expertise in distributed systems, stream processing, or large-scale data platforms.
- Proficiency in Rust, Go, Scala, or similar systems languages.
- Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.
- Strong debugging, profiling, and performance optimization skills.
- Track record of shipping and maintaining critical infrastructure.
- Comfortable working in fast-moving, high-stakes environments with minimal guardrails.
COMPENSATION AND BENEFITS:
$180,000 - $440,000 USD
Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short & long-term disability insurance, life insurance, and various other discounts and perks.
xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.