Data Engineer
About Interval
Interval helps enterprises turn messy, underused data into governed, high-confidence intelligence—without handing control to a black box. We bring compute to your data with a private data lakehouse, verifiable audit trails, and U-AI, our contextual AI framework for secure AI workflows.
Our platform is built around three outcomes:
- Control: Keep ownership of your data and how models use it.
- Verify: Audit what happened, why it happened, and where results came from.
- Monetize: Create new revenue opportunities through private, permissioned data exchange.
Role Overview
We are seeking a highly skilled Data Engineer to join our team and transform how enterprises secure, analyze, and monetize their data—on their terms. As a Data Engineer at Interval, you'll build and optimize secure, scalable data pipelines and infrastructure that ensure privacy and compliance and enable AI-powered business transformation.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines for ingestion, transformation, and delivery of large datasets across diverse industries.
- Implement and ensure data privacy and security best practices, supporting data sovereignty and compliance with regulatory requirements.
- Collaborate closely with AI/ML engineers, Data Scientists, and Platform engineers to enable advanced analytics and AI capabilities while retaining strict data control.
- Optimize data platforms and systems for performance, reliability, and cost efficiency.
- Build tools and frameworks for secure, privacy-preserving data processing and orchestration.
- Develop and maintain documentation, data models, and technical workflows.
- Partner with cross-functional teams to launch new data-driven product features and solutions.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- Proven experience in designing and building ETL pipelines and data infrastructure (cloud, hybrid, and/or on-premise).
- Strong proficiency with Python, SQL, and modern data engineering toolsets (e.g., Apache Spark, Kafka).
- Solid understanding of data security, privacy frameworks, and regulatory compliance such as GDPR, CCPA, or equivalent.
- Experience with privacy-first, AI-native, or data sovereignty-focused platforms is a plus.
- Familiarity with industry-specific data challenges (CPG, financial services, energy, supply chain, etc.) is advantageous.
- Excellent analytical and communication skills; proactive and detail-oriented.
Why Interval?
- Shape the frontier of AI, blockchain, and enterprise data infrastructure.
- Enjoy meaningful ownership, flexible work, and autonomy at the intersection of data, AI, and privacy.
- Thrive in a sharp, mission-driven team backed by top-tier technical leadership and investors.
Apply for this job