
Senior Data Engineer

Jerusalem, Israel

About the Role

We're looking for a Senior Data Engineer to join our Data Engineering team and help us build and scale our production-grade data platform. This is a hybrid role, with 3 days a week (Sunday, Monday, and Wednesday) required from our Jerusalem office (located in the Central Bus Station, next to the train station). You'll work on high-performance systems built on self-hosted ClickHouse, optimize complex data pipelines, and collaborate closely with Product, Analytics, and Infrastructure teams to deliver reliable, fast, and scalable data solutions.

This is a hands-on technical role where you'll have a significant impact on how we ingest, model, store, and serve data that powers our analytics and AI-driven products.
You’ll play a key role in shaping the direction of our data platform and have meaningful ownership over critical components of our architecture.

What You'll Do

Data Modeling & Architecture

  • Design and evolve data models that reflect business logic and support analytical use cases
  • Collaborate with the BI and Analytics teams to understand data requirements and translate them into efficient schemas

Performance Optimization

  • Optimize ClickHouse schemas, partitioning strategies, indexing, and compression (see the schema sketch after this list)
  • Profile and tune slow queries to improve performance and reduce costs
  • Implement systems that ensure data quality, consistency, and operational efficiency (e.g., deduplication, validation, anomaly detection)
  • Monitor pipeline health, data freshness, and query performance with appropriate alerting mechanisms
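
To make the schema work concrete, here is a minimal sketch of the kind of table-design decisions involved. The table and column names (events, shop_id, payload) are purely illustrative, not our actual schema:

    CREATE TABLE events
    (
        event_date  Date,
        shop_id     UInt64,
        event_type  LowCardinality(String),      -- dictionary encoding for low-cardinality values
        payload     String CODEC(ZSTD(3)),       -- heavier compression for large text columns
        created_at  DateTime CODEC(Delta, ZSTD)  -- delta-encode near-monotonic timestamps
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_date)            -- monthly partitions keep part counts manageable
    ORDER BY (shop_id, event_type, event_date);  -- sort key should match the dominant filter pattern

Choices like the sort key, partition granularity, and per-column codecs drive both query latency and storage cost, which is why schema design and query profiling go hand in hand in this role.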

SQL Compiler Development

  • Develop and maintain the SQL Compiler layer that translates high-level queries into optimized ClickHouse execution plans
  • Implement query optimization and rewriting strategies to improve performance (illustrated below)
  • Debug and resolve compiler issues to ensure accurate and efficient query translation
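
For a flavor of what such a rewrite can look like (a hypothetical example over an illustrative orders table, not our actual compiler's output): translating a latest-row-per-key query into a single aggregation pass that ClickHouse executes efficiently:

    -- High-level input: latest status per order, written as a correlated subquery
    SELECT *
    FROM orders AS o
    WHERE updated_at = (SELECT max(updated_at) FROM orders WHERE order_id = o.order_id);

    -- Compiled output: one pass with ClickHouse's argMax aggregate
    SELECT
        order_id,
        argMax(status, updated_at) AS status,  -- status value at the latest updated_at
        max(updated_at)            AS updated_at
    FROM orders
    GROUP BY order_id;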

Data Pipeline Development & Collaboration

  • Review and advise the Integration team on pipeline architecture, performance, and best practices
  • Provide guidance on data modeling, schema design, and optimization for new data sources
  • Troubleshoot and maintain existing pipelines when issues arise or optimization is needed
  • Ensure data freshness, reliability, and quality across all ingestion pipelines (see the example after this list)
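
For instance, a freshness check of the kind this role owns might look like the following (hypothetical raw_events table and a one-hour threshold, shown purely for illustration):

    -- Flag any source that hasn't delivered data in the last hour
    SELECT
        source,
        max(ingested_at)                             AS last_seen,
        dateDiff('second', max(ingested_at), now())  AS lag_seconds
    FROM raw_events
    GROUP BY source
    HAVING lag_seconds > 3600;  -- wire this into alerting to page on stale sources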

Collaboration & Support

  • Work closely with the Integration team to ensure smooth data ingestion from new sources
  • Partner with Infrastructure to support high availability and disaster recovery
  • Support other teams across the company in accessing and using data effectively

What We're Looking For

Required

  • Excellent communication and collaboration skills
  • High level of English, both written and spoken
  • Ability to work from our Jerusalem office (located in the Central Bus Station, next to the train station) 3 days a week (Sunday, Monday, and Wednesday)
  • Strong attention to detail, ownership mentality, and ability to work independently
  • Quick learner who can dive into new codebases, technologies, and systems independently
  • Hands-on mentality - not afraid to roll up your sleeves, dig into unfamiliar code, and work across the stack (including backend when needed)
  • 4+ years of experience as a Data Engineer
  • Strong problem-solving skills for complex data challenges at scale - ability to debug performance issues, data inconsistencies, and system bottlenecks in high-volume environments
  • Experience with data modeling and schema design for analytical workloads
  • Strong proficiency in SQL and experience with complex analytical queries
  • Hands-on experience building and maintaining data pipelines (ETL/ELT)
  • Ability to troubleshoot and optimize systems handling large data volumes (millions+ rows, complex queries, high throughput)
  • Knowledge of query optimization techniques and execution planning
  • Familiarity with columnar databases (ClickHouse, BigQuery, Redshift, Snowflake, or similar); hands-on columnar DB experience is a big plus

Nice to Have

  • Experience with ClickHouse specifically
  • Experience with real-time or streaming data pipelines
  • Familiarity with SQL compilers or query engines
  • Background in data quality frameworks and observability tools
  • Experience with infrastructure as code (Terraform, Ansible, Pulumi, etc.)
  • Contributions to open-source data projects
  • Experience with data orchestration tools (Airflow, Dagster, Prefect, etc.)
  • Experience with any scripting language for data processing
  • Understanding of distributed systems and data architecture concepts at scale

Plus

  • Experience working in e-commerce, analytics, or BI platforms
