Senior Data Engineer
A little more about the team:
As our very first Senior Data Engineer, you’ll have a unique opportunity to lay the foundation for Honeycomb’s data-driven future. Partnering directly with the Head of Data, you will architect and build a modern, scalable data platform that not only powers our business-critical insights but also sets the standard for data quality and reliability across the organization.
What you’ll do in the role:
- Own the Data Platform: Take full ownership of our Snowflake data warehouse, DBT models, and diverse ingestion platform. You’ll design and maintain end-to-end solutions that enable access to clean, accurate, and well-annotated data.
- Build Scalable Systems: Leverage modern technologies to create robust, production-grade data pipelines and models. Your work will enable rapid iteration and empower teams from R&D to Sales, Marketing, Finance, and beyond to make informed, data-driven decisions and have ownership over their data.
- Collaborate Across Functions: Work hand-in-hand with engineering, product, sales, marketing, and business stakeholders to translate complex needs into aligned data architectures and actionable insights. Your collaborative spirit will help bridge gaps and foster a culture of shared success.
- Drive Innovation and Quality: Establish best practices for data quality and reliability by setting meaningful SLO metrics and continuously refining our systems. You’ll have the autonomy to experiment with new technologies and approaches, driving innovation in a fast-paced, evolving environment.
- Lead with Impact: From planning and deployment to long-term maintenance, you’ll lead critical projects with a keen sense of ownership and strategic vision. Your ability to balance technical excellence with business value will be key to our next phase of growth.
If you are a seasoned data professional with a passion for creating scalable, robust data solutions and enjoy solving complex problems through innovative thinking, we’d love to have you help shape the future of Honeycomb. Join us, and be at the forefront of transforming our data capabilities while making a lasting impact across the entire organization.
What you’ll bring to the role:
- Extensive data development experience, including expert-level SQL and programming experience in a scripting language (preferably Python)
- Demonstrated experience with modern data tooling, including MPP data warehouses (e.g. Redshift or Snowflake (preferred)), DBT, and workflow automation tools (e.g. Airflow, Dagster, Prefect)
- Experience implementing structured data models, architectures, and marts (e.g. Inmon, Kimball)
- Experience collaborating with data analysts, data scientists and business users with varying levels of data savvy
- Comfortable working through ambiguous problems - this is our first DE hire, so there will be a fair amount of role shaping
Bonus / preferred experience:
- Experience with any of the following: Spark, Scala, Terraform, AWS/K8s, Debezium/Flink
- Experience managing production-grade data pipelines powering customer-facing applications
- Exposure to MLOps and supporting ML/AI teams’ data requirements
- Experience working with CRM, Martech and other GTM datasets and systems
The base pay range for this role is CAD $233,504 - $274,710, depending on experience
What you'll get when you join the Hive:
- A stake in our success - generous equity with an employee-friendly stock program
- It’s not about how strong of a negotiator you are - our pay is based on transparent levels relative to experience
- Time to recharge - Unlimited PTO and paid sabbatical
- A remote-first mindset and culture (really!)
- Home office, co-working, and internet stipend
- 100% coverage for employees and 75% for dependents across all benefits
- Up to 16 weeks of paid parental leave, regardless of path to parenthood
- Annual development allowance
- And much more...