Senior Data Engineer
About Clutch:
Clutch is Canada’s largest online used car retailer, delivering a seamless, hassle-free car-buying experience to drivers everywhere. Customers can browse hundreds of cars from the comfort of their home, get the right one delivered to their door, and enjoy peace of mind with our 10-Day Money-Back Guarantee… and that’s just the beginning.
Named one of Canada’s Top Growing Companies two years in a row and awarded a spot on LinkedIn’s Top Canadian Startups list, we’re looking to add curious, hard-working, and driven individuals to our growing team.
Headquartered in Toronto, Clutch was founded in 2017. Clutch is backed by a number of world-class investors, including Canaan, BrandProject, Real Ventures, D1 Capital, and Upper90. To learn more, visit clutch.ca.
What you'll do:
- Develop, test, and maintain data management solutions that support business goals and drive decision-making processes.
- Utilize ETL/ELT processes to manage complex data transformations for reporting and analytics, ensuring the flow of data between systems is smooth and efficient.
- Identify and resolve data quality issues, performing regular audits to maintain data accuracy and integrity across various sources.
- Optimize data integration processes to improve efficiency, reliability, and scalability of data pipelines.
- Design and implement data transformations to align with evolving business requirements, ensuring that data is structured to meet analytical needs.
- Support and implement modern data frameworks such as data lakes, data warehouses, and cloud-based architectures for enhanced business intelligence and analytics capabilities.
- Leverage DevOps tools (e.g., Git, GitHub Actions, Docker) for code versioning, deployment automation, and continuous integration and delivery of data pipelines.
- Monitor data pipelines in real time using tools such as Datadog, Prometheus, and Grafana to detect and prevent data integrity issues and ensure reliable data delivery across platforms.
- Document data definitions, processes, and solutions to establish clear data standards and facilitate communication across teams.
- Ensure data solutions adhere to security, scalability, and reliability requirements, following industry best practices and company policies.
What we're looking for:
- Bachelor’s or Master’s degree in Computer Science, Mathematics, Information Systems, or a related technical field.
- 3+ years of experience in data engineering, data architecture, or related fields.
- Strong programming skills in languages such as Python, TypeScript, and JavaScript.
- Advanced SQL knowledge, with experience in writing complex queries, optimizing database performance, and working with SQL-based systems (e.g., PostgreSQL, MySQL, SQL Server).
- Experience with modern database technologies, including relational databases (e.g., Oracle, PostgreSQL, AWS Aurora), NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB), and cloud-based data solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Experience with ETL/ELT tools, such as Apache Airflow, Talend, or Informatica.
- Experience with cloud platforms such as AWS, GCP, or Azure, and knowledge of their data services (e.g., AWS Glue, AWS S3, SageMaker, Azure Data Factory, GCP Dataflow); AWS preferred.
- Familiarity with data warehousing solutions and tools like Hadoop, Spark, or Kafka for real-time data streaming and big data processing.
- Understanding of DevOps methodologies, with experience using tools like Docker, Kubernetes, Datadog, Terraform, and GitHub Actions for managing data pipeline deployments.
Why you’ll love it at Clutch:
- Autonomy & ownership -- create your own path, and own your work
- Competitive compensation and equity incentives!
- Generous time off program
- Health & dental benefits
Clutch is committed to fostering an inclusive workplace where all individuals have an opportunity to succeed. If you require accommodation at any stage of the interview process, please email talent@clutch.ca.