Staff Data Engineer
We are looking to hire an exceptional Staff Data Engineer. In this role, you will leverage your expertise in ETL development, data pipeline optimization, and data warehousing to ensure seamless data accessibility for business intelligence and analytics teams. You will work closely with business and technical stakeholders to design, build, and maintain scalable data pipelines and workflows, enabling data-driven decision-making across the organization. Additionally, you will be responsible for optimizing data warehouse performance, automating data refresh schedules, and ensuring data integrity and reliability.
You will...
- Design, develop, and maintain scalable ETL pipelines to support data integration and analytics needs.
- Build, optimize, and manage data pipelines and workflows to ensure efficient data movement and transformation.
- Monitor and troubleshoot data pipeline performance, ensuring data reliability and accuracy.
- Maintain and optimize data warehouse architecture, ensuring efficient storage and data retrieval.
- Work with large, complex datasets to support analytical and business intelligence needs.
- Define, execute, and optimize SQL queries for data transformation and extraction.
- Collaborate with Business Intelligence, Data Analytics, and Engineering teams to ensure data accessibility and performance.
- Automate data ingestion, processing, and refresh schedules to maintain up-to-date datasets.
- Implement data governance, security, and compliance best practices.
- Continuously evaluate and adopt new technologies to improve data infrastructure.
You have...
- 4+ years of experience in ETL development, data pipeline engineering, or data warehouse management.
- Strong proficiency in SQL (PostgreSQL, MySQL, or similar) for data manipulation and optimization.
- Experience with data pipeline tools (e.g., Apache Airflow, AWS Glue, dbt, or similar).
- Hands-on experience with cloud-based data platforms (e.g., AWS Redshift, Snowflake, BigQuery, or Azure Synapse).
- Knowledge of data modeling, data warehousing concepts, and performance tuning.
- Experience working with structured and semi-structured data formats (JSON, Parquet, Avro, etc.).
- Proficiency in Python or another scripting language for data automation and transformation.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills and ability to work with both technical and non-technical stakeholders.
- Experience working in Agile development practices and delivering iterative solutions.
Nice to have...
- Deep experience with workflow orchestration tools like Apache Airflow or Prefect.
- Familiarity with real-time data streaming (Kafka, Kinesis, or similar).
- Knowledge of NoSQL databases (MongoDB, DynamoDB, etc.).
- Experience with CI/CD pipelines for data engineering workflows.
- Certifications in AWS, Azure, GCP, or relevant data engineering technologies.
By clicking the “APPLY NOW” button and submitting your job application, you agree that you have reviewed the complete Privacy Notice for Employees, Independent Contractors and Job Applicants, which explains the categories of personal information we collect about you, the purposes for which those categories of personal information shall be used, and your rights with respect to our use of such personal information.