
Bauer / Cascade Maverik: Data Engineer
Do you have what it takes to win?
Like a championship team, a leading global sports brand is built on a solid foundation of players at all levels who have an unending desire and dedication not only to succeed, but also to win. At Peak Achievement Athletics, our championship team is deeply committed to developing the most innovative sports equipment in the industry, and we are always looking to strengthen our roster with talented players.
Want to join our team as a Data Engineer?
The Data Engineer will help design, build, and maintain our data lake platform from the ground up. This role will be central to shaping our data architecture, integrating various data sources, and creating a robust foundation to support analytics, AI, and machine learning capabilities. As a critical member of our data engineering team, this role will define and implement our data lake strategy to support enterprise-wide data needs. The ideal candidate will collaborate closely with stakeholders across departments to gather requirements and develop solutions aligned with business goals. This position also offers the opportunity to contribute to our data governance framework, establishing policies and standards that will support a growing data-driven culture.
Essential Job Functions & Responsibilities:
- Drive the design, development, and maintenance of a scalable data platform on AWS, enabling ingestion of high-volume data from APIs, IoT sensors, SAP, and external partner systems (including email, FTP, EDI). This platform will serve as the foundation for downstream analytics, API integrations, data warehouses, and advanced data science initiatives, with a focus on automation and operational efficiency.
- Create resilient ETL/ELT pipelines using technologies such as Apache Spark, AWS Glue, and Python, with a focus on efficient data ingestion, transformation, and integration in both real-time and batch processing modes.
- Manage data quality issues and set appropriate data quality monitors and alerts to help prevent future incidents.
- Partner closely with business stakeholders to understand data requirements and use cases, translating them into robust data solutions that support analytics, machine learning, and operational insights.
- Contribute to the development and implementation of data governance practices. Define standards and best practices for data quality, security, and access control, helping to drive our company’s data strategy and governance roadmap.
- Assess and recommend appropriate tools, platforms, and processes (e.g., AWS services, orchestration tools like Airflow) to improve data ingestion, processing, and storage capabilities. Provide strategic input into the selection of tools and services to optimize the data architecture.
- Implement systems and datasets using software engineering best practices, data management fundamentals, and operational excellence principles.
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent years of relevant practical experience.
- Minimum 3 years of data engineering experience, with hands-on experience implementing data lakes and data infrastructure, ideally on AWS.
- Demonstrated strong data modeling skills and experience in data warehousing and building ETL pipelines.
- Familiarity with analytics platforms (e.g., QuickSight, Power BI, Tableau) and proficiency in designing data models for analysis.
- Proficiency in general-purpose programming languages (e.g., Python) and declarative languages (e.g., SQL).
- Understanding of processing and modeling large, complex data sets for analytical use cases with database technologies such as AWS Redshift, Aurora, Vertica, Teradata or equivalent.
- Familiarity with big data technologies such as Snowflake, Databricks, BigQuery, Cloudera, or EMR.
- Experience with an ETL tool such as Informatica, Talend, DataStage, or dbt.
- Experience with data ingestion tools such as AWS Kinesis and Apache Kafka.
- Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
- Familiarity with a variety of data formats such as Parquet, Avro, JSON, XML, and best practices in handling them.
- Strong understanding of data management principles like governance, quality, accessibility, security, integration, and usability.
- Demonstrated ability to tactically deliver functional products based on gathered business requirements.
- Ability to communicate effectively and work independently with little supervision to deliver quality products on time.
- Ability to work in a fast-paced, dynamic environment.
- Ability to adjust quickly to changing priorities and business needs.
- Willingness to travel as needed for essential job functions.
Interested yet? Good. We are, too. We're pretty sure you’ll want to know this position is eligible to participate in the Company’s annual incentive plan. We also offer one of the most generous benefits packages around, including a retirement savings plan with employer match, an employee discount program on apparel and gear, a casual, hybrid work environment, and a host of other perks we don't have room to mention here.
We are committed to employing a diverse workforce and are an equal opportunity employer.