Azure Data Engineer
Passionate team members, challenging projects and a great place to work! This is what you can expect if you join the Quisitive team. Founded in 2016, Quisitive is a global Microsoft services and solutions partner. We are a team of professionals with a strong reputation for successfully delivering award-winning Microsoft solutions, including recognition as Microsoft’s 2024 Analytics Partner of the Year. Our culture of continual learning and innovation ensures that we remain committed to Microsoft’s long-term strategy.
What do we attribute our award-winning success to? The people we hire, of course! Our team members join Quisitive for more than just a job. They come to Quisitive to contribute to something bigger than themselves – to be part of a high-performing culture, continue their infinite quest to learn, and deliver innovative and exciting solutions that impact both Quisitive's and our customers’ future success. Our leaders continuously strive to provide the tools and resources that you need to do what you do best each and every day!
It is a very exciting time of growth for our Data & AI practice, and we are currently hiring an Azure Data Engineer to provide technical excellence for the team.
This role can be based anywhere in the United States, though we prefer candidates located in the Atlanta metro area.
What will this role entail?
As an Azure Data Engineer, you will design, build, and maintain scalable data pipelines and analytics solutions on the Microsoft Azure platform. In this role, you will work with cutting-edge technologies – including Azure Databricks, Azure Synapse Analytics pipelines, and Microsoft Fabric – to empower our business with timely, accurate data. The ideal candidate is proficient in Azure data services, has a strong background in data engineering (ETL/ELT), and is excited about optimizing data systems from the ground up.
Key Responsibilities:
- Design & Develop Data Pipelines: Create end-to-end data pipelines using Azure tools (e.g., Azure Data Factory and Synapse Pipelines) to ingest, transform, and load data from various sources into data lakes, data warehouses, and other storage solutions
- Azure Databricks & Spark Processing: Utilize Azure Databricks (Spark) for big data processing tasks. Develop and optimize PySpark/Scala jobs to cleanse, transform, and aggregate large datasets in both batch and real-time streaming modes
- Data Warehousing & Modeling: Build and maintain data warehouses/lakehouses on Azure Synapse or Microsoft Fabric. Design star-schema or snowflake data models that enable efficient analytics and reporting, implementing best practices for performance and maintainability
- Optimize & Monitor Workloads: Tune queries, pipelines, and Spark jobs for performance and cost-efficiency. Monitor pipeline runs, data storage, and processing metrics to ensure reliability and address issues proactively (set up alerts, logging, and automation for failures/retries)
- Collaborate with Teams: Work closely with data analysts, BI developers, and data scientists to understand data needs. Provide clean, well-structured data sets that meet business requirements. Collaborate with cloud architects to ensure solutions are aligned with Azure best practices
- Implement Data Quality & Security: Ensure data quality through validation checks and testing. Implement data security measures such as encryption, access control (Azure RBAC), and data masking as needed. Maintain data governance standards and documentation for data pipelines and datasets
- Stay Current & Innovate: Continuously research and suggest new ways to improve existing data architecture. Stay up-to-date with the latest Azure data services (like Microsoft Fabric and other emerging technologies) and bring innovative ideas to optimize our data platform
What’s required to be successful in this role?
- 3+ years of experience in data engineering or BI development, with a focus on the Azure cloud data platform
- Azure Data Services Expertise: Hands-on experience with Azure Databricks, Azure Synapse Analytics (particularly Synapse pipelines and SQL pools), Azure Data Factory, and Azure Data Lake Storage. Familiarity with Microsoft Fabric or eagerness to learn it, as we are adopting Fabric for unified analytics
- Programming & DB Proficiency: Strong coding skills in Python (for data manipulation and scripting) and SQL (for querying and transforming data)
- ETL/ELT and Data Modeling: Proven ability to design and implement ETL/ELT workflows. Solid understanding of relational databases and data modeling concepts (normalization, dimensional modeling)
- Experience building data warehouses or data marts on Azure (SQL Database/Synapse)
What else would set you apart?
- Experience developing data processing scripts/notebooks in PySpark or Spark SQL
- Knowledge of Scala or .NET for data engineering
- Previous experience with a Microsoft systems integrator
- Microsoft Fabric Data Engineer Associate Certification
We are looking for curious initiative-takers to join our team, so if you are passionate about working with smart people who are committed to accomplishing great things, then apply today!
No third-party agency inquiries, please. We are unable to offer visa sponsorship at this time.
About Quisitive
With significant growth since 2016, Quisitive is rapidly achieving our vision of becoming the leading global Microsoft partner as we continue to expand across the United States, Canada and India. With a diversified delivery model that includes both nearshore and offshore capabilities, our team of Microsoft experts delivers cloud and artificial intelligence business solutions and services that ensure our customers achieve their digital transformation goals. In addition, Quisitive offers a portfolio of industry-focused solutions that address customer challenges in healthcare, manufacturing, state & local government and performance management.