
Senior Data Engineer

Gurgaon

WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients, and communities.

Working at WPP means being part of a global network of more than 100,000 talented people dedicated to doing extraordinary work for our clients. We operate in over 100 countries, with corporate headquarters in New York, London and Singapore.

WPP is a world leader in marketing services, with deep AI, data and technology capabilities, global presence and unrivalled creative talent. Our clients include many of the biggest companies and advertisers in the world, including approximately 300 of the Fortune Global 500.

Our people are the key to our success. We're committed to fostering a culture of creativity, belonging and continuous learning, attracting and developing the brightest talent, and providing exciting career opportunities that help our people grow. 

Why we're hiring:

We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. In this critical role, you will be instrumental in designing, building, and optimizing our scalable data lakehouse platform using Databricks. You will be a key player in developing robust data pipelines that ingest data from various sources, including Google Analytics 4 (GA4), and transform it into reliable, analysis-ready datasets within the Databricks environment.

This role requires deep expertise in Databricks, Apache Spark, Python (PySpark), and SQL. You will be responsible for the entire data lifecycle within the lakehouse, from ingestion and transformation to governance and optimization, ensuring data quality and performance. You should be adept at analyzing performance bottlenecks in Spark jobs, providing enhancement recommendations, and collaborating effectively with both technical and non-technical stakeholders.

What you'll be doing:

  • Design, build, and deploy robust ETL/ELT pipelines within the Databricks Lakehouse Platform using PySpark and Spark SQL.
  • Implement and manage the Medallion Architecture (Bronze, Silver, Gold layers) using Delta Lake to ensure data quality and progressive data refinement.
  • Leverage Databricks Auto Loader for efficient, scalable, and incremental ingestion of data from sources like GA4 into the Bronze layer (a minimal sketch of this flow follows the list).
  • Develop, schedule, and monitor complex, multi-task data workflows using Databricks Workflows.
  • Optimize Spark jobs and Delta Lake tables (using techniques like OPTIMIZE, Z-ORDER, and partitioning) for high performance and cost efficiency.
  • Implement data governance, security, and discovery using Unity Catalog, including managing access controls and data lineage.
  • Write complex, customized SQL queries to manipulate data and support ad-hoc analytical requests from business teams.
  • Develop strategies for data ingestion from multiple sources, using various techniques including streaming, API consumption, and replication.
  • Document data engineering processes, data models, and technical specifications for the Databricks platform.
  • Conform to agile development practices, including version control (Git), continuous integration/delivery (CI/CD), and test-driven development.
  • Provide production support for data pipelines, actively monitoring and resolving issues to ensure the continuous flow of critical data.
  • Collaborate with analytics and business teams to understand data requirements and deliver well-modelled, performant datasets in the Gold layer.
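For context, the Bronze-to-Silver pattern referenced in these responsibilities might look roughly like the following PySpark sketch. It assumes a Databricks notebook (where `spark` is the ambient SparkSession); the storage paths, table names, and the `event_id` de-duplication key are illustrative assumptions, not details of this role.

```python
# Minimal sketch of an Auto Loader -> Delta Lake (Medallion) flow.
# Paths, table names, and the `event_id` key are hypothetical.
from pyspark.sql import functions as F

# Bronze: incremental ingestion of raw GA4 export files with Auto Loader.
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/ga4_events")  # hypothetical path
    .load("/mnt/lake/raw/ga4_events")                                      # hypothetical path
    .withColumn("_ingested_at", F.current_timestamp())
)

query = (bronze_stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/bronze_ga4_events")
    .trigger(availableNow=True)
    .toTable("bronze.ga4_events"))
query.awaitTermination()  # availableNow runs as a finite batch, so wait for it

# Silver: de-duplicate and upsert into a curated Delta table with MERGE.
spark.sql("""
    MERGE INTO silver.ga4_events AS t
    USING (
        SELECT * FROM bronze.ga4_events
        QUALIFY ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY _ingested_at DESC) = 1
    ) AS s
    ON t.event_id = s.event_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Compact small files and co-locate rows on common filter columns.
spark.sql("OPTIMIZE silver.ga4_events ZORDER BY (event_date, user_pseudo_id)")
```

In practice, the ingestion, MERGE, and OPTIMIZE steps would typically run as scheduled tasks in a Databricks Workflow rather than ad hoc.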

What you'll need:

  • Education: A Bachelor's degree (or higher) in Computer Science, Engineering, Mathematics, or a related technical field is preferred.
  • Experience: 8+ years of relevant experience in data engineering, with a significant focus on building data pipelines on distributed systems.
  • Databricks Lakehouse Platform Expertise:
    • Apache Spark: Deep, hands-on experience with Spark architecture, writing and optimizing complex PySpark and Spark SQL jobs. Proven ability to use the Spark UI to diagnose and resolve performance bottlenecks.
    • Delta Lake: Mastery of Delta Lake for building reliable data pipelines. Proficient with ACID transactions, time travel, schema evolution, and DML operations (MERGE, UPDATE, DELETE).
    • Data Ingestion: Experience with modern ingestion tools, particularly Databricks Auto Loader and COPY INTO for scalable file processing.
    • Unity Catalog: Strong understanding of data governance concepts and practical experience implementing security, lineage, and discovery using Unity Catalog.
  • Core Engineering & Cloud Skills:
    • Programming: 5+ years of strong, hands-on experience in Python, with an emphasis on PySpark for large-scale data transformation.
    • SQL: 6+ years of advanced SQL experience, including complex joins, window functions, and CTEs.
    • Cloud Platforms: 5+ years working with a major cloud provider (Azure, AWS, or GCP), including expertise in cloud storage (ADLS Gen2, S3), security (IAM), and networking.
    • Data Modeling: Experience designing star schemas and applying data warehouse methodologies to build analytical models in the Gold layer (see the SQL sketch after this list).
    • CI/CD & DevOps: Hands-on experience with version control (Git) and CI/CD pipelines (e.g., GitHub Actions, Azure DevOps) for automating the deployment of Databricks assets.
  • Tools & Technologies:
    • Primary Data Platform: Databricks
    • Cloud Platforms: Azure (Preferred), GCP, AWS
    • Data Warehouses (Integration): Snowflake, Google BigQuery
    • Orchestration/Transformation: Databricks Workflows, dbt (data build tool)
    • Version Control: Git/GitHub or similar repositories
    • Infrastructure as Code (Bonus): Terraform
    • BI Tools (Bonus): Looker or Power BI
  • You're good at:
    • Working independently and proactively solving complex technical problems.
    • Collaborating positively within a team and partnering with remote members in different time zones.
    • Communicating complex technical concepts clearly to non-technical audiences.
    • Thriving in a fast-paced, service-oriented environment.
    • Working within an Agile methodology.
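As an illustration of the SQL and Gold-layer modelling skills listed above (CTEs, window functions, analytical models), a Gold-layer dataset might be built along these lines; all table and column names here are hypothetical.

```python
# Hedged sketch of a Gold-layer model: a CTE plus a window function turning a
# (hypothetical) silver.ga4_events table into an analysis-ready daily fact.
gold_daily_sessions = spark.sql("""
    WITH daily AS (
        SELECT
            user_pseudo_id,
            event_date,
            COUNT_IF(event_name = 'session_start') AS sessions,
            COUNT(*)                               AS events
        FROM silver.ga4_events
        GROUP BY user_pseudo_id, event_date
    )
    SELECT
        user_pseudo_id,
        event_date,
        sessions,
        events,
        SUM(sessions) OVER (
            PARTITION BY user_pseudo_id
            ORDER BY event_date
        ) AS cumulative_sessions
    FROM daily
""")

gold_daily_sessions.write.mode("overwrite").saveAsTable("gold.fct_daily_sessions")
```

A BI tool such as Looker or Power BI would then query the resulting Gold table directly.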

Who you are:

You're open: We are inclusive and collaborative; we encourage the free exchange of ideas; we respect and celebrate diverse views. We are open-minded: to new ideas, new partnerships, new ways of working.

You're optimistic: We believe in the power of creativity, technology and talent to create brighter futures for our people, our clients and our communities. We approach all that we do with conviction: to try the new and to seek the unexpected.

You're extraordinary: We are stronger together: through collaboration we achieve the amazing. We are creative leaders and pioneers of our industry; we provide the extraordinary every day.


What we'll give you:

Passionate, inspired people – We aim to create a culture in which people can do extraordinary work.

Scale and opportunity – We offer the opportunity to create, influence and complete projects at a scale that is unparalleled in the industry.

Challenging and stimulating work – Unique work and the opportunity to join a group of creative problem solvers. Are you up for the challenge?

#LI-Onsite

We believe the best work happens when we're together, fostering creativity, collaboration, and connection. That's why we’ve adopted a hybrid approach, with teams in the office around four days a week. If you require accommodations or flexibility, please discuss this with the hiring team during the interview process.

WPP is an equal opportunity employer and considers applicants for all positions without discrimination or regard to particular characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers.

Please read our Privacy Notice (https://www.wpp.com/en/careers/wpp-privacy-policy-for-recruitment) for more information on how we process the information you provide.
