(Services) Data Engineer

Hyderabad

As a Data Engineer
You design, build, and optimize large-scale data pipelines and platforms across cloud environments. You manage data integration from multiple business systems, ensuring high data quality, performance, and governance. You collaborate with cross-functional teams to deliver trusted, scalable, and secure data solutions that enable analytics, reporting, and decision-making.

Meet the job

 ● Data Engineering: Design, build, and optimize scalable ETL/ELT pipelines using Azure Data Factory, Databricks, PySpark, and SQL;
 ● Cloud Data Platforms: Manage and integrate data across Azure (Synapse, Data Lake, Event Hub, Key Vault) and GCP (BigQuery, Cloud Storage);
 ● API Integration: Develop workflows for data ingestion and processing via REST APIs and web services, including integrations with BambooHR, Salesforce, and Oracle NetSuite;
 ● Data Modeling & Warehousing: Build and maintain data models, warehouses, and lakehouse structures to support analytics and reporting needs;
 ● Performance Optimization: Optimize Spark jobs, SQL queries, and pipeline execution for scalability, performance, and cost-efficiency;
 ● Governance & Security: Ensure data privacy, security, and compliance while maintaining data lineage and cataloging practices;
 ● Collaboration: Partner with business stakeholders, analysts, and PMO teams to deliver reliable data for reporting and operations;
 ● Documentation: Create and maintain technical documentation for data processes, integrations, and pipeline workflows;

How about you

 ● Education: Bachelor’s/Master’s degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent;
 ● Experience: 5+ years of experience in Data Engineering and large-scale data migration projects;
 ● Technical Skills: Proficient in SQL, Python, and PySpark for data processing and transformation;
 ● Big Data & Cloud: Hands-on expertise with Apache Spark, Databricks, and Azure Data Services (ADF, Synapse, Data Lake, Event Hub, Key Vault);
 ● GCP Knowledge: Exposure to Google Cloud Platform (BigQuery, Cloud Storage) and multi-cloud data workflows;
 ● Integration Tools: Exposure to tools such as Workato for API-based data ingestion and automation;
 ● Best Practices: Strong understanding of ETL/ELT development best practices and performance optimization;
 ● Added Advantage: Certifications in Azure or GCP cloud platforms;
 ● Domain Knowledge: Preferable to have knowledge of Oracle NetSuite, BambooHR, Salesforce data ingestion, and PMO data operations;
 ● Soft Skills: Strong problem-solving skills, effective communication, and ability to work both independently and in cross-functional teams while mentoring junior engineers.


APAC Demographic questions

At Backbase, we recognise the importance of a diverse workforce and belonging. As part of our commitment to a more inclusive environment, we gather some demographic questions during the recruitment process.

Any demographic questions asked during the application process are voluntary and will not influence hiring decisions. Your responses are completely anonymous and solely used for internal diversity initiatives. Backbase is an equal opportunity employer and does not discriminate.

If you have any concerns or questions regarding the collection and use of demographic information, please feel free to contact our Human Resources department.
