Hyderabad - Services

Backbase

(Services) Data Engineer

We opened the doors of our Business Development Centre (BDC) in the heart of Hyderabad in December 2020. Young at heart, the BDC is a wonderful space that strengthens Backbase’s presence in the APAC region. Having great talent and support on the ground also paves the way for long-term relationships with customers.

Nestled in the middle of Hyderabad, the state-of-the-art building sits among the offices of other global brands. The location offers stunning views of hills and lakes, while nearby you’ll find a selection of restaurants, traditional bazaars, and other cultural hotspots.


You design, build, and optimize large-scale data pipelines and platforms across cloud environments. You manage data integration from multiple business systems, ensuring high data quality, performance, and governance. You collaborate with cross-functional teams to deliver trusted, scalable, and secure data solutions that enable analytics, reporting, and decision-making.

What you'll do

  • Data Engineering: Design, build, and optimize scalable ETL/ELT pipelines using Azure Data Factory, Databricks, PySpark, and SQL (a short sketch follows this list);
  • Cloud Data Platforms: Manage and integrate data across Azure (Synapse, Data Lake, Event Hub, Key Vault) and GCP (BigQuery, Cloud Storage);
  • API Integration: Develop workflows for data ingestion and processing via REST APIs and web services, including integrations with BambooHR, Salesforce, and Oracle NetSuite;
  • Data Modeling & Warehousing: Build and maintain data models, warehouses, and lakehouse structures to support analytics and reporting needs;
  • Performance Optimization: Optimize Spark jobs, SQL queries, and pipeline execution for scalability, performance, and cost-efficiency;
  • Governance & Security: Ensure data privacy, security, and compliance while maintaining data lineage and cataloging practices;
  • Collaboration: Partner with business stakeholders, analysts, and PMO teams to deliver reliable data for reporting and operations;
  • Documentation: Create and maintain technical documentation for data processes, integrations, and pipeline workflows.
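To give a concrete flavour of the pipeline work above, here is a minimal PySpark sketch of an extract-transform-load step. It is an illustrative sketch only: the storage path, column names, and table name are invented placeholders rather than part of any actual Backbase pipeline, and writing in Delta format assumes a Databricks-style runtime.

    # Minimal PySpark ETL sketch. All paths, columns, and table names
    # are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: raw JSON events landed in a data lake container (hypothetical path).
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

    # Transform: drop malformed rows, parse timestamps, deduplicate on event id.
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .dropDuplicates(["event_id"])
    )

    # Load: append to a Delta table partitioned by event date.
    (clean.withColumn("event_date", F.to_date("event_ts"))
          .write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .saveAsTable("analytics.events"))

In a production setting, a step like this would typically be orchestrated by an Azure Data Factory or Databricks job rather than run by hand.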

Who you are

  • Education: Bachelor’s/Master’s degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent;
  • Experience: 5+ years of experience in Data Engineering and large-scale data migration projects;
  • Technical Skills: Proficient in SQL, Python, and PySpark for data processing and transformation;
  • Big Data & Cloud: Hands-on expertise with Apache Spark, Databricks, and Azure Data Services (ADF, Synapse, Data Lake, Event Hub, Key Vault);
  • GCP Knowledge: Exposure to Google Cloud Platform (BigQuery, Cloud Storage) and multi-cloud data workflows;
  • Integration Tools: Exposure to tools such as Workato for API-based data ingestion and automation (a hand-rolled sketch of this pattern follows this list);
  • Best Practices: Strong understanding of ETL/ELT development best practices and performance optimization;
  • Added Advantage: Certifications in Azure or GCP cloud platforms;
  • Domain Knowledge: Knowledge of Oracle NetSuite, BambooHR, and Salesforce data ingestion, as well as PMO data operations, is preferred;
  • Soft Skills: Strong problem-solving skills, effective communication, and ability to work both independently and in cross-functional teams while mentoring junior engineers.
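As an illustration of the API-based ingestion mentioned above, here is a minimal hand-rolled Python sketch that pages through a REST API and appends the results to a BigQuery table (a tool like Workato would normally handle this declaratively). The endpoint, pagination scheme, auth token, and table id are all hypothetical assumptions.

    # Minimal sketch: paginated REST ingestion into BigQuery.
    # Endpoint, pagination parameters, and table id are hypothetical.
    import requests
    from google.cloud import bigquery

    API_URL = "https://api.example.com/v1/employees"  # hypothetical endpoint
    TABLE_ID = "example-project.hr.employees_raw"     # hypothetical table

    def fetch_pages(url, token):
        """Yield records from a REST API assumed to use page/per_page params."""
        page = 1
        while True:
            resp = requests.get(
                url,
                headers={"Authorization": f"Bearer {token}"},
                params={"page": page, "per_page": 100},
                timeout=30,
            )
            resp.raise_for_status()
            records = resp.json().get("data", [])
            if not records:
                return
            yield from records
            page += 1

    def load_rows(rows):
        """Append rows to BigQuery, letting it auto-detect the schema."""
        client = bigquery.Client()
        config = bigquery.LoadJobConfig(
            autodetect=True, write_disposition="WRITE_APPEND"
        )
        client.load_table_from_json(rows, TABLE_ID, job_config=config).result()

    if __name__ == "__main__":
        load_rows(list(fetch_pages(API_URL, token="...")))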
Apply now
Join us today
