The primary business objective of this project is to modernize and scale the enterprise data platform by migrating the existing on-premises HDFS and Hive-based ecosystem to a cloud object storage solution.
This transition aims to reduce infrastructure maintenance overhead, improve scalability, and support advanced analytics through Apache Iceberg, which provides better query performance, versioned datasets, and native compatibility with modern data processing engines.
Essential functions
- Lead and mentor a team of data engineers, providing technical direction and career guidance.
- Define the target data platform architecture for migrating from on-prem HDFS/Hive to cloud object storage (e.g., AWS S3, Azure Data Lake Storage, or GCP Cloud Storage).
- Select and integrate cloud-based compute and query engines (e.g., Spark).
- Lead the design of ingestion, transformation, and storage patterns optimized for scalability, cost-efficiency, and performance in the cloud.
- Define security, encryption, and compliance controls for sensitive enterprise data in the cloud.
- Develop and own the migration roadmap, including phased transition from on-prem to cloud while minimizing business disruption.
- Oversee data migration strategies (bulk historical loads, incremental sync, and cutover).
- Define and enforce coding standards, CI/CD pipelines, and automated testing for data pipelines.
- Partner with Data Architects, Cloud Engineers, and Security teams to align platform design with enterprise standards.
Qualifications
- Proven experience leading data engineering teams, including distributed teams across multiple geographies and time zones.
- Effective in managing cross-team collaboration with architects, product managers, and operations.
- Knowledge of Scala and Python.
- Experience with Apache Spark (batch and streaming).
- Deep knowledge of HDFS internals and migration strategies.
- Experience with Apache Iceberg (or similar table formats like Delta Lake / Apache Hudi) for schema evolution, ACID transactions, and time travel.
- Running Spark and/or Flink jobs on Kubernetes (e.g., Spark-on-K8s operator, Flink-on-K8s).
- Experience with distributed object storage such as Ceph, AWS S3, or similar.
- Building ingestion, transformation, and enrichment pipelines for large-scale datasets.
- Infrastructure-as-Code (Terraform, Helm) for provisioning data infrastructure.
- Strong communication skills.
- Availability to join evening calls (until 21:00 EET).
Would be a plus
- Experience with Apache Flink.
We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
About us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI,
and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical
challenges and enable positive business outcomes for enterprise companies undergoing business transformation.
A key differentiator for Grid Dynamics is our 8 years of experience and leadership in
enterprise AI, supported by profound expertise and ongoing investment in
data, analytics, cloud & DevOps, application modernization, and customer experience.
Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.