We are seeking a skilled Senior Data Engineer with expertise in Scala, Spark and Databricks to join our team. In this role, you will lead the design, implementation and optimization of data pipelines, ensuring scalable and efficient data processing. Your work will enable the aggregation of commerce purchase and catalog data into a centralized data lake while contributing to innovative future projects.
Please note that for positions in Ukraine, we only consider candidates who are currently based in Ukraine or plan to return in the near future. Remote work is available only from Ukraine.
Responsibilities
- Develop and maintain codebases for ETL and ELT pipelines, large-scale batch/micro-batch processing and streaming systems
- Build infrastructure for optimal extraction, transformation and loading of data using Spark, Azure Data Factory (ADF) or similar technologies
- Identify and implement internal process improvements, including automation, data optimizations and infrastructure redesign for scalability
- Design and implement data service solutions leveraging technologies such as Spring Boot, ReactJS or NoSQL databases
- Ensure governance of delivery management and production processes, adhering to the selected delivery model
- Serve as a single point of accountability for delivery-related matters, including escalations, upsells and ramp-downs
- Lead technical delivery to ensure sound architecture and compliance with quality standards
- Define and document stories with associated acceptance criteria for agile workflows
- Coordinate with stakeholders and teams across various disciplines to ensure seamless project execution
- Deliver projects following customer processes, methodologies and agile approaches
- Establish ongoing delivery risk management strategies to support proactive decision-making across the project lifecycle
- Enhance delivery productivity through measurement and process improvements
- Provide expert consulting and guidance to Data Engineers to ensure quality deliverables
- Manage production support and deployment activities
Requirements
- At least 3 years of hands-on experience in data engineering roles
- Proficiency in Scala, Spark and Databricks
- Expertise in SQL, including writing and optimizing complex queries
- Familiarity with Spark Streaming and Apache Kafka
- Background in designing data pipelines using Databricks or similar tools
- Strong soft skills with excellent English proficiency (B2+) for direct client communication
Nice to have
- Deeper hands-on skills with Apache Kafka, Kafka Streams, PySpark and Spark Streaming, beyond the required familiarity
- Understanding of Python and its use in data engineering contexts
- Knowledge of the Azure cloud platform for building data solutions