We are looking for a highly skilled and experienced Data Software Engineer to join our team and advance the development of data-centric applications.
This role involves working with cutting-edge Big Data tools and cloud technologies, and collaborating across teams to deliver innovative solutions to complex business challenges.
Responsibilities
- Develop and refine data software applications used by Data Integration Engineers
- Build and deploy sophisticated analytical solutions with Spark, PySpark, NoSQL, and other Big Data tools
- Integrate AWS cloud services to improve and automate data workflows
- Support product and engineering teams by collecting insights and enabling better decisions
- Coordinate across architects, technical leads, and other teams to ensure cohesive implementations
- Analyze business needs and technical context to deliver appropriate solutions
- Conduct code reviews to maintain coding standards and promote best practices
- Validate and test deliverables against functional, technical, and performance requirements
- Maintain thorough project documentation for future reference and iteration
- Interact with clients to clarify needs and provide expert technical advice
Requirements
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a similar field
- 2+ years of experience in Data Software Engineering focused on Big Data technologies
- Comprehensive understanding of data engineering areas such as data management, storage, visualization, operations, and security
- Strong expertise in data ingestion pipelines, Data Warehousing, and Data Lakes
- Proficiency in Python, Java, Scala, or Kotlin for building data-centric software
- Extensive knowledge of both SQL and NoSQL databases
- Proven hands-on experience with Spark and PySpark
- Ability to implement and deploy data solutions on AWS, including AWS Glue and Amazon Redshift
- Familiarity with CI/CD pipelines and integration/deployment workflows
- Experience with containerization and orchestration using Docker and Kubernetes, and with resource management via YARN
- Practical knowledge of Databricks for advanced analytics and engineering workloads
- English proficiency at B2 (Upper-Intermediate) level or above in speaking and writing
Nice to have
- Familiarity with Hadoop, Hive, Flink, and other Big Data tools
- Knowledge of SDLC methodologies and the ability to implement and manage SDLC processes effectively, with hands-on Agile experience