We are looking for an experienced Data Integration Engineer specializing in Databricks to join our team. You will design and build enterprise data products and pipelines that deliver value to users and achieve business goals. You will collaborate with a range of stakeholders, applying your engineering skills to create engaging, effective solutions. Come join us and help deliver high-quality data outcomes.
Responsibilities
- Develop, maintain, and support ETL/ELT solutions
- Implement and manage Databricks pipelines and notebooks on Microsoft Azure or AWS cloud platforms
- Use SQL, Python, and PySpark or Scala to transform and analyze large datasets
- Work collaboratively with the team on Microsoft Power BI
- Ensure adherence to best practices in development, testing, and deployment
- Create technical documentation related to the project
Requirements
- Strong experience in data integration, with proficiency in Databricks and ETL/ELT solutions
- Strong SQL/Python skills for data transformation and analysis
- Experience with Delta Lake concepts (tables, partitions, schema evolution)
- Familiarity with cloud platforms (Azure, AWS)
- Hands-on experience developing data pipelines on Databricks, integrating data from multiple sources, and working closely with analytics, data science, and business teams to deliver reliable, high-quality data products
- Knowledge of data ingestion techniques and data integration best practices
- Excellent verbal and written communication skills in English
- Bachelor’s degree, ideally in Computer Science, Information Technology, or a related discipline
Nice to have
- Experience collaborating with international clients or counterparts, ideally as part of global teams