We are looking for a highly skilled Senior Data DevOps Engineer with Azure expertise to join our team and support the implementation, delivery, and optimization of our data platforms. You will deploy scalable solutions, streamline workflows, and enable data-driven processes in collaboration with cross-functional teams.
Responsibilities
- Deploy and configure the Data Platform in Databricks based on the approved architecture and solution designs, ensuring environments are production-ready, secure, and scalable
- Work closely with cross-functional teams (Data Engineering, ML, Platform, QA) to design, implement, and evolve CI/CD pipelines and supporting tooling for data workflows and services
- Diagnose and resolve issues in build/deploy pipelines, data workflows, and production workloads; participate in root cause analysis and implement preventive fixes
- Develop, standardize, and maintain configuration management practices (infrastructure configuration, environment parameters, secrets, cluster policies) to ensure consistency across environments
- Produce and maintain clear technical documentation covering deployment guides, operational runbooks, pipeline logic, and platform configuration
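The configuration-management responsibility above can be illustrated with a minimal sketch: merging a shared base configuration with per-environment overrides so that dev and prod stay consistent by construction, with only the values that legitimately differ kept separate. The environment names, keys, and helper functions below are illustrative assumptions, not part of the role description or any specific platform API.

```python
from copy import deepcopy

# Base settings shared by every environment; keys are illustrative.
BASE_CONFIG = {
    "cluster": {"node_type": "Standard_DS3_v2", "autotermination_minutes": 30},
    "storage": {"container": "datalake"},
}

# Per-environment overrides: only the values that legitimately differ.
ENV_OVERRIDES = {
    "dev": {"cluster": {"num_workers": 1}},
    "prod": {"cluster": {"num_workers": 8, "autotermination_minutes": 120}},
}


def merge(base: dict, override: dict) -> dict:
    """Recursively apply override values on top of a base config."""
    merged = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def build_config(env: str) -> dict:
    """Resolve the full configuration for one named environment."""
    if env not in ENV_OVERRIDES:
        raise KeyError(f"unknown environment: {env}")
    return merge(BASE_CONFIG, ENV_OVERRIDES[env])
```

Keeping shared defaults in one place and promoting only explicit overrides between environments is one common way to achieve the consistency this responsibility calls for; in practice the resolved output would feed a deployment tool rather than stay in Python.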
Requirements
- 3+ years of experience in a Build Engineer, DevOps Engineer, Platform Engineer, or similar role supporting delivery and operations
- Strong hands-on experience with Databricks, Azure Data Factory, Azure DevOps, and the broader Microsoft Azure ecosystem, including DataOps practices such as automated data pipeline deployment, environment promotion, and governance
- Experience with MLOps (model deployment, monitoring, lifecycle automation for ML workloads) is considered an advantage
- Strong communication and collaboration skills, with the ability to work effectively across engineering, data, and operations teams
- English at B2 level or higher, with the ability to participate in technical discussions and produce documentation in English