We are seeking a highly skilled Senior Data DevOps Engineer specializing in Azure to join our team and drive the deployment, configuration, and optimization of data platforms and workflows. This role combines deep technical expertise with cross-team collaboration to deliver scalable and secure data solutions.
Responsibilities
- Deploy and configure the Data Platform in Databricks based on the approved architecture and solution designs, ensuring environments are production-ready, secure, and scalable
- Work closely with cross-functional teams (Data Engineering, ML, Platform, QA) to design, implement, and evolve CI/CD pipelines and supporting tooling for data workflows and services
- Diagnose and resolve issues in build/deploy pipelines, data workflows, and production workloads; participate in root cause analysis and implement preventive fixes
- Develop, standardize, and maintain configuration management practices (infrastructure configuration, environment parameters, secrets, cluster policies) to ensure consistency across environments
- Produce and maintain clear technical documentation covering deployment guides, operational runbooks, pipeline logic, and platform configuration
Requirements
- 3+ years of experience in a Build Engineer, DevOps Engineer, Platform Engineer, or similar role supporting delivery and operations
- Strong hands-on experience with Databricks, Azure Data Factory, Azure DevOps, and the broader Microsoft Azure platform, including DataOps practices such as automated data pipeline deployment, environment promotion, and governance
- Strong communication and collaboration skills, with the ability to work effectively across engineering, data, and operations teams
- English at B2 level or higher, with the ability to participate in technical discussions and produce documentation in English
Nice to have
- Experience with MLOps (model deployment, monitoring, lifecycle automation for ML workloads)