We are seeking a Senior Data Engineer with deep expertise in Microsoft Fabric, PySpark and AI-driven data platforms.
This role focuses on designing, building and optimizing scalable data pipelines and analytics solutions, collaborating with architects, engineers and business stakeholders to deliver modern, AI-integrated data solutions.
Responsibilities
- Design, develop and maintain scalable data pipelines using Microsoft Fabric
- Implement data processing and transformation with Python, PySpark and SparkSQL
- Utilize OneLake with Delta Lake (open table format) for efficient data storage and analytics
- Develop and support solutions leveraging Cosmos DB (NoSQL API)
- Contribute to Fabric workloads such as Data Engineering, Data Factory (Dataflow Gen2) and Lakehouse
- Implement and maintain CI/CD pipelines following DevOps best practices
- Integrate data solutions with Power BI for reporting and analytics
- Collaborate with AI, data science and product teams to support AI-driven use cases
- Ensure data quality, performance, security and reliability
- Participate in Agile ceremonies and contribute to sprint delivery
- Support production issues and drive continuous improvements
Requirements
- 5+ years of experience in Data Engineering or related engineering roles
- Strong hands-on experience with Microsoft Fabric
- Proficiency in Python, PySpark and SparkSQL
- Experience with Cosmos DB (NoSQL API) and OneLake / Delta Lake (open table format concepts)
- Knowledge of Dataflow Gen2 and the Power Query M language
- Experience with CI/CD pipelines using Azure DevOps or equivalent
- Good understanding of Azure services and Power BI integration
- Strong problem-solving and analytical skills
- Ability to work independently on complex tasks
- Clear communication and collaboration skills
- Ownership mindset with attention to quality and performance
- Experience working in Agile or Scrum environments
- Upper-Intermediate English language proficiency (B2)
Nice to have
- Experience with code generation, including non-AI and AI-assisted approaches
- Expertise with other Cosmos DB APIs such as MongoDB, Cassandra or Table
- Exposure to Azure AI Foundry and Data Science workflows
- Strong background in Big Data and Spark ecosystems
- Knowledge of financial instruments and financial services data
- Hands-on experience with industry-standard LLMs such as GPT, Claude or similar