Are you a self-starter with strong problem-solving skills, capable of owning and implementing solutions from start to finish? Do you enjoy tackling the challenge of solving complex Big Data analytics problems using state-of-the-art technologies?
EPAM is looking for a skilled Senior Palantir Data Engineer to join our growing global team of data engineering professionals. This is an exciting opportunity to work on cutting-edge Data Transformation projects that utilize Big Data and Machine Learning to solve emerging challenges in the Property & Casualty business domain.
If you’re eager to apply your expertise in Python/PySpark, SQL and Palantir to design and implement complex data pipelines while collaborating with a multicultural and dynamic team, we’d love to hear from you!
Responsibilities
- Lead the design and implementation of robust, large-scale data pipelines and analytics solutions
- Oversee the monitoring and optimization of data pipelines for performance and scalability using advanced tools and techniques, including Python/PySpark and SQL
- Optimize data workflows to support critical decision-making processes
- Harness state-of-the-art tools and technologies (including Palantir Foundry) to address new and emerging business challenges
- Partner with cross-functional and globally distributed teams (e.g., data scientists, analysts, business stakeholders) to align project goals and execution strategies
- Contribute to a global strategic initiative focused on enhancing the ability to make data-driven decisions across the Property & Casualty value chain
- Stay ahead of emerging technologies and trends (e.g., Generative AI, Machine Learning) and recommend potential applications in the data ecosystem
Requirements
- Bachelor’s degree (or equivalent) in Computer Science, Data Science or a related discipline
- 5+ years of experience working with large-scale distributed computing systems
- Proficiency in Python/PySpark to build and optimize complex data pipelines
- Hands-on experience working with Databricks for large-scale data processing and analytics
- Strong SQL skills (preferably Spark SQL) for data querying and manipulation
- Deep understanding of data warehousing concepts and ELT techniques
- Experience with Palantir Foundry is a must
- Familiarity with Agile and Scrum development methodologies
- Strong analytical and problem-solving skills
- Self-starter with a positive outlook and an eagerness to learn
- Self-direction and effective workload management
- Ability to work enthusiastically in a global and multicultural environment
- Strong interpersonal and communication skills, with clear written and verbal expression in complex contexts
Nice to have
- Knowledge of HTML, CSS, JavaScript and Gradle
- Experience in the insurance domain or the financial industry
- Familiarity with Microsoft Power BI
- Exposure to Machine Learning or Generative AI technologies