Are you a DevOps engineer who can easily navigate the world of Big Data? If so, we have an excellent opportunity for you.
We are currently looking for a Lead DevOps Engineer (Big Data) to join our team and serve as a key resource for managing the Big Data platform environment. The primary focus is onboarding the platform into the environment and optimizing its settings, processes, and support.
Responsibilities
- Analyze requirements and deliver technical solutions
- Consult on infrastructure overviews and assist with cost estimations
- Collaborate with developers using a DevOps approach
- Build and operate critical, highly loaded systems
- Develop CI/CD approaches and tools
- Write infrastructure code and automation
Requirements
- 5+ years of Release / Deployment / Application Engineering experience
- Expertise in Cloud technologies (AWS)
- Exposure to Big Data systems – Hadoop ecosystem, Cassandra, Spark
- Hands-on experience with configuration management tools such as Puppet, Chef, and Ansible
- Knowledge of server-side and distributed software
- Exposure to Agile, Data DevOps, and ITIL processes and approaches
- Familiarity with Linux scripting
- Experience providing 3rd-line support for a large, critical system
Technologies
- Hadoop ecosystem
- Cloudera
- Hortonworks and other vendors
- Grid computing technologies