We are currently hiring a Big Data Architect for a hybrid role in Atlanta, GA
Essential functions
Data Architecture & Engineering
- Design and implement medallion architecture (bronze/raw, silver, and gold layers) to enable efficient data ingestion, processing, and quality management.
- Develop standardized ETL and streaming pipelines using Databricks, Apache Spark, and Apache Airflow, ensuring low-latency data processing.
- Define and enforce data quality and observability frameworks, integrating dashboards and monitoring tools to maintain high data integrity.
- Optimize data pipeline performance and infrastructure costs, identifying bottlenecks and areas for improvement.
Technical Leadership & Strategy
- Lead the technical discovery and ongoing development, assessing current systems, identifying pain points, and defining the target-state architecture.
- Provide technical recommendations and a roadmap for implementation, ensuring best practices in data engineering and architecture.
- Guide the selection and implementation of cloud-based data platforms to support scalability, efficiency, and future growth.
- Ensure compliance with security, governance, and regulatory requirements in data handling and processing.
Cross-Team Collaboration & Stakeholder Engagement
- Act as the technical point of contact between engineering teams, business stakeholders, and management.
- Work closely with team members to ensure smooth collaboration and knowledge transfer.
- Translate business requirements into technical solutions, ensuring alignment between data engineering practices and business objectives.
Project Delivery & Execution
- Define best practices, coding standards, and development workflows for data engineering teams.
- Ensure a smooth transition from discovery to implementation, providing hands-on guidance and technical oversight.
- Participate in planning and work closely with the Delivery Manager to manage timelines, priorities, and other program-related topics.
- Monitor and troubleshoot data pipeline performance, ensuring high availability and reliability of data systems.
Qualifications
- Cloud provider: AWS
- Programming language: Python
- Frameworks and technologies: AWS Glue, Apache Spark, Apache Kafka, Apache Airflow
- Experience working with on-premises infrastructure is a plus
- Databricks experience is a must
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
We offer
- Opportunity to work on cutting-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package - medical insurance, vision, dental, etc.
- Corporate social events
- Professional development opportunities
- Well-equipped office
About us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI,
and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical
challenges and enable positive business outcomes for enterprise companies undergoing business transformation.
A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience.
Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.