Project description
Provide software engineering support for taking AI business cases from development through to production. The role involves building and analyzing scalable solutions that carry proofs of concept into production. This position requires a self-starter: someone self-sufficient, open-minded, and eager to learn.
Responsibilities
- Data Ingestion & Processing Architecture (see the sketch after this list)
  - Multiple high-velocity data sources, including:
    - Event Hub streams fed from Postgres
    - Data pushed to Snowflake
    - Additional events from various internal systems
  - Heavy use of:
    - Kubernetes for scalable processing
    - Databricks / Spark for distributed compute
    - Snowflake for storage and downstream analytics
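To make the above concrete, here is a minimal sketch of one leg of such a pipeline: a Databricks Structured Streaming job reading Postgres-fed events from Azure Event Hubs and appending each micro-batch to Snowflake via foreachBatch. Every name, schema field, and connection value below is an illustrative placeholder, not a project specific, and it assumes the Azure Event Hubs Spark connector and the Spark Snowflake connector are available on the cluster.

```python
# Sketch: Event Hub (fed from Postgres) -> Spark Structured Streaming -> Snowflake.
# All names, schemas, and connection values are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("pg-events-to-snowflake").getOrCreate()

# Hypothetical schema of the Postgres change events.
event_schema = StructType([
    StructField("id", StringType()),
    StructField("table", StringType()),
    StructField("op", StringType()),
    StructField("payload", StringType()),
    StructField("emitted_at", TimestampType()),
])

# The Event Hubs Spark connector expects an encrypted connection string.
conn = "Endpoint=sb://<namespace>.servicebus.windows.net/;..."  # placeholder
eh_conf = {
    "eventhubs.connectionString":
        spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn),
}

raw = spark.readStream.format("eventhubs").options(**eh_conf).load()

# Event Hub messages arrive as binary in the "body" column; parse them as JSON.
events = (
    raw.select(from_json(col("body").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Illustrative Snowflake options; a real job would pull secrets from a vault.
sf_options = {
    "sfUrl": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RAW",
    "sfWarehouse": "INGEST_WH",
}

def write_batch(batch_df, batch_id):
    # Append each micro-batch to Snowflake using the batch connector.
    (batch_df.write.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "PG_EVENTS")
        .mode("append")
        .save())

query = (
    events.writeStream
        .foreachBatch(write_batch)
        .option("checkpointLocation", "/mnt/checkpoints/pg_events")  # restart bookkeeping
        .start()
)
```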
Skills
Must have
- Backend Engineering:
  - Python (primary)
  - FastAPI (service/API development; see the sketch after this list)
  - "Full-stack" engineering without a UI (backend + data + infra only)
- Cloud & Containerization:
  - Azure (core cloud environment)
  - Docker
  - Kubernetes / AKS
  - Experience running high-scale workloads in K8s
- Data Engineering & Distributed Computing:
  - Spark / Databricks
  - Experience handling:
    - Very large datasets
    - Complex pipelines
    - High message volumes
    - Mixed batch + streaming data flows
  - Designing & maintaining table schemas
  - Working with Snowflake
- Database & Data Handling:
  - Strong SQL + data-manipulation skills
  - Experience integrating multiple data sources
  - Comfort navigating event-streaming ecosystems
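As a flavor of the backend work this list implies, here is a minimal FastAPI sketch: one ingest endpoint and one fetch endpoint over a stand-in data layer. The route names, the Event model, and the in-memory store are hypothetical; a real service would sit in front of Snowflake or another store.

```python
# Minimal FastAPI service sketch; run with: uvicorn app:app
# The Event model, routes, and in-memory store are illustrative stand-ins.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="events-api")

class Event(BaseModel):
    id: str
    source: str
    payload: dict

# Stand-in for a real data layer (e.g., Snowflake behind a connection pool).
_STORE: dict[str, Event] = {}

@app.post("/events", status_code=201)
def ingest(event: Event) -> Event:
    # Persist the event; here just an in-memory write for illustration.
    _STORE[event.id] = event
    return event

@app.get("/events/{event_id}")
def fetch(event_id: str) -> Event:
    if event_id not in _STORE:
        raise HTTPException(status_code=404, detail="event not found")
    return _STORE[event_id]
```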
Nice to have
- Hands-On LLM Integration:
  - Experience integrating LLMs into applications
  - Prompt design / optimization
  - RAG pipelines (see the sketch after this list)
  - Vector databases & embedding models
  - Model orchestration patterns
  - Security & compliance for AI systems
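For orientation, here is a toy sketch of the RAG pattern named above: embed documents, retrieve the nearest ones for a query, and splice them into a prompt. The bag-of-words "embedding" is a deliberately naive stand-in for a real embedding model and vector database, and call_llm is a hypothetical hook for whatever model the project uses.

```python
# Toy RAG sketch: naive embeddings + cosine retrieval + prompt assembly.
# The embedding, the "index", and the documents are illustrative stand-ins.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: token counts. Real pipelines use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

DOCS = [
    "Snowflake stores the curated event tables.",
    "Spark jobs on Databricks process Event Hub streams.",
    "AKS hosts the FastAPI services.",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]  # stand-in "vector database"

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(INDEX, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in a real pipeline: return call_llm(prompt)

print(answer("Where do the event streams get processed?"))
```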
- Model Monitoring & Optimization:
  - Prompt evaluation frameworks
  - Managing cost/performance tradeoffs
  - MLOps practices supporting AI workloads