Your Role and Responsibilities
- Work in an Agile, collaborative environment to build, deploy, configure, and maintain Lakehouse (SaaS) on multiple hyperscalers.
- Work in an innovation-driven, collaborative environment to understand requirements and to architect, design, and implement features.
- Use continuous integration tools such as Jenkins and Artifactory to build automation pipelines that deploy the various service workloads for Lakehouse.
- Collaborate with multiple development teams to enable a continuous integration environment that sustains high productivity and emphasizes defect prevention techniques.
- Design and implement automation for the deployment, monitoring, logging, and alerting of large-scale Lakehouse environments.
Required Technical and Professional Expertise
- 5+ years of strong development experience with cloud technologies.
- Working knowledge of virtualization, containerization, container orchestration software (Kubernetes and OpenShift), and cloud platforms
- Working knowledge of Kubernetes cluster administration
- Working experience with cloud services (Amazon Web Services, IBM Cloud, Microsoft Azure, Google Cloud Platform)
- Working knowledge of the development and operation of fully managed SaaS services
- Working knowledge of cloud SaaS security
- Languages: Go, Python, Ruby
- CI/CD Tools: Jenkins, Artifactory
- Working knowledge of RDBMS, data warehouses, and data lakes
- Source Control Tools: Git, GitHub
- Excellent verbal and written communication skills with the ability to present complex technical information.
- Self-starter: organized, willing to learn, and able to solve problems independently.
- Growth mindset: willingness to learn new technologies and processes.
Preferred Technical and Professional Expertise
- Familiarity with Hive metastore and open data formats (Iceberg, Delta Lake, Hudi)
- Open-source data engines: Presto, Spark
- Experience with data governance
- Open-source software development