Key Responsibilities:
Design, develop, and maintain data pipelines using Snowflake, Spark, and AWS services
Architect and implement data warehousing solutions using Snowflake
Develop and deploy Spark applications for data processing and analytics
Utilize AWS services such as S3, Lambda, EKS, and Glue for data storage, processing, and orchestration
Implement infrastructure as code using Terraform for AWS resource management
Collaborate with data architects to understand data requirements and deliver data solutions
Ensure data quality, security, and compliance with organizational standards
Requirements:
8+ years of experience in data engineering
Strong expertise in Snowflake, Spark, and AWS services (S3, Lambda, EKS, Glue)
Familiarity with containerization technologies such as Docker and container orchestration platforms such as Kubernetes
Experience with Terraform (IaC) for infrastructure management
Proficiency in programming languages such as Python, Scala, or Java
Experience with data warehousing, ETL, and data pipelines
Strong understanding of data modeling, data governance, and data quality
Excellent problem-solving skills and attention to detail
Bachelor's degree in Engineering
Nice to Have:
Experience with cloud-based data platforms and tools
Knowledge of containerization using Docker
Familiarity with Agile development methodologies
Certification in AWS or Snowflake