Design, develop, test, deploy, and maintain data pipelines in the Python ecosystem.
Connect with US stakeholders to understand requirements and create project plans
Design end-to-end solutions for projects, with supporting documentation
Experience with Jira, Confluence, Kanban, and data modeling tools.
Write, optimize, and troubleshoot complex Snowpark and SQL code (see the Snowpark sketch after this list).
Integrate Snowflake with various data sources and third-party tools such as Alteryx and Airflow (see the Airflow sketch after this list)
Utilize AWS services such as S3 for data storage and Lambda for serverless functions (see the Lambda sketch after this list)
Design and implement data models for efficient data storage and retrieval
Ensure data security and compliance with industry standards
Collaborate with data engineers, analysts, and other stakeholders to understand data requirements and deliver solutions
Mentor and guide junior developers across projects.
Develop basic Tableau dashboards.
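To illustrate the Snowpark responsibility above, a minimal sketch of a Snowpark transformation; the connection parameters are placeholders and the RAW_ORDERS table and its columns are hypothetical:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection parameters -- in practice these come from a
# secrets manager, never hard-coded.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# Aggregate completed order amounts per customer and persist the result.
orders = session.table("RAW_ORDERS")
totals = (
    orders.filter(col("STATUS") == "COMPLETE")
    .group_by(col("CUSTOMER_ID"))
    .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
totals.write.save_as_table("CUSTOMER_TOTALS", mode="overwrite")
```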
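For the Airflow integration point, a minimal DAG sketch (Airflow 2.x assumed; the DAG id, task names, and callables are hypothetical stubs):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3():
    """Stage source data in S3 (placeholder)."""

def load_into_snowflake():
    """Copy staged files into Snowflake (placeholder)."""

# A daily two-step pipeline: extract, then load.
with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` is the Airflow 2.4+ spelling
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_into_snowflake", python_callable=load_into_snowflake)
    extract >> load  # run extract before load
```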
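And for the S3/Lambda responsibility, a minimal handler sketch triggered by an S3 object-created event; the processing step is a stub:

```python
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # An S3 event can carry several records, one per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Placeholder transformation -- real logic would validate and load.
        print(f"Processed {key} from {bucket}: {len(body)} bytes")
    return {"statusCode": 200, "body": json.dumps("ok")}
```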
Required Skills, Qualifications, and Capabilities:
Experience with data architecture and the implementation of large-scale, enterprise-level data lake/data warehousing, big data, and analytics applications.
A data and analytics professional who has led multiple Data & Analytics engagements spanning solutioning, architecture, and delivery.
Excellent client interaction and presentation skills
Expertise in SQL, Python, and Spark (see the PySpark sketch after this list)
Experience with cloud technologies like AWS S3 and Lambda
Experience with data integration tools like Alteryx or Airflow
Excellent problem-solving and analytical skills
Ability to work independently and as part of a team
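As an illustration of the SQL/Python/Spark expectation, a minimal PySpark sketch expressing the same aggregation through both the DataFrame API and SQL; orders.csv and its columns are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-demo").getOrCreate()

# orders.csv is a hypothetical local file with customer_id and amount columns.
orders = spark.read.csv("orders.csv", header=True, inferSchema=True)
orders.createOrReplaceTempView("orders")

# The same aggregation, once via the DataFrame API and once via SQL.
by_customer = orders.groupBy("customer_id").agg(F.sum("amount").alias("total"))
by_customer_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id"
)
by_customer.show()
```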