You are well-versed in SQL and Python and have a strong understanding of ETL and data modeling.
You have extensive development experience with data processing and transformation technologies such as Apache Airflow and dbt.
You are well-versed in the AWS data stack, including Kinesis, S3, Redshift, Athena, and Glue.
You have extensive development experience with the Snowflake data warehouse (familiarity with other data warehouse technologies is an advantage).
You have designed and developed data pipelines that process large volumes of structured and unstructured data.
You have built and maintained highly scalable data infrastructure.
You have implemented effective monitoring and testing solutions to keep the data infrastructure running smoothly.
Responsibilities:
Contribute to any part of the data team's work and solve complex issues, including performance and stability problems.
Set up end-to-end pipelines that process large volumes of data to meet the needs of data analysts and business users.
Design effective monitoring solutions to ensure the smooth functioning of the data infrastructure.
About You
You are outcome-focused and proactive. You have solved complex technical challenges to improve customer happiness, developer productivity, and system efficiency.
You enjoy working in an inclusive, collaborative environment and mentoring team members when needed. You believe in achieving collective impact and that we are “better together.”
You have customer empathy and are curious about how your work is going to impact a customer’s use case.
You're excited to pitch in wherever the team needs help, from investigating issues and debugging together to supporting customers.
You have successfully led and driven best practices related to code quality and testability across teams.