What you'll do:
- Build data models that support dynamic, efficient data analysis, and collaborate with other teams to maintain and evolve those models over time
- Ensure adherence to coding best practices and the development of reusable code
- Partner with Data Analysts to understand business processes and identify opportunities to build efficient data models that support scalable, refreshable analysis and reporting
- Create ad-hoc / on-demand data sets for analysts and data scientists, and enable those teams to generate their own
- Work with Engineering teams to set up monitoring and alerting systems for business and product KPIs
Skills and knowledge you should possess:
- BS/MS in Computer Science or a related technical field
- 5+ years working on Analytics and Data Engineering teams, with a background spanning both data engineering and analytics
- 5+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
- Experience working on or leading initiatives around data governance, master data management, data catalogs, and enterprise data warehouse architecture
- Strong analytical skills, good data sensibility, and strong communication skills
- Open to working on multiple projects simultaneously
- High proficiency in:
  - SQL
  - Python
  - Dimensional modeling
  - Data pipeline development, workflow management, and orchestration tools
  - ETL optimization and best practices
  - Snowflake or other column-oriented, cloud-based databases
  - Relational databases
Bonus points (nice to have, but not required):
- dbt
- Git / GitHub
- MySQL
- Looker / Tableau
- Large data sets (terabyte scale)
- Apache Airflow