Design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds (a toy ingestion sketch follows this list).
Develop data models that enable the efficient analysis and manipulation of data for merchandising optimization. Ensure data quality, consistency, and accuracy.
Collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to define data requirements and deliver data solutions that drive merchandising and sales improvements.
Improve code and data quality by leveraging and contributing to internal tools that automatically detect and mitigate issues.
Apply strong architectural knowledge, working comfortably across multiple repositories, services, and environments.
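To give a flavor of the pipeline work above, here is a minimal ingestion sketch. It assumes the kafka-python client; the topic name, broker address, and event fields are hypothetical, not taken from any real system.

```python
# Toy ingestion step: consume hypothetical user-interaction events
# from a Kafka topic. Topic, broker, and field names are illustrative.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-interactions",                     # hypothetical topic
    bootstrap_servers="localhost:9092",      # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate the event and write it to storage;
    # here we just inspect a couple of assumed fields.
    print(event.get("event_type"), event.get("listing_id"))
```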
Your Expertise:
5-9+ years of relevant industry experience with a BS/Master's degree, or 2+ years with a PhD
Experience with distributed processing technologies and frameworks such as Hadoop, Spark, and Kafka, and with distributed storage systems (e.g., HDFS, S3); see the PySpark sketch after this list
Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and inform effective product solutions
Expertise with ETL schedulers such as Apache Airflow, Luigi, Oozie, or AWS Glue (an illustrative Airflow DAG follows this list)
Solid understanding of data warehousing concepts and hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and columnar or wide-column stores (e.g., Redshift, BigQuery, ClickHouse, HBase); a small warehouse-query sketch follows
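For the distributed-processing experience above, a minimal PySpark sketch: it reads hypothetical event data from S3, counts listing views, and writes the aggregate back. The bucket paths and column names (event_type, listing_id) are illustrative assumptions.

```python
# Minimal PySpark batch job: aggregate hypothetical listing-view events
# stored on S3. Paths and column names are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("listing-view-counts").getOrCreate()

# Read one day's partition of event data (hypothetical layout).
events = spark.read.parquet("s3a://example-bucket/events/dt=2024-01-01/")

# Keep only listing views and count them per listing.
view_counts = (
    events
    .filter(F.col("event_type") == "listing_view")
    .groupBy("listing_id")
    .agg(F.count("*").alias("views"))
)

# Write the aggregate back for downstream consumers.
view_counts.write.mode("overwrite").parquet(
    "s3a://example-bucket/aggregates/listing_views/dt=2024-01-01/"
)
```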
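And a bare-bones Airflow DAG of the kind the scheduler requirement refers to, assuming Airflow 2.4+; the DAG id, task, and callable are made up for illustration.

```python
# Bare-bones Airflow DAG: one daily task wrapping a placeholder
# extract-and-load step. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for real extract/load logic.
    print("extracting source data and loading it to the warehouse")


with DAG(
    dag_id="daily_listing_metrics",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```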
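Finally, a small warehouse-access sketch using psycopg2 against PostgreSQL; the connection string and the listing_views fact table are hypothetical.

```python
# Query a hypothetical listing_views fact table for the top-viewed
# listings since a given date. DSN and schema are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=analyst")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT listing_id, COUNT(*) AS views
        FROM listing_views
        WHERE viewed_at >= %s
        GROUP BY listing_id
        ORDER BY views DESC
        LIMIT 10
        """,
        ("2024-01-01",),
    )
    for listing_id, views in cur.fetchall():
        print(listing_id, views)
conn.close()
```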