Are you a senior data engineer who is excited to solve complex data engineering problems, lead strategic decision-making, and drive adoption of industry-leading engineering practices? Join Shopbop as a Senior Data Engineer and help make Shopbop "the daily destination for style inspiration and discovery". In this role you will set the technical direction for our Data Engineering team as it migrates from a legacy Informatica system to a modern AWS stack and raises the bar on the insights Shopbop can drive with data. You will partner closely with junior data engineers, peer software engineers, and leadership to define the technical direction for Shopbop's data systems, coach teammates on best practices, and advise peers at Shopbop and our parent company Amazon on technical topics. You will also contribute directly to building data engineering solutions. Your work will improve the quality of data and insights at Shopbop, driving important customer outcomes: helping customers purchase the right products that spark their style obsessions, lowering delivery times through shipping insights, and powering science-driven algorithms.
The Shopbop Data Engineering team has 3 engineers and 8 contractors who source, clean, and host Shopbop's data mart. You will work daily with all of the engineers in standups and code reviews and by providing design input. You will represent the team in technical meetings and act as an auditor of, and contributor to, technical discussions across the Amazon Fashion & Fitness organization (Shopbop's parent organization).
We are looking for a candidate willing to be in person at a Shopbop location in Madison, WI or New York, NY weekly, while working virtually most of the time with partners across all US time zones. Expect 1-2 trips per year to Madison, WI; New York, NY; or Seattle, WA.
Key job responsibilities
As a Senior Data Engineer, you will work in one of the world's largest and most complex data warehouse environments, partnering with other engineers, scientists, and program managers to develop scalable and maintainable data pipelines over both structured and unstructured data. You will develop scalable and innovative analytical solutions, process and store terabytes of low-latency data, and enable the F2DE team to build successful, data-driven strategies.
You will be responsible for designing and implementing a new data architecture using third-party and in-house tools, writing scalable, highly tuned SQL queries that run over billions of rows of data, and using Python or Scala to automate the ETL, analytics, and data quality platform from the ground up. You will design and implement complex data models, model metadata, build reports and dashboards, and own data presentation and dashboarding tools for the end users of our data products and systems. You will work with leading-edge technologies like AWS Redshift, Apache Spark, and more while helping the organization migrate away from legacy tools and technologies.
The ideal candidate has strong business judgment, a good sense of architectural design, strong written and documentation skills, and experience with big data technologies (Spark/Hive, Redshift, EMR, and other AWS technologies). This role involves both overseeing existing pipelines and developing brand new ones for the Data Lake/Warehouse owned by the F2DE team. You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills so you can work with business owners to understand data requirements and implement efficient and scalable ETL solutions. You should be an authority on crafting, implementing, and operating stable, scalable, low-cost solutions that replicate data from production systems into the BI data store.
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices
- Experience operating large data warehouses
- Master's degree
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions