You will own the data engineering and availability approach for the Robotic Storage Analytics team, spearheading the transition to a more effective data platform. You'll design and implement scalable data architectures using AWS technologies to handle increasing data volumes, while developing robust data models and efficient ETL/ELT processes. Your role will involve creating and maintaining critical metrics and KPIs for software, hardware, and solutions engineers, as well as stakeholder teams. You'll partner closely with data scientists and business intelligence engineers to enhance reporting and inspection processes, with a focus on automation and self-service capabilities. You'll establish and enforce data governance policies, implement quality controls, and set up monitoring mechanisms. Your expertise will be crucial in designing and building data integration pipelines that consolidate information from various sources into a unified platform. You'll optimize the performance of business-critical queries, resolve ETL job issues, and develop automation solutions using programming languages for data ingestion, publishing, and analytics. You'll build automated solutions to bridge the gap between new product development and generally available tools.

Key job responsibilities
As a key member of the team, you'll oversee the development of technical strategies for actionable data insights and partner with stakeholders to translate data requirements into optimized structures. You'll also play a vital role in mentoring and developing other data professionals across the organization. Staying current with big data technologies and running pilots of new data architecture designs will be part of your ongoing remit. Throughout all of these tasks, you'll create and maintain comprehensive technical documentation, ensuring knowledge transfer and continuity within the team.

A day in the life
1. Medical, Dental, and Vision Coverage
2. Maternity and Parental Leave Options
3. Paid Time Off (PTO)
4. 401(k) Plan
Basic qualifications
- 3+ years of data engineering experience
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with data modeling, warehousing, and building ETL pipelines

Preferred qualifications
- Experience working on and delivering end-to-end projects independently
- Experience programming in at least one modern language such as C++, C#, Java, Python, Go, PowerShell, or Ruby
- Experience with Redshift, Oracle, NoSQL, etc.