We are seeking an innovative Data Engineer III to lead the design and implementation of our enterprise-wide data architecture supporting AADA High Velocity Transformation (HVT) initiatives. In this role, you will define and architect end-to-end scalable data solutions that power our organization's transformation initiatives and products.
You'll be responsible for consolidating and optimizing our data architecture to create a unified view of the transformation landscape, working toward a long-term goal of integrating diverse data sources including HR metrics, business KPIs, employee sentiment, and market trends.

Short/Medium-Term focus (0-18 months):
• Enterprise architecture and strategy - Enable the "complete view of the transformation landscape" by consolidating the currently fragmented Redshift clusters, or by standing up a single HVT Redshift cluster from zero to one. The data engineer will need to assess the current state, understand the landscape, and define the consolidation approach and its significance.
• Rapidly prototype and iterate on data architectures to support evolving transformation needs (e.g., a recursive modeler, or infrastructure for A/B testing of organizational change)
• Partner with applied scientists to create data foundations for ML features including sentiment analysis and change prediction
Long-Term focus (18+ months):
• Build sustainable infrastructure, with a focus on expanding into customer and business data integration
• Design and implement real-time data pipelines for predictive analytics and digital twin simulations
• Create integrated data solutions that combine transformation metrics with customer and business data
• Become a Transformation Enabler - deliver real-time data pipelines and build predictive datasets for transformation challenges and simulations (a development path that goes beyond building data pipelines)
Key job responsibilities
Enterprise Technical Leadership & Architecture:
• Define and own data architecture at the team level, working to simplify, optimize, and remove bottlenecks
• Lead the consolidation and standardization of data architecture across business units, supporting transformation initiatives
• Lead the design, implementation, and successful delivery of large-scale, critical data solutions
• Develop extensible and scalable solutions that meet both immediate needs and long-term architectural goals
• Drive adoption of data engineering best practices across engineering partners
• Make technical trade-offs between competing short-term and long-term requirements
• Implement data mesh principles to enable distributed but governed data ownership
• Design scalable data platforms that enable predictive analytics, ambient intelligence, and AI/ML workloads

Implementation & Development:
• Rapidly prototype and iterate on data architectures to support evolving transformation needs
• Write high-quality code for critical data pipelines and infrastructure components
• Design and optimize logical data models and end-to-end data flows
• Build reusable components and services that resolve architecture deficiencies
• Ensure solutions meet requirements for security, scalability, and maintainability
• Drive operational excellence in data solution development and deployment

Cross-functional Collaboration:
• Work closely with business analysts, data scientists, and software engineers to understand requirements
• Influence team's technical and business strategy through roadmap contributions
• Build consensus across teams on architectural decisions and implementation approaches
• Partner with measurement teams to identify and solve problems where transformation solutions are bottlenecked by data needs

Data Quality & Governance:
• Design and implement robust data quality frameworks
• Establish data governance practices and ensure compliance
• Create efficient data validation solutions, including monitoring and redundancies
• Drive improvements in data discovery and accessibility, balancing optimization against cost
Basic qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses
- Experience with AWS technologies such as Redshift, S3, AWS Glue, and EMR, and with IAM roles and permissions