As a Data Engineer at AWS, you will partner with cross-functional teams, including Business Intelligence Engineers, Analysts, Software Developers, and Product Managers, to develop scalable and maintainable data pipelines over both structured and unstructured data. You will play a crucial role in achieving our ambitious objectives: establishing Phoenix as AWS' premier order management and orchestration engine, transforming BPO into a fully data-driven organization, and centralizing BPO data assets while enabling self-serve analytics.

The ideal candidate has strong business judgment, a good sense of architectural design, excellent writing and documentation skills, and experience with big data technologies (Spark/Hive, Redshift, EMR, and other AWS technologies). This role involves overseeing existing pipelines as well as developing new ones to support key initiatives. You'll implement comprehensive data governance frameworks, increase adoption of advanced analytics and AI/ML tools, and migrate data and ETL processes to more efficient systems. Additionally, you'll contribute to implementing self-serve analytics platforms, optimizing data pipeline creation processes, and integrating data from multiple sources to support BPO's growing data needs.

Key job responsibilities
- Design and develop ETL processes using AWS services such as AWS Glue, Lambda, EMR, and Step Functions, aiming to reduce pipeline creation time and improve efficiency
- Implement and maintain a comprehensive data governance framework for Phoenix, ensuring data integrity, security, and compliance
- Automate data monitoring, alerting, and incident response processes to ensure the reliability and availability of data pipelines, striving for near real-time data delivery
- Collaborate with cross-functional teams, including analysts, business intelligence engineers, and stakeholders, to understand data requirements and design solutions that support BPO's transformation into a data-driven organization
- Lead the development and implementation of a self-serve analytics platform, empowering both technical and non-technical users to drive their own analytics and reporting
- Explore and implement advanced analytics and AI/ML tools to enhance data processing and insight generation capabilities
- Stay up to date with the latest AWS data services, features, and best practices, recommending improvements to the data architecture to support BPO's growing data needs
- Provide technical support and troubleshooting for issues related to data pipelines, data quality, and data processing, ensuring Phoenix becomes the trusted source of truth for AWS agreements and order management
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
About Sales, Marketing and Global Services (SMGS)
- 3+ years of data engineering experience
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of distributed systems as they pertain to data storage and computing