You will be responsible for designing, building, and maintaining scalable, secure, and efficient data pipelines and infrastructure to support the organization's data and analytics needs within Sales, Marketing and Global Services (SMGS). This highly technical role requires expertise in data modeling, ETL processes and orchestration, and data storage and security. You will work closely with other engineers, product managers, and business stakeholders to understand requirements and develop innovative solutions.
Key job responsibilities
* Design and implement robust, fault-tolerant, and high-performing data pipelines using technologies such as Apache Spark, Kafka, Airflow, and Databricks (a minimal orchestration sketch follows this list)
* Build and optimize data storage systems to enable efficient data processing and analysis
* Create advanced data models, leveraging techniques like slowly changing dimensions, to support complex business requirements
* Automate data ingestion, transformation, and load processes to ensure reliable and timely data delivery
* Monitor data pipeline performance, identify bottlenecks, and implement optimization strategies
* Develop and maintain metadata management, data lineage, and data governance frameworks
* Mentor and train junior data engineers to develop their skills and expertise
* Stay up-to-date with the latest data engineering trends, tools, and best practices, and implement them when appropriate
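The orchestration tools named above wire steps like these together as scheduled, dependency-aware tasks. As a minimal sketch, assuming Apache Airflow 2.4+ is the orchestrator, the DAG below chains hypothetical extract, transform, and load steps; the dag_id, schedule, and task bodies are illustrative placeholders, not a prescribed design.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system (e.g., an API or a Kafka topic).
    ...


def transform():
    # Placeholder: clean and reshape the raw records.
    ...


def load():
    # Placeholder: write the transformed records to the warehouse or lake.
    ...


# Hypothetical daily pipeline; the names and schedule are illustrative only.
with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` requires Airflow 2.4+; older versions use `schedule_interval`
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Enforce strict ordering: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```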
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.
Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Basic Qualifications
- This position requires a Bachelor's Degree in Computer Science, Data Science, Engineering, or a related technical field.
- 3+ years of work experience with ETL, Data Modeling, and Data Architecture.
- Expert-level skills in writing and optimizing SQL (see the sketch after this list).
- Proficiency in at least one scripting language, such as Python, JavaScript, or Perl.
- Experience operating very large data warehouses or data lakes.
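To make the slowly-changing-dimension and SQL points above concrete: a Type 2 dimension preserves history by closing out the current row and inserting a new version rather than overwriting in place. The sketch below uses Python's built-in sqlite3 module as a stand-in for a warehouse; the table, columns, and values are hypothetical.

```python
import sqlite3
from datetime import date

# In-memory database stands in for a warehouse dimension table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,    -- NULL marks the current version
        is_current  INTEGER
    )
""")

# Seed one existing current row for customer 42.
cur.execute("INSERT INTO dim_customer VALUES (42, 'Seattle', '2023-01-01', NULL, 1)")


def scd2_upsert(cur, customer_id, new_city, effective):
    """Type 2 update: close the current row, then insert a new version.

    Assumes a current row already exists; only versions rows whose
    attribute actually changed.
    """
    cur.execute(
        """UPDATE dim_customer
           SET valid_to = ?, is_current = 0
           WHERE customer_id = ? AND is_current = 1 AND city <> ?""",
        (effective, customer_id, new_city),
    )
    if cur.rowcount:  # a row was closed out, so record the new version
        cur.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
            (customer_id, new_city, effective),
        )


scd2_upsert(cur, 42, "Portland", date.today().isoformat())

# Both versions remain queryable: the closed-out Seattle row and the current Portland row.
for row in cur.execute("SELECT * FROM dim_customer ORDER BY valid_from"):
    print(row)
```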