Key job responsibilities
- Design, implement, and support a platform providing ad-hoc access to large datasets
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and AWS services such as AWS Data Pipeline and Glue
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL/Redshift
- Interface with business customers, gathering requirements and delivering complete reporting solutions

A day in the life
- Leverage new cloud architecture and data engineering patterns to ingest, transform, and store data.
- Build and deliver high quality data solutions to support analysts, engineers and data scientists.
About AWS

Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS
Work/Life Balance

Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with ETL tools such as Informatica, ODI, SSIS, BODI, or DataStage