Key job responsibilities
- Bachelor’s degree in an engineering or technical field such as Computer Science, Physics, Mathematics, Statistics, Engineering, or a related discipline.
- 7+ years of experience with technical architectures of data warehouses, ETL/ELT processes, reporting/analytic tools, and big data tools such as Spark, Kafka, Hive, and Airflow.
- 5+ years of demonstrated experience with designing and building scalable data pipelines and infrastructure.
- Proficiency in data modeling and SQL with Redshift, Oracle, MySQL, and columnar databases; hands-on experience with AWS services; and experience with query optimization and automated data-quality guardrails and alerts.
- Ability to identify, prioritize, and weigh trade-offs among technical improvements and optimizations when deploying scientific models in a production environment.
- Skills to mentor and guide other engineers and to work closely with scientists, particularly to productionize scientific prototypes.
- Proven ability to manage competing priorities concurrently and drive projects to completion.
- Proficiency in software engineering concepts and development lifecycle.
- Working knowledge of at least one programming language widely used at Amazon, such as Python, Java, or Scala.
- Ability to develop applications and systems that leverage large datasets, and to build infrastructure for automated execution, maintenance, and diagnostics.
- Master's degree in an engineering or technical field such as Computer Science, Physics, Mathematics, Statistics, or Engineering.
- Experience with AWS services and production CDK development workflows, including Brazil, S3, Lambda, EMR, RDS, Data Pipeline, and other big data technologies.
- Proficiency in writing advanced SQL (analytical functions) and skills in query performance tuning.