Key job responsibilities
- Build and optimize physical data models and data pipelines for simple datasets
- Write secure, stable, testable, maintainable code with minimal defects
- Troubleshoot existing datasets and maintain data quality
- Document solutions to ensure ease of use and maintainability
- Handle data in accordance with Amazon policies and security requirements
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, or EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage