Key job responsibilities
Data Engineers on the team lead the design, implementation, and successful delivery of ETL services, which collectively use 15+ AWS services, e.g., Lambda, Glue, Lake Formation, CloudWatch, API Gateway, EMR, Redshift, and DynamoDB. They are expected to maintain individual knowledge of these AWS services in order to design efficient, scalable, cost-effective services and data pipelines with high availability (>99.5% SLAs) and high data quality. They are also expected to work backwards from their customers, which requires interfacing across the WS organization at multiple levels and functions and maintaining conceptual knowledge of the data domains required to build foundational datasets.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage