
Amazon Data Engineer FAE 
United States, Washington, Seattle 
217572619

06.05.2024
DESCRIPTION

You will be responsible for designing and implementing an analytical environment using third-party and in-house reporting tools, modeling metadata, and building reports and dashboards. You will have the opportunity to work with leading-edge technologies such as Redshift and Hadoop/Hive/Pig, writing scalable queries and tuning the performance of queries that run over billions of rows of data.

Key job responsibilities
- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
- Develop and manage ETLs that source data from various commercial, sales, and operational systems, creating a unified data model for analytics and reporting
- Work with Product Managers, Finance, Service Engineering teams, and Sales teams on a day-to-day basis to support their new analytics requirements
- Manage numerous requests concurrently and strategically, prioritizing when necessary
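The ETL and unified-data-model duties listed above can be sketched with a minimal, hypothetical example. An in-memory SQLite database stands in for the warehouse (Redshift/Athena in the posting), and the table and column names are illustrative only, not Amazon's actual schema:

```python
import sqlite3

# In-memory SQLite stands in for the real data warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical source systems: a sales system and an operational system.
cur.execute("CREATE TABLE sales_orders (order_id INTEGER, region TEXT, amount REAL)")
cur.execute("CREATE TABLE ops_events (order_id INTEGER, status TEXT)")
cur.executemany("INSERT INTO sales_orders VALUES (?, ?, ?)",
                [(1, "us-west", 120.0), (2, "us-east", 75.5), (3, "us-west", 42.0)])
cur.executemany("INSERT INTO ops_events VALUES (?, ?)",
                [(1, "shipped"), (2, "shipped"), (3, "pending")])

# ETL step: join the source systems into one unified fact table for reporting.
cur.execute("""
    CREATE TABLE fact_orders AS
    SELECT s.order_id, s.region, s.amount, o.status
    FROM sales_orders s
    JOIN ops_events o ON o.order_id = s.order_id
""")

# Reporting query over the unified model: revenue by region for shipped orders.
cur.execute("""
    SELECT region, SUM(amount)
    FROM fact_orders
    WHERE status = 'shipped'
    GROUP BY region
    ORDER BY region
""")
report = cur.fetchall()
print(report)  # [('us-east', 75.5), ('us-west', 120.0)]
conn.close()
```

In production the same pattern would run at far larger scale, with the join and aggregation expressed as a Glue/Spark job or a Redshift query rather than SQLite.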
A day in the life
We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Basic qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js

Preferred qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses

Seattle, WA, USA

BASIC QUALIFICATIONS

- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)


PREFERRED QUALIFICATIONS

- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc.