Key job responsibilities
- Be hands-on with ETL to build data pipelines that support automated reporting.
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift.
- Model data and metadata for ad-hoc and pre-built reporting.
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark.
- Manage AWS resources including Glue, Redshift, MWAA, EMR, and Lambda.
- Diagnose and resolve operational issues, perform detailed root cause analysis, and respond to suggestions for enhancements.

A day in the life
As a Data Engineer, you will work with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders. You will build data pipelines and choose the right data models, empowering stakeholders to make data-driven decisions.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source alternatives