Key job responsibilities
• Design and implement scalable, secure data pipelines and infrastructure using AWS technologies and big data tools
• Build and maintain high-performance ETL processes that handle large-scale, complex datasets
• Architect end-to-end analytical solutions that are highly available, stable, and cost-effective
• Transform raw data into actionable insights through effective data modeling and integration
• Implement best practices in data system creation, data integrity, and documentation
• Proactively identify opportunities for process improvement and automation
• Partner with business stakeholders to gather requirements and translate them into technical solutions
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Knowledge of cloud services such as AWS or equivalent
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions