Key job responsibilities
In this role, you will have the opportunity to display and develop your skills in the following areas:
- Design, implement, and support an analytical platform providing approved access to large datasets and computing power
- Manage AWS resources including Redshift, Glue, Lambda, MWAA (managed Airflow), etc.
- Build robust and scalable data integration (ETL) pipelines using SQL, Python/Java, and Spark (a minimal sketch follows this list)
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
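As a rough illustration of the ETL bullet above, here is a minimal PySpark sketch of an extract-transform-load job of the kind the role involves. The bucket, paths, and column names are hypothetical placeholders, not the team's actual pipeline.

```python
# Minimal PySpark ETL sketch: read raw events from S3, clean and
# aggregate them, and write partitioned Parquet back to S3.
# All paths and column names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Extract: raw JSON events landed by an upstream ingestion job (assumed layout).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, normalize the timestamp, derive a date partition.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate: daily event counts per type, the kind of rollup analysts query.
daily = clean.groupBy("event_date", "event_type").agg(
    F.count("*").alias("event_count")
)

# Load: write partitioned Parquet for downstream consumers
# (e.g., queried via Redshift Spectrum or Athena).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```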
A day in the life
- Leverage new cloud architecture and data engineering patterns to ingest, transform, and store data (an orchestration sketch follows this list)
- Build and deliver high-quality data solutions to support analysts, engineers, and data scientists
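Below is a minimal sketch of how MWAA (managed Airflow) might orchestrate the ingest-transform-store pattern described above. The DAG id, schedule, and task bodies are illustrative assumptions only; the `schedule` argument assumes Airflow 2.4+, which MWAA supports.

```python
# Minimal Airflow DAG sketch of an ingest -> transform -> store pipeline.
# Task bodies are stubs standing in for real extraction, Spark/SQL
# transformation, and warehouse-load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**context):
    # Placeholder: pull a day's data from a source system into staging.
    print("ingesting raw data")

def transform(**context):
    # Placeholder: run the Spark/SQL transformation step.
    print("transforming staged data")

def store(**context):
    # Placeholder: load curated output into the warehouse (e.g., Redshift).
    print("loading curated data")

with DAG(
    dag_id="ingest_transform_store",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    store_task = PythonOperator(task_id="store", python_callable=store)

    # Enforce ordering: each step runs only after the previous one succeeds.
    ingest_task >> transform_task >> store_task
```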
Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployment, testing, and operational excellence
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- 4+ years of SQL experience
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Familiarity with AWS, relational databases, and NoSQL databases