Key job responsibilities
· Strong understanding of ETL concepts and experience building ETL pipelines over large-scale, complex datasets using traditional or MapReduce batch mechanisms.
· Good data modeling skills, with knowledge of industry standards such as dimensional modeling and star schemas.
· Proficient in writing performant SQL against large data volumes.
· Experience designing and operating very large data warehouses.
· Experience with scripting for automation (e.g., UNIX Shell scripting, Python).
· Good to have: experience working with the AWS stack.
· Clear thinker with superb problem-solving skills; able to prioritize and stay focused on the biggest needle movers.
· Curious, self-motivated self-starter with a can-do attitude; comfortable working in a fast-paced, dynamic environment.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage