Key job responsibilities
· Design, implement, and support a platform providing ad hoc access to large datasets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL
· Implement data structures following best practices in data modeling and ETL/ELT, using SQL, Oracle, Redshift, and OLAP technologies
· Model data and metadata for ad hoc and pre-built reporting
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (see the sketch after this list)
· Build and deliver high-quality datasets to support business analysts' and customers' reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
· Participate in strategic & tactical planning discussions, including annual budget processes
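
For illustration only, here is a minimal sketch of the kind of SQL/Python/Spark pipeline this role would build, written in PySpark. The bucket paths, column names, and aggregation logic are hypothetical assumptions, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from a hypothetical source location.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop malformed records, derive a date column from the
# event timestamp, and aggregate per customer per day.
daily = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(
           F.count("order_id").alias("order_count"),
           F.sum("amount").alias("total_amount"),
       )
)

# Load: write a partitioned, columnar dataset for downstream reporting.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)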
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage