Key job responsibilities
· Design, implement, and support a platform providing ad-hoc access to large datasets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and AWS services such as Data Pipeline and Glue
· Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL/Redshift
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build and deliver high-quality datasets to support business analyst and customer reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
Basic qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with big data technologies such as Hadoop, Hive, Spark, or EMR
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS