Core Responsibilities
· Design, implement, and support a platform providing ad hoc access to large datasets
· Interface with other technology teams to extract, transform, and load data from a wide variety of sources using Spark or other state-of-the-art systems
· Implement data structures using best practices for lakehouses
· Model data and metadata for ad hoc and pre-built reporting, targeting read-, write-, and summary-optimized storage
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build robust and scalable data integration (ETL) pipelines using Kotlin, Python, TypeScript, and Spark (see the sketch after this list)
· Build and deliver high-quality datasets to support business analyst and customer reporting needs
· Continually automate and simplify self-service data ingestion at scale for customers
· Participate in strategic & tactical planning discussions
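For illustration, here is a minimal sketch of the kind of Spark ETL pipeline these responsibilities describe, written in PySpark. The bucket paths, column names, and schema are hypothetical examples, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3://example-bucket/landing/raw_events.json")

# Transform: deduplicate, derive a partition date, and keep only the
# fields downstream reports need (illustrative column names).
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date(F.col("event_ts")))
       .select("event_id", "user_id", "event_date", "payload")
)

# Load: write partitioned Parquet for read-optimized, ad hoc querying.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```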
Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Knowledge of batch and streaming data architectures such as Kafka, Kinesis, Flink, Storm, or Beam
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with SQL