Key responsibilities include:
- Designing, building, and maintaining scalable, automated, and fault-tolerant data pipelines using Spark, EMR, Redshift, Glue, S3, and Python.
- Simplifying complex datasets and enhancing data accessibility through innovative BI solutions.
Qualifications:
- Bachelor's degree in a quantitative field such as Computer Science, Computer Engineering, or Mathematics, or equivalent experience
- 3+ years of data engineering experience
- 3+ years of experience writing advanced SQL queries
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, R, or NodeJS
- Strong written and verbal communication skills, including comfort communicating and presenting to senior leadership