Key job responsibilities
- Develop data products, infrastructure, and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, and Lambda) and internal BDT tools (Datanet, Cradle, QuickSight, etc.); a brief sketch of this kind of pipeline follows this list.
- Improve existing solutions and design the next generation of data architecture to improve scale, quality, timeliness, coverage, monitoring, and security.
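As a loose illustration of the pipeline work described above, here is a minimal sketch (assuming Python and boto3) of a Lambda handler that forwards records from a Kinesis event to a Firehose delivery stream. The stream name "example-delivery-stream" and the handler wiring are placeholders for the example, not details from the role.

```python
import base64

import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    # Kinesis delivers record payloads base64-encoded inside the Lambda event.
    records = [
        {"Data": base64.b64decode(r["kinesis"]["data"])}
        for r in event["Records"]
    ]
    if records:
        # put_record_batch accepts up to 500 records per call.
        resp = firehose.put_record_batch(
            DeliveryStreamName="example-delivery-stream",  # placeholder name
            Records=records,
        )
        # Surface partial failures so the batch can be retried.
        if resp["FailedPutCount"]:
            raise RuntimeError(f"{resp['FailedPutCount']} records failed")
    return {"forwarded": len(records)}
```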
Basic qualifications
- Bachelor's degree
- 3+ years of data engineering experience
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data modeling, warehousing, and building ETL pipelines
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience working on and delivering end-to-end projects independently
- Coding proficiency in at least one programming language (Python, Ruby, Java, etc.)
Preferred qualifications
- Master's degree
- 5+ years of data engineering experience
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Knowledge of engineering and operational excellence using standard methodologies