Key job responsibilities
1. Develop data products, infrastructure, and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (Datanet, Cradle, QuickSight, etc.).
2. Improve existing solutions and build new ones to improve scale, quality, IMR efficiency, data availability, consistency, and compliance.
3. Partner with Software Developers, Business Intelligence Engineers, MLEs, Scientists, and Product Managers to develop scalable and maintainable data pipelines on both structured and unstructured (text-based) data.
4. Drive operational excellence within the team and build automation and mechanisms to reduce operational overhead.
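As a minimal, illustrative sketch of the kind of pipeline step described above, consider a Lambda function subscribed to a Kinesis stream that lands each batch of records in S3; the bucket name and key prefix below are hypothetical placeholders, not part of this role's actual stack:

```python
# Illustrative sketch only: a minimal AWS Lambda handler that drains a
# Kinesis batch into S3 as JSON lines. Bucket and prefix are hypothetical.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")

BUCKET = "example-analytics-landing"  # hypothetical bucket
PREFIX = "kinesis/raw/"               # hypothetical key prefix


def handler(event, context):
    """Decode each Kinesis record and write the batch to S3."""
    rows = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded in the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))

    body = "\n".join(json.dumps(r) for r in rows)
    key = f"{PREFIX}{uuid.uuid4()}.jsonl"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"records_written": len(rows), "key": key}
```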
Basic qualifications
- Bachelor's degree
- 3+ years of data engineering experience
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data modeling, warehousing, and building ETL pipelines (see the sketch after this list)
- Experience working on and delivering end-to-end projects independently
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
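The warehousing and ETL experience above can be illustrated with the staging-table upsert pattern commonly used with Redshift. The sketch below uses the Redshift Data API through boto3; the cluster, database, user, bucket, IAM role, and table names are all hypothetical placeholders:

```python
# A minimal sketch of a Redshift staging-table upsert via the Redshift
# Data API. All identifiers below are hypothetical placeholders.
import boto3

client = boto3.client("redshift-data")

# batch_execute_statement runs the statements in order in one transaction.
resp = client.batch_execute_statement(
    ClusterIdentifier="example-cluster",  # hypothetical
    Database="analytics",                 # hypothetical
    DbUser="etl_user",                    # hypothetical
    Sqls=[
        "CREATE TEMP TABLE stage (LIKE fact_orders)",
        """COPY stage
           FROM 's3://example-analytics-landing/kinesis/raw/'
           IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
           FORMAT AS JSON 'auto'""",
        "DELETE FROM fact_orders USING stage WHERE fact_orders.order_id = stage.order_id",
        "INSERT INTO fact_orders SELECT * FROM stage",
    ],
)
print(resp["Id"])  # poll with describe_statement to track completion
```

Running the delete and insert in one transaction keeps the upsert atomic, so downstream readers never observe a half-applied load.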
Preferred qualifications
- 5+ years of data engineering experience
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Knowledge of engineering and operational excellence using standard methodologies