• Strong experience in Data Warehouse and Business Intelligence application development
• Data analysis: understanding of business processes, logical data models, and relational database implementations
• Expert knowledge of SQL, including the ability to optimize complex queries
• Basic understanding of statistical analysis; experience in test design and measurement
• Proven track record of working on complex modular projects and taking a leading role in them
• Highly motivated, self-driven, capable of defining own design and test scenarios
• Experience with programming/scripting languages such as Scala or Python preferred
• Evaluate and implement various big-data technologies and solutions (Redshift DW, Glue, EMR, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion.

Key job responsibilities
- Develop and maintain fully automated ETL pipelines using scripting languages such as Python, Spark, and SQL, and AWS services such as S3, Glue, and Lambda (a rough sketch follows this list)
- Make appropriate trade-offs, re-use where possible, and be judicious about introducing dependencies
- Ask the right questions when the data model and requirements are not well defined, and come up with designs that are scalable, maintainable, and efficient
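As a rough illustration of the kind of fully automated pipeline these responsibilities describe, here is a minimal PySpark sketch. The bucket paths, the event_id/event_timestamp/event_type field names, and the schema are all hypothetical; a production Glue or EMR job would be parameterized and scheduled rather than hard-coded. A small SQL step is included since the posting calls out SQL alongside Python and Spark.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical S3 locations; a real job would take these from job parameters.
RAW_PATH = "s3://example-raw-bucket/events/"
CURATED_PATH = "s3://example-curated-bucket/events/"

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Extract: read raw JSON events landed in S3.
raw = spark.read.json(RAW_PATH)

# Transform: deduplicate on a (hypothetical) event_id and derive a partition date.
events = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# A SQL step, aggregating daily counts per event type.
events.createOrReplaceTempView("events")
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

# Load: write partitioned Parquet for downstream Redshift/Glue consumers.
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet(CURATED_PATH))

daily_counts.write.mode("overwrite").parquet(CURATED_PATH + "daily_counts/")

spark.stop()
```

Partitioning the curated output by date is one common way to keep scans by downstream consumers (e.g., Redshift Spectrum or Glue) efficient on extremely large datasets.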
Basic qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of distributed systems as it pertains to data storage and computing
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS