• 4-7 years of experience performing quantitative analysis, preferably at an internet or technology company
• Strong experience in Data Warehouse and Business Intelligence application development
• Data Analysis: Understand business processes, logical data models and relational database implementations
• Expert knowledge of SQL, including the ability to optimize complex queries
• Basic understanding of statistical analysis; experience in test design and measurement
• Able to execute research projects and generate practical results and recommendations
• Proven track record of working on complex modular projects and taking a leading role in them
• Highly motivated, self-driven, capable of defining own design and test scenarios
• Experience with scripting languages (e.g., Perl, Python) preferred
• BS/MS degree in Computer Science
• Evaluate and implement various big-data technologies and solutions (Redshift, Hive/EMR, Tez, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion (a minimal processing sketch follows this list)
• Experience with large scale data processing, data structure optimization and scalability of algorithms a plus
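As a rough illustration of the kind of large-scale processing this bullet describes, here is a minimal PySpark sketch that aggregates one day of a partitioned dataset. The bucket paths, the `ds` partition column, and the orders schema are all hypothetical, and Redshift, Hive/EMR, or Tez would be equally valid backends for the same job.

```python
# Minimal PySpark sketch: aggregate one day of a large, partitioned dataset.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical source

daily = (
    orders
    .where(F.col("ds") == "2024-06-01")   # partition pruning keeps the scan small
    .groupBy("marketplace_id")
    .agg(
        F.countDistinct("order_id").alias("order_count"),
        F.sum("units").alias("total_units"),
    )
)

# Overwrite a single output partition so the job is idempotent on re-runs.
daily.write.mode("overwrite").parquet(
    "s3://example-bucket/agg/daily_orders/ds=2024-06-01/"
)
```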
Key job responsibilities
1. Responsible for designing, building and maintaining complex data solutions for Amazon's Operations businesses
2. Actively participates in code reviews, design discussions, team planning, and operational excellence, and constructively identifies problems and proposes solutions
3. Makes appropriate trade-offs, re-uses existing solutions where possible, and is judicious about introducing dependencies
4. Makes efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure)
5. Keeps up with recent advances in distributed systems (e.g., MapReduce, MPP architectures, external partitioning)
6. Asks the right questions when the data model and requirements are not well defined, and produces designs that are scalable, maintainable, and efficient
7. Makes enhancements that improve the team's data architecture and make it easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operation steps)
8. Owns the data quality of important datasets and of any new changes/enhancements (a minimal audit sketch follows this list)
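A minimal sketch of the kind of data-quality check items 7 and 8 describe, written in plain pandas. The key column `order_id` and the 5% null-rate threshold are assumptions made for illustration, not a prescribed implementation.

```python
# Minimal data-audit sketch: flag duplicate keys and unexpectedly high null rates.
# The key column and threshold are illustrative assumptions.
import pandas as pd

def audit_dataset(df: pd.DataFrame, key: str = "order_id",
                  max_null_rate: float = 0.05) -> list[str]:
    """Return human-readable data-quality findings for one dataset."""
    findings = []
    if df[key].duplicated().any():
        findings.append(f"duplicate values found in key column '{key}'")
    for col, rate in df.isna().mean().items():
        if rate > max_null_rate:
            findings.append(
                f"column '{col}' is {rate:.1%} null (limit {max_null_rate:.0%})"
            )
    return findings

if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "units": [3, None, 5]})
    print(audit_dataset(sample))
```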
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
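To make the extract/transform/load shape of that last qualification concrete, here is a minimal ETL sketch against an in-memory SQLite database; the table names and schema are invented for the example.

```python
# Minimal ETL sketch: extract raw rows, transform in Python, load an aggregate table.
# Uses an in-memory SQLite database; all table names and columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, marketplace TEXT, units INTEGER)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "US", 3), (2, "US", 1), (3, "DE", 4)],
)

# Extract
rows = conn.execute("SELECT marketplace, units FROM raw_orders").fetchall()

# Transform: total units per marketplace
totals: dict[str, int] = {}
for marketplace, units in rows:
    totals[marketplace] = totals.get(marketplace, 0) + units

# Load
conn.execute("CREATE TABLE daily_units (marketplace TEXT, total_units INTEGER)")
conn.executemany("INSERT INTO daily_units VALUES (?, ?)", totals.items())
conn.commit()

print(conn.execute("SELECT * FROM daily_units").fetchall())
```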