Key job responsibilities
- Interface with PMs, business customers, and software developers to understand requirements and implement solutions
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms using AWS technologies
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies, evaluate and make decisions around the use of new or existing software products to design the data architecture
Qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Experience mentoring team members on best practices
- Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT or equivalent
- 8+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
- 8+ years of experience in designing and developing data processing pipelines using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.)
- 8+ years of experience in designing and developing analytical systems
- Experience building large-scale applications and services with big data technologies
- Experience providing technical leadership and mentoring other engineers for best practices on data engineering
- Expertise in SQL, database and storage internals, SQL tuning, and ETL development
- Strong organizational and multitasking skills with the ability to balance competing priorities
- Working knowledge of scripting languages such as Python, Perl, etc.
- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses
- Master's degree
- Experience communicating with users, other technical teams, and management to collect requirements, describe data modeling decisions and data engineering strategy
- Experience working with others to improve their skills, making everyone a more effective software engineer
- Exposure to big data technologies and techniques
- Working knowledge of an object-oriented language
- Experience with Amazon Redshift or other distributed computing technology
- Experience in building complex software systems that have been successfully delivered to customers