Key job responsibilities
- Build end-to-end data pipelines to ingest and transform data from a wide range of sources and systems, from traditional ETL pipelines to event data streams
- Combine data from disparate sources into meaningful datasets for analytics and reporting
- Evaluate and implement various big-data technologies and solutions (e.g., Redshift, Hive/EMR, Spark, SNS, SQS, Kinesis) to optimize processing of extremely large datasets
- Write high-performing, optimized SQL queries
- Design and implement automated data processing solutions and data quality controls
Qualifications
- Experience building and operating highly available, distributed systems for the extraction, ingestion, and processing of large data sets
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience as a Data Engineer or in a similar role
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployment, testing, and operational excellence
- Knowledge of distributed systems as they pertain to data storage and computing
- Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
- Experience operating large data warehouses
- Experience building data products incrementally and integrating and managing data sets from multiple sources
- Experience communicating with users, other technical teams, and management to collect requirements, describe data modeling decisions and data engineering strategy
- Experience with MPP databases such as Amazon Redshift