In this role, you will drive the development of data engineering solutions from initial experimentation to production-level deployment, including the following:
- Identify gaps and improvement opportunities in existing data infrastructure;
- Design, implement, and maintain a modern cloud-based data infrastructure for large datasets;
- Migrate existing data pipelines to your newly developed solutions;
- Create and manage large datasets by extracting, transforming, combining, and loading data from various heterogeneous data sources;
- Maintain data integrity, availability, and auditability;
- Manage AWS resources;
- Drive the adoption of new technologies and new best practices.
Qualifications:
- 1+ years of data engineering experience
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.
- Bachelor's degree in a quantitative or technical field such as computer science, engineering, or statistics
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with data modeling, data warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage