Key job responsibilities
• Design, develop, implement, test, document, and operate large-scale, high-performance data structures for big data analytics.
• Translate business requirements into robust, scalable, operable solutions that work well within the overall data architecture.
• Solve complex problems by aggregating multiple large datasets to enable informed decision-making.
• Work with large datasets and develop big data pipelines that move data from source systems to data warehouses, S3 data lakes, and other data storage and processing systems using big data and cloud technologies.
• Implement data structures using best practices in data modeling and ETL/ELT (Extract, Transform, Load) processes, with big data tools such as Spark, Hive, SQL, Apache Airflow, AWS Glue, EMR, Lambda, S3, Redshift, and OLAP technologies.
• Follow software engineering best practices in writing code (in languages such as Python, SQL, and Scala), reviewing code with peers, and testing all related data processing systems.
• Maintain and optimize the infrastructure required to support scalable, highly available data lakes and data pipelines that store and process petabyte-scale datasets.
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage