Key job responsibilities
• Develop data products and build, optimize, and maintain reliable data pipelines for extracting, transforming, and loading (ETL) large datasets from diverse sources.
• Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, AWS Redshift, and OLAP technologies; model data and metadata for ad hoc and pre-built reporting.
• Work with product tech teams to build robust, scalable data integration (ETL) pipelines using SQL, Python, and Spark.
• Monitor and improve data pipeline performance, ensuring low latency and high availability.
• Automate repetitive data engineering tasks to streamline workflows and improve efficiency.
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)