This role requires deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills, working with business owners to understand data requirements and building ETL processes to ingest data into the data lake. You should be an authority in designing, implementing, and operating stable, scalable, low-cost solutions that move data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing data together to answer business questions and drive growth.

In this role, you will envision, design, and develop data solutions to scale APM data. You will be responsible for designing and implementing scalable data/ETL pipelines that leverage big data technologies and modern BI products to meet the rapidly growing and dynamic demands of rich data. You will help the organization move toward a data-as-a-service model and strengthen the influence of data on day-to-day business decisions. You will influence and guide other data engineers, business intelligence engineers, tech team members, and leaders to make informed data decisions.

Key job responsibilities
* Design, implement, and support data warehouse infrastructure using Apache ecosystem technologies (e.g., Hadoop, Spark, Hive) and AWS services (e.g., Redshift, EMR, Glue).
* Create and optimize ETL processes to extract data from various operational systems and build a unified dimensional or star schema data model for analytics and reporting (a minimal sketch of this kind of pipeline follows this list).
* Implement real-time data streaming solutions using technologies like Apache Kafka or AWS Kinesis (an illustrative snippet follows this list).
* Develop a deep understanding of our vast data sources and know exactly how, when, and which data to use to solve particular business problems.
* Design and implement data quality checks and data governance processes to ensure data integrity and compliance.
* Support the development of performance dashboards that encompass key metrics to be reviewed with senior leadership and sales management.
* Manage numerous requests concurrently and strategically, prioritizing when necessary.
* Stay up to date with the latest trends and technologies in big data and data engineering, proposing and implementing innovative solutions to improve data processing and analysis capabilities.
* Mentor junior data engineers and contribute to the team's knowledge sharing and best practices.
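As a rough illustration of the batch ETL and star schema work described above, here is a minimal PySpark sketch. It is not a description of any actual pipeline in this role; the S3 paths, table names, and column names (orders_raw, dim_customer, fact_orders, and so on) are hypothetical examples.

```python
# Minimal PySpark sketch: read raw operational data and shape it into a
# simple star schema (one dimension table plus one fact table).
# All paths, table names, and columns here are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read a raw export from an operational system (hypothetical path).
orders_raw = spark.read.parquet("s3://example-data-lake/raw/orders/")

# Transform: derive a customer dimension keyed on the natural customer_id.
dim_customer = (
    orders_raw
    .select("customer_id", "customer_name", "customer_region")
    .dropDuplicates(["customer_id"])
)

# Transform: build the fact table, keeping only measures and foreign keys.
fact_orders = (
    orders_raw
    .withColumn("order_date", F.to_date("order_timestamp"))
    .select("order_id", "customer_id", "order_date", "order_amount")
)

# Load: write both tables to the analytics layer of the data lake.
dim_customer.write.mode("overwrite").parquet("s3://example-data-lake/analytics/dim_customer/")
fact_orders.write.mode("overwrite").parquet("s3://example-data-lake/analytics/fact_orders/")
```

In practice a job like this might run on EMR or as an AWS Glue job, with Redshift as the downstream warehouse, consistent with the technologies listed above.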
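For the real-time streaming responsibility, the snippet below sketches how an application event might be published to an AWS Kinesis data stream using boto3. The stream name, region, and payload fields are made up for illustration only.

```python
# Minimal sketch: publish one event to an AWS Kinesis data stream with boto3.
# The stream name, region, and payload fields are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"order_id": "12345", "order_amount": 42.50, "event_type": "order_created"}

kinesis.put_record(
    StreamName="example-orders-stream",        # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),    # Kinesis expects a bytes payload
    PartitionKey=event["order_id"],            # controls shard assignment
)
```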
Basic qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices
Preferred qualifications
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses