Key job responsibilities
- Design, build, and operate highly scalable, fault-tolerant data processing systems using AWS services such as Redshift, S3, Glue, EMR, Kinesis, and Lambda, and orchestrate workflows with Airflow (a minimal sketch follows this list)
- Leverage your expertise in Python, Scala, or other modern programming languages to develop custom data processing frameworks and automation tools
- Collaborate closely with data scientists, analysts, and product managers to understand business requirements and translate them into technical solutions
- Continuously optimize data pipeline performance, reliability, and cost-efficiency using best practices in CI/CD and infrastructure-as-code (see the second sketch after this list)
- Mentor junior engineers and share your knowledge of data engineering best practices
- Stay up-to-date with the latest big data technologies and architectural patterns, and drive the adoption of innovative solutions
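As a rough illustration of the orchestration work described above, here is a minimal Airflow DAG sketch. The DAG id, task names, callables, and schedule are hypothetical placeholders rather than an actual pipeline for this role, and the `schedule` argument assumes Airflow 2.4 or later:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    """Placeholder: pull the day's raw files from S3 (e.g., via boto3)."""
    ...


def load_to_redshift(**context):
    """Placeholder: COPY the transformed data into Redshift."""
    ...


default_args = {
    "owner": "data-eng",
    "retries": 2,                        # fault tolerance: retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_s3_to_redshift",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # assumes Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load  # run the load only after extraction succeeds
```

In practice the placeholder callables would be replaced by operators for the services named above (Glue jobs, EMR steps, Redshift COPY commands, and so on).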
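Likewise, a minimal infrastructure-as-code sketch using the AWS CDK for Python, one common way to apply the infrastructure-as-code practices mentioned above. The stack name, bucket id, and lifecycle policy are illustrative assumptions, not a prescribed setup:

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataLakeStack(Stack):
    """Hypothetical stack defining a raw-data landing bucket."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned landing bucket; the lifecycle rule moves aging objects
        # to a cheaper storage class to control cost.
        s3.Bucket(
            self,
            "RawDataBucket",
            versioned=True,
            lifecycle_rules=[
                s3.LifecycleRule(
                    transitions=[
                        s3.Transition(
                            storage_class=s3.StorageClass.INTELLIGENT_TIERING,
                            transition_after=Duration.days(30),
                        )
                    ]
                )
            ],
        )


app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```

Defining resources this way keeps the data platform reviewable and reproducible through the same CI/CD pipelines as application code.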
Basic qualifications
- 3+ years of data engineering experience
- Experience with data modeling, data warehousing, and building ETL pipelines
- Bachelor's degree