As an engineer on the Dublin team, you'll play a key role in building, optimizing, and maintaining our Hadoop-based data warehouse and large-scale data pipelines. This is a hands-on engineering role where you'll collaborate closely with data engineers, analysts, and platform teams to ensure our data platforms are scalable, reliable, and secure.
What you will accomplish
Design, develop, and maintain robust, scalable data pipelines using Hadoop and related ecosystem technologies.
Implement and optimize ETL processes for both batch and streaming data needs across analytics platforms.
Collaborate cross-functionally with analytics, product, and engineering teams to align technical solutions with business priorities.
Ensure data security, reliability, and compliance across the entire infrastructure lifecycle.
Troubleshoot distributed systems and contribute to performance tuning, observability, and operational excellence.
Continuously learn and apply new open-source and cloud-native tools to improve data systems and processes.
What you will bring
6+ years of experience in data engineering, with a strong foundation in distributed data systems.
Proficiency with Apache Kafka, Flink, Hive, Iceberg, and Spark SQL in large-scale environments.
Working knowledge of Apache Airflow for orchestration and workflow management.
Strong programming skills in Python, Java (Spring Boot), and SQL across various platforms (e.g., Oracle, SQL Server).
Experience with CI/CD, monitoring, and cloud-native tools (e.g., Jenkins, GitHub Actions, Docker, Kubernetes, Prometheus, Grafana).
Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
The cool part
Work on one of eBay’s most impactful data infrastructure platforms, supporting global analytics and insights.
Join a collaborative, innovative engineering culture that embraces open-source and continuous learning.
Solve complex, high-scale data challenges that directly shape how eBay makes data-driven decisions.