


Your Day-to-Day
Lead data science initiatives that improve sanctions screening algorithms and optimize detection sensitivity.
Design and implement data-driven solutions that balance detection precision and false positive reduction (a toy threshold-tuning sketch follows this list).
Collaborate with advisory, product, and engineering teams to design frameworks supporting sanctions compliance.
Translate regulatory requirements into actionable analytical frameworks and technical solutions.
Modernize screening platforms using big data and ML-driven approaches to increase scalability and speed.
Conduct advanced analytics and identify workflow inefficiencies, recommending actionable improvements.
Collaborate with engineering teams to ensure deployment, monitoring, and continuous improvement in production systems.
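The posting does not describe the screening system itself, so here is a minimal, hypothetical sketch of the precision/false-positive trade-off mentioned above: the higher the match threshold, the fewer false positives but the greater the risk of missed hits. Python's standard-library difflib stands in for a production fuzzy matcher; the watchlist, names, and thresholds are all invented.

```python
# Toy name-screening threshold tuner. difflib is only a stand-in for a real
# fuzzy matcher; SANCTIONED, screen(), and all names are hypothetical.
from difflib import SequenceMatcher

SANCTIONED = ["ACME TRADING LLC", "GLOBEX HOLDINGS"]  # invented watchlist

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    name = name.upper()
    scores = ((entry, SequenceMatcher(None, name, entry).ratio()) for entry in SANCTIONED)
    return [(entry, score) for entry, score in scores if score >= threshold]

# Lowering the threshold widens the net: better recall, more false positives.
print(screen("Acme Trading LC", threshold=0.85))  # near-exact variant still hits
print(screen("Acme Trading LC", threshold=0.70))
```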
What You Need to Bring
Bachelor’s degree with a minimum of six years of experience in sanctions transaction screening, profile screening, or AML, including at least three years in data analytics or data science.
Strong understanding of economic sanctions programs administered by OFAC, the EU, the UN, the CSSF, etc., and the associated adjacency risk.
Proficiency in SQL, BigQuery, Python, R, Tableau, and Power BI, with strong analytical and problem-solving skills.
Working knowledge of Hadoop, Hive, Jupyter Notebooks, and data warehouse technologies.
Excellent written and verbal communication skills, with the ability to translate analytical insights into actionable business recommendations.
Proven ability to manage multiple projects in a fast-paced, dynamic environment with independence and initiative.
Preferred
Advanced degree in Data Science, Statistics, or a related field.
Experience deploying and maintaining ML models in production within compliance or regulated environments.
Familiarity with explainable AI, model validation, and governance frameworks (see the sketch below).
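As a rough illustration of the explainable-AI point, this hedged sketch computes permutation importances for a fabricated alert-risk model with scikit-learn; the features, data, and model are invented for the example and reflect no real screening model.

```python
# Permutation importance as a simple explainability check on a toy model.
# Assumes numpy and scikit-learn; feature names and data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # pretend: match_score, country_risk, amount
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["match_score", "country_risk", "amount"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # higher = model leans on this feature more
```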

Design and deliver distributed systems supporting ingestion, streaming, storage, and governance for eBay’s Data Platform (a toy checkpointing sketch follows this list).
Develop services and APIs that power scalable data management and access across multiple clouds.
Contribute to architecture design reviews, ensuring scalability, reliability, and cost efficiency.
Drive operational excellence through observability, automation, and continuous improvement.
Collaborate with analytics, infrastructure, and product teams to align technical delivery with business goals.
Learn and grow in advanced areas such as orchestration, governance, and privacy engineering.
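The posting does not expose any platform internals, so the following is only a dependency-free toy of the checkpointed micro-batch pattern behind ingestion pipelines: write downstream first, then advance a durable offset, which yields at-least-once delivery. Every name here (CHECKPOINT, run_once, sink) is hypothetical.

```python
# At-least-once micro-batch ingestion with a durable offset checkpoint.
# A toy stand-in for a Kafka/Flink-style pipeline; all names are invented.
import json
import os

CHECKPOINT = "ingest.offset"

def load_offset() -> int:
    if not os.path.exists(CHECKPOINT):
        return 0
    with open(CHECKPOINT) as f:
        return json.load(f)["offset"]

def save_offset(offset: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"offset": offset}, f)

def sink(records: list[dict]) -> None:
    print(f"loaded {len(records)} records")  # pretend downstream write

def run_once(source: list[dict], batch_size: int = 100) -> None:
    offset = load_offset()
    batch = source[offset:offset + batch_size]
    if batch:
        sink(batch)                        # write downstream first...
        save_offset(offset + len(batch))   # ...then advance the checkpoint

run_once([{"id": i} for i in range(250)])  # re-running resumes from the offset
```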
5+ years of experience designing and developing distributed systems or data platforms.
Proficiency in Java or Python, with experience in containerized environments and CI/CD practices.
Hands-on experience with Kafka, Flink, Spark, Delta/Iceberg, and modern data stores (NoSQL or columnar); a PySpark sketch follows this list.
Strong understanding of distributed systems fundamentals — performance, reliability, and fault tolerance.
Proven ability to independently deliver complex projects from design to production.
Bachelor’s or Master’s degree in Computer Science, or equivalent practical experience.
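To make the Spark bullet concrete, here is a minimal PySpark sketch of a batch aggregation. It assumes a local pyspark install; the paths and column names are placeholders, and writing to an Iceberg or Delta table would additionally need catalog configuration not shown here.

```python
# Minimal PySpark batch aggregation; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

events = spark.read.parquet("/data/raw/listing_events")  # hypothetical input
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "category")
    .agg(F.count("*").alias("events"))
)
daily.write.mode("overwrite").parquet("/data/curated/daily_counts")
spark.stop()
```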
Shape the future of eBay’s Core Data Platform powering global analytics, AI, and ML workloads.
Tackle challenging distributed systems problems — scalability, freshness, and multi-cloud reliability.
Join a collaborative, inclusive culture that values curiosity, craftsmanship, and continuous learning.

We would like to invite applications for a permanent Lead Product Development Scientist position at our Waterford site.
Key Responsibilities
Are you…
Do you have…
The internal career site is available from your home network as well. If you have trouble accessing your EC account, please contact your local HR/IT partner.
Teva’s eligibility rules are designed to empower and enable employees to manage their careers internally, with an easy and smooth process to view and apply. To be considered for an open internally posted position, an employee must:
Unless explicitly stated in the job description, no company-sponsored work authorisation or relocation assistance should be assumed.

In this hands-on engineering role on the Dublin team, you’ll play a key part in building, optimizing, and maintaining our Hadoop-based data warehouse and large-scale data pipelines. You’ll collaborate closely with data engineers, analysts, and platform teams to ensure our data platforms are scalable, reliable, and secure.
What you will accomplish
Design, develop, and maintain robust, scalable data pipelines using Hadoop and related ecosystems.
Implement and optimize ETL processes for both batch and streaming data needs across analytics platforms.
Collaborate cross-functionally with analytics, product, and engineering teams to align technical solutions with business priorities.
Ensure data security, reliability, and compliance across the entire infrastructure lifecycle.
Troubleshoot distributed systems and contribute to performance tuning, observability, and operational excellence.
Continuously learn and apply new open-source and cloud-native tools to improve data systems and processes.
What you will bring
6+ years of experience in data engineering, with a strong foundation in distributed data systems.
Proficiency with Apache Kafka, Flink, Hive, Iceberg, and Spark SQL in large-scale environments.
Working knowledge of Apache Airflow for orchestration and workflow management (a minimal DAG sketch follows this list).
Strong programming skills in Python, Java (Spring Boot), and SQL across various platforms (e.g., Oracle, SQL Server).
Experience with CI/CD, monitoring, and cloud-native tools (e.g., Jenkins, GitHub Actions, Docker, Kubernetes, Prometheus, Grafana).
Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
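Since Airflow orchestration is called out above, here is a hedged sketch of a two-task DAG, assuming Airflow 2.4+ (where the `schedule` argument exists); the DAG id, schedule, and task bodies are invented for illustration.

```python
# Two-task Airflow DAG sketch; dag_id, schedule, and callables are invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull from source")    # placeholder extract step

def load() -> None:
    print("write to warehouse")  # placeholder load step

with DAG(
    dag_id="daily_batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",           # Airflow 2.4+ spelling
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load          # extract runs before load
```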
The cool part
Work on one of eBay’s most impactful data infrastructure platforms, supporting global analytics and insights.
Join a collaborative, innovative engineering culture that embraces open-source and continuous learning.
Solve complex, high-scale data challenges that directly shape how eBay makes data-driven decisions.

As a Senior Software Engineer, you will help shape the next generation of our Hadoop-based analytics infrastructure. eBay operates one of the world’s largest Hadoop deployments. You’ll join a team of passionate engineers who thrive on building at scale and contributing to open-source innovation.
Lead the design and development of scalable, secure analytics infrastructure aligned with eBay’s platform vision.
Build and optimize production-grade frameworks and features using Hadoop, Spark, and Iceberg.
Contribute to open-source projects that advance both eBay and the broader data community.
Collaborate across engineering teams to drive innovation, resiliency, and performance at massive scale.
Solve complex system challenges with creativity, data-driven thinking, and technical depth.
7+ years of software engineering experience with proven expertise in Java and distributed systems design.
Strong knowledge of the Hadoop ecosystem, including technologies such as Spark, Iceberg, and YuniKorn.
Deep understanding of computer science fundamentals, performance tuning, and concurrency (see the thread-pool sketch after this list).
Experience working in Linux environments with strong networking and troubleshooting skills.
A collaborative mindset with excellent communication and analytical abilities.
Bachelor’s or Master’s degree in Computer Science, or equivalent experience in the field.
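The concurrency expectation is language-agnostic; as a small Python illustration (the team itself works primarily in Java), this sketch overlaps blocking calls with a thread pool. The fetch function merely simulates I/O, so the whole thing runs offline.

```python
# Overlapping I/O-bound work with a thread pool; fetch() only simulates I/O.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(shard: int) -> str:
    time.sleep(0.1)  # stand-in for a blocking network or disk call
    return f"shard {shard} done"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, s) for s in range(8)]
    for fut in as_completed(futures):  # results arrive as workers finish
        print(fut.result())
```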
Shape the future of eBay’s Hadoop ecosystem and big-data infrastructure.
Work at global scale, driving analytics that power millions of eBay experiences.
Join a culture that encourages open-source contribution, innovation, and collaboration.

Design, develop, and maintain traffic management solutions.
Optimize network protocols, configurations, and Points of Presence (PoP).
Develop and implement advanced caching strategies to improve system efficiency (a cache sketch follows this list).
Develop high-performance applications in C++ and Go.
Deploy and manage scalable systems on Kubernetes.
Implement observability tools and practices to monitor system performance and health.
Collaborate with cross-functional teams to integrate networking solutions.
Monitor and analyze traffic patterns and security threats.
Contribute to and lead open-source projects.
Innovate and develop patented technologies to improve network capabilities.
Stay updated with the latest developments in network technologies, security practices, and software development.
Identify gaps and issues across systems and functional areas, propose solutions, and drive them to resolution.
Champion best practices and advanced concepts and impact the business by delivering solutions that address business needs.
Lead and empower others, taking responsibility for small projects and collaborating across functional teams to influence change.
Actively seek feedback and ways to improve team performance and projects, demonstrating strong communication skills.
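The posting does not describe the actual caching design, so this is only a sketch of one classic policy behind edge/PoP caching: an LRU cache with TTL expiry. The class name, capacity, and TTL are arbitrary.

```python
# LRU cache with TTL expiry; capacity and TTL values are arbitrary choices.
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, capacity: int = 1024, ttl: float = 30.0):
        self.capacity, self.ttl = capacity, ttl
        self._store: "OrderedDict[str, tuple[float, bytes]]" = OrderedDict()

    def get(self, key: str):
        item = self._store.get(key)
        if item is None:
            return None
        ts, value = item
        if time.monotonic() - ts > self.ttl:  # expired: evict and report a miss
            del self._store[key]
            return None
        self._store.move_to_end(key)          # mark as most recently used
        return value

    def put(self, key: str, value: bytes) -> None:
        self._store[key] = (time.monotonic(), value)
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:  # evict the least recently used
            self._store.popitem(last=False)

cache = TTLCache(capacity=2, ttl=1.0)
cache.put("/static/logo.png", b"...")
print(cache.get("/static/logo.png"))  # hit until the TTL elapses
```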
At least 5 years of experience in cloud networking or traffic management.
Strong programming skills in C++ and Go.
Experience with TCP/IP networking.
Familiarity with TCP, SSL, and HTTP protocols.
Expertise in using Kubernetes for orchestrating containerized applications.
Experience with observability tools and practices.
Experience with Envoy for traffic control.
Experience contributing to and leading open-source projects.
Certifications in networking, Kubernetes, cyber security, or related fields.
Experience in a high-traffic, large-scale environment.
Familiarity with additional programming languages (e.g., Java) or frameworks.
Proficiency in Agile development methodologies.
Experience in patent creation and innovation.
Experience in implementing caching strategies and optimizing PoPs.
