

General platform optimizations are usually achieved by developing or using advanced data structures and algorithms, while application-level optimizations are mostly rooted in a deep understanding of information retrieval system design. The core components of Cassini are written in modern C++, and we use parallel and distributed computation to power query serving. If you have a background in those areas and would like to contribute to achieving our mission, please come join us!
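As a rough illustration of the fan-out/merge pattern behind parallel query serving (Cassini's production code is modern C++ and far more sophisticated; this toy Python sketch, with made-up shards, items, and scoring, only shows the shape of the idea):

# Toy sketch of fan-out/merge query serving (illustrative only; shards,
# items, and scoring are hypothetical, and the real system is C++).
import heapq
from concurrent.futures import ThreadPoolExecutor

# Each "shard" is a tiny in-memory slice of the index: item title -> static boost.
SHARDS = [
    {"red shoes": 1.2, "blue shoes": 1.0},
    {"red dress": 1.1, "red running shoes": 0.9},
    {"running shoes": 1.0},
]

def search_shard(shard, query_tokens):
    """Score one shard: naive token overlap weighted by the item's static boost."""
    return [(len(query_tokens & set(title.split())) * boost, title)
            for title, boost in shard.items()]

def serve_query(query, k=3):
    """Fan the query out to every shard in parallel, then merge the per-shard hits."""
    tokens = set(query.lower().split())
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        per_shard = pool.map(lambda shard: search_shard(shard, tokens), SHARDS)
        merged = (hit for hits in per_shard for hit in hits)
        return heapq.nlargest(k, merged)  # global top-k across all shards

if __name__ == "__main__":
    print(serve_query("red shoes"))

Each shard is scored independently and in parallel, and only the small per-shard candidate lists are merged, which is what keeps query latency low as the index grows.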
What you will accomplish:

We are looking for exceptional Engineers who take pride in creating simple solutions to apparently complex problems. Our Engineering tasks typically involve at least one of the following:
Building pipelines that process up to billions of items, frequently applying ML models to these datasets
Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items
Crafting sound API designs and driving integration between our data layers and customer-facing applications and components
Designing and running A/B tests in production experiences to vet and measure the impact of any new or improved functionality (a toy read-out of such a test appears after the lists below)
Design, deliver, and maintain significant features in data pipelines, ML processing, and / or service infrastructure
Optimize software performance to achieve the required throughput and / or latency
Work with your manager, peers, and Product Managers to scope projects and features
Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution
Take active part in knowledge sharing across the organization - both teaching and learning from others
B.Sc. or M.Sc. in Computer Science, or equivalent professional experience
7+ years of software design and development experience, tackling non-trivial problems in backend services and / or data pipelines
Full proficiency in Python; additional hands-on experience with Java is a plus!
Solid foundation in Computer Science with strong proficiencies in Data Structures, Algorithms, Object-Oriented Programming, and Software Design
Experience in designing and operating big data processing pipelines using technologies such as Hadoop, Spark, and Hive
Track record of impactful publications and/or patents in machine learning or related areas.
Contributions to open-source ML tools or frameworks.
Experience with modern large language models, graph-based ML, or knowledge graph construction.
Strong presence in scientific communities through talks, panels, or organizing roles.
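As one made-up illustration of the A/B testing item above: a quick significance read-out of an experiment's conversion lift can be as simple as a two-proportion z-test (all sample numbers below are hypothetical):

# Minimal read-out of an A/B test: two-sided two-proportion z-test on
# conversion counts. All sample numbers here are made up for illustration.
from math import erfc, sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return (absolute lift of B over A, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, erfc(abs(z) / sqrt(2))

lift, p_value = ab_test(conv_a=4_210, n_a=100_000, conv_b=4_420, n_b=100_000)
print(f"lift={lift:.4%}  p={p_value:.4f}")  # ship only if the lift is both real and meaningful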

Drive the search monetization technical vision by incorporating and developing software engineering processes and standards to enhance eBay’s buying experience.
Collaborate with scientists and product managers to deploy complex yet scalable core algorithmic logic.
Create robust data pipelines and real-time monitoring and optimization algorithms.
4+ years of software design and development experience, with a solid foundation in computer science and strong proficiencies in data structures, functional programming, algorithms, object-oriented programming, and software design
Experience in designing and operating big data processing pipelines using technologies such as Hadoop, Spark, Hive, and ETL tooling
3+ years of software development experience building large-scale web services and backend applications using Java, C++, Scala, and related technologies
Background or interest in mathematics or machine learning
Excellent verbal and written communication, collaboration, and influencing skills
Bachelor's degree in computer science/engineering or equivalent professional experience, with 5+ years of experience, or
Master's degree in computer science/engineering or equivalent professional experience, with 3+ years of experience.

Key Responsibilities
Required Qualifications
All your information will be kept confidential according to EEO guidelines.

Job Summary
As a Data Engineer / Integration Specialist within the EY SAP Enterprise Data Management Initiative run by SAP Platforms & Assets, you will be responsible for designing, building, and optimizing scalable data pipelines and system integrations for data management and transformation projects.
You will be part of a global team creating an advanced, cloud-enabled data platform that uses technologies from SAP, Databricks, Snowflake, NVIDIA and Microsoft. Your focus will be to enable seamless data movement, transformation, and integration between SAP systems and modern data platforms, ensuring data availability and quality across multiple environments.
This role requires a hands-on, technically proficient individual with deep experience in both SAP integration and modern cloud-native data engineering.
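As a purely hypothetical sketch of the shape of that work (the real platform relies on SAP, Databricks, Snowflake, and Microsoft tooling and their own connectors, none of which are shown here; file names, columns, and checks are invented), one pipeline step might load an extract, normalize it, and gate it on basic data-quality checks:

# Hypothetical pipeline step: load an extract, normalize it, and gate on basic
# data-quality checks before handing it to the next system. File names, columns,
# and thresholds are invented; SAP/Databricks/Snowflake connectors are not shown.
import pandas as pd

def load_extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path, dtype={"material_id": str})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="material_id")
    df["net_weight_kg"] = df["net_weight_g"] / 1000.0  # normalize units
    return df[["material_id", "plant", "net_weight_kg"]]

def quality_check(df: pd.DataFrame) -> None:
    """Fail fast if key fields are missing or the extract is empty."""
    assert len(df) > 0, "empty extract"
    assert df["material_id"].notna().all(), "null material_id found"

if __name__ == "__main__":
    frame = transform(load_extract("material_master_extract.csv"))
    quality_check(frame)
    frame.to_parquet("material_master_clean.parquet")  # needs pyarrow installed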
Essential Functions of the Job
Knowledge and Skills Requirements
Other Requirements
Job Requirements
What we offer you
We'll develop you with future-focused skills and equip you with world-class experiences.
To help create an equitable

Being the cybersecurity partner of choice, protecting our digital way of life.
Your Impact
Your Experience
All your information will be kept confidential according to EEO guidelines.
