This role is focused on Data Platform Engineering — not data engineering. While familiarity with Spark, Flink, and other tools in the Hadoop ecosystem is valuable, your primary responsibility will be building and evolving the platform itself, not just authoring data pipelines.
What you’ll do and learn
Own and deliver major components of eBay’s Data Platforms — from design through production rollout.
Design and evolve distributed systems powering ingestion, streaming, lakehouse/warehouse, catalog, and governance.
Contribute to long-term architecture through design reviews and authoring architecture design documents, ensuring scalability and resilience.
Build systems that balance latency, correctness, and cost while ensuring security and compliance.
Drive operational excellence for services you own, including observability and incident response.
Collaborate across product, infra, and analytics teams to align execution with business needs.
Learn and grow in areas like governance, orchestration, and privacy engineering.
Experience designing large-scale distributed systems (compute, storage, APIs, streaming).
Ability to independently deliver complex projects from requirements to production.
Systems thinker who anticipates bottlenecks, schema evolution, and reliability issues.
Strong communication skills to influence cross-team technical outcomes.
Growth mindset with curiosity to learn new technologies.
Impact at scale: powering global analytics and ML systems.
Challenging problems: streaming, freshness/correctness, and multi-cloud resiliency.
Collaborative culture that values inclusion and knowledge sharing.
Support & growth: flexibility, benefits, and career development resources.
Focus on reliability and sustainable on-call practices.
8+ years of distributed systems or data platform experience.
Proven ability to design and deliver critical systems with impact.
Proficiency in Java/Python, CI/CD, and containerized environments.
Hands-on expertise in tools like Kafka/Flink, Spark, Delta/Iceberg, Kubernetes, NoSQL/columnar stores.
Experience in streaming and batch data platforms.
Strong foundation in algorithms and distributed design.
BS/MS in CS or equivalent experience.
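The requirement above to anticipate schema evolution can be made concrete with a small sketch. This is not eBay's platform code; the schema representation and the compatibility rule below are illustrative assumptions, loosely modeled on Avro-style backward-compatibility checks.

```python
# Illustrative sketch (assumption: Avro-style rules, not eBay's actual platform).
# A schema maps field name -> {"type": ..., "default": ... (optional)}.
# Evolution is treated as safe here if every field the new schema drops had a
# default, and every field it adds has a default, so readers of either version
# can still fill in missing values.

def is_backward_compatible(old: dict, new: dict) -> bool:
    for name, spec in old.items():
        if name not in new and "default" not in spec:
            return False  # required field removed: old data becomes unreadable
    for name, spec in new.items():
        if name not in old and "default" not in spec:
            return False  # new required field: existing records have no value
    return True
```

For example, adding an optional `region` field with a default passes the check, while dropping a required `id` field fails it.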
The Impact You Will Make Here
The Senior Software Engineer is responsible for coordinating the design, development, and implementation of software solutions. This role combines technical expertise with leadership skills to guide a team of developers, ensure outstanding deliverables, and drive the successful completion of projects. Candidates must have in-depth software development expertise, proven ability to deliver complex solutions, hands-on experience with Java and GCP, and capability to lead a small engineering team autonomously.
Architect, design, develop and test complex, multi-tier distributed Warehouse Management System software systems
Lead architecture discussions, develop well-documented design docs, and facilitate development and code reviews ensuring alignment with industry standards
Work with product managers, collaborators, and cross-functional teams to build software solutions that fulfill functional and non-functional needs, and establish project plans and deliverables
Estimate engineering effort, plan execution cycles, and roll out system changes
Identify and address performance bottlenecks in software systems & ensure systems are secure, scalable, and maintainable
Write Unit and Integration tests and ensure software developed meets high quality standards
Stay updated on emerging technologies and integrate them into development processes whenever applicable
Function as a team leader utilizing communication, leadership, and problem-solving skills
What You Bring to the Team
Bachelor’s degree in Computer Science or related field plus 8+ years of experience or Master’s degree in Computer Science or related field plus 7+ years of hands-on experience in building large-scale distributed systems
Strong expertise in front-end technologies such as HTML, CSS, JavaScript, and React
Strong expertise in designing and developing REST API
Extensive hands-on experience and expertise in object-oriented design methodology and application development using Java/J2EE and Kotlin, including frameworks such as Spring Boot
Hands-on experience with Google Cloud Platform (GCP), particularly in Google Cloud Run and deployment pipelines
Deep understanding of SQL Databases, with an emphasis on Postgres. Familiarity with tuning systems, architecture, thread management, and problem analysis
Expertise with Terraform deployments
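As a toy illustration of the REST design work this role calls for, the sketch below shows resource routing and status-code selection for a hypothetical GET /items/<sku> endpoint. It is framework-free Python for brevity; the posting's actual stack is Java/Kotlin with Spring Boot on GCP, and `ITEMS` is an invented in-memory stand-in for a real data store.

```python
# Hypothetical sketch of REST resource handling (routing + status codes) in
# plain Python; not this team's codebase. ITEMS is an invented in-memory store.

ITEMS = {"sku-1": {"qty": 12}}

def handle_get(path: str):
    """Return an (http_status, body) pair for GET /items/<sku>."""
    parts = [p for p in path.split("/") if p]
    if len(parts) != 2 or parts[0] != "items":
        return 400, {"error": "bad request"}   # unknown resource shape
    sku = parts[1]
    if sku not in ITEMS:
        return 404, {"error": "not found"}     # valid route, missing resource
    return 200, ITEMS[sku]
```

The design point is the usual REST one: the URL names the resource, and the status code (200/404/400) carries the outcome, independent of the response body.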
Job Title: MTS 1, Software Engineer
Start Date: January 19, 2026
days per annum, 5 sick days per annum
highly available
deployment
Identify project technical risks and make recommendations to mitigate them
Apply deep, proven technical experience with company systems, which could cover applications, services, systems, or frameworks
Demonstrate a high level of initiative and attention to detail during daily operations
Collaborate effectively with other engineers, product managers, designers, and QA engineers
Encourage peers with high-quality, hands-on technical contributions
methodology, design, and best practices.
Consistently produce high-quality software with a focus on unit testing, regular code reviews, and continuous integration.
Uphold high standards in quality and operational excellence.
Develop comprehensive technical documentation and presentations to clearly communicate architectural decisions and design options. Ensure documentation aligns with project scopes, milestones, and deliverables.
Effectively delegate tasks and responsibilities within the team, considering individual skills and workload
Lead the development of prototypes and proof-of-concept implementations for new technologies or approaches.
Required Experience
The language of work is English.
Bachelor of Engineering degree plus 8 or more years of experience; or MS in Computer Science plus 6 or more years of hands-on experience in developing highly scalable distributed platforms, services, and internet-scale web applications
Highly experienced in application development in Java and Kotlin and related frameworks such as Spring, Spring Boot, and Hibernate, and in stream processing platforms such as Kafka and Flink;
Experienced in the Oracle ADF 12c Framework, JavaScript, and HTML;
Experienced in J2EE, SOAP, SOA Services, Design Patterns, OOA/D, Data Structures, XML, REST, JSON, and Internet Protocols;
Experienced in a programming language such as Scala, with a solid base in data structures and algorithms and a strong understanding of multithreading, synchronization, and concurrent programming; deep architectural understanding of system design and leadership;
Experienced with NoSQL data technologies such as MongoDB and ElasticSearch and their related toolsets;
Experienced in Spring Boot and the Hadoop framework;
Experienced in retail and logistics;
Ability to troubleshoot performance bottlenecks;
Experience participating in design and code reviews, coding, and unit testing of fault-tolerant applications; comfortable with all layers of multi-tier applications to craft complete solutions and maintain products
Solid understanding of computer science fundamentals. Experience in non-functional areas such as security, load and performance testing, accessibility, site-speed optimization, and cross-browser/cross-platform UX design
Excellent verbal and written communication, leadership, and collaboration skills
What you’ll be doing:
Contribute features to vLLM that empower the newest models with the latest NVIDIA GPU hardware features; profile and optimize the inference framework (vLLM) with methods like speculative decoding, data/tensor/expert/pipeline parallelism, and prefill-decode disaggregation.
Develop, optimize, and benchmark GPU kernels (hand-tuned and compiler-generated) using techniques such as fusion, autotuning, and memory/layout optimization; build and extend high-level DSLs and compiler infrastructure to boost kernel developer productivity while approaching peak hardware utilization.
Define and build inference benchmarking methodologies and tools; contribute both new benchmarks and NVIDIA’s submissions to the industry-leading MLPerf Inference benchmarking suite.
Architect the scheduling and orchestration of containerized large-scale inference deployments on GPU clusters across clouds.
Conduct and publish original research that pushes the Pareto frontier of the ML Systems field; survey recent publications and find ways to integrate research ideas and prototypes into NVIDIA’s software products.
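Speculative decoding, named in the responsibilities above, can be sketched at the algorithmic level. The toy below is not vLLM's implementation: the models are stand-in greedy functions from a token context to the next token, and verification keeps the longest proposed prefix the target agrees with. (A real engine verifies all k proposals in one batched forward pass, which is where the speedup comes from.)

```python
# Toy sketch of speculative decoding's propose/verify loop (assumption: greedy
# decoding with deterministic stand-in models; not vLLM's actual code).

def speculative_step(target, draft, prefix, k=4):
    # 1) The cheap draft model proposes k tokens autoregressively.
    proposed, ctx = [], list(prefix)
    for _ in range(k):
        t = draft(ctx)
        proposed.append(t)
        ctx.append(t)
    # 2) The target model verifies: keep the longest agreeing prefix.
    accepted, ctx = [], list(prefix)
    for t in proposed:
        if target(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break  # first mismatch: discard the rest of the draft
    # 3) The target always contributes one guaranteed-correct token.
    accepted.append(target(ctx))
    return accepted
```

With identical draft and target models every proposal is accepted, so each step emits k+1 tokens; a mismatching draft degrades gracefully to one token per step.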
What we need to see:
Bachelor’s degree (or equivalent experience) in Computer Science (CS), Computer Engineering (CE), or Software Engineering (SE) with 7+ years of experience; alternatively, a Master’s degree in CS/CE/SE with 5+ years of experience; or a PhD with a thesis and top-tier publications in ML Systems, GPU architecture, or high-performance computing.
Strong programming skills in Python and C/C++; experience with Go or Rust is a plus; solid CS fundamentals: algorithms & data structures, operating systems, computer architecture, parallel programming, distributed systems, deep learning theories.
Knowledgeable and passionate about performance engineering in ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).
Familiarity with GPU programming and performance: CUDA, memory hierarchy, streams, NCCL; proficiency with profiling/debug tools (e.g., Nsight Systems/Compute).
Experience with containers and orchestration (Docker, Kubernetes, Slurm); familiarity with Linux namespaces and cgroups.
Excellent debugging, problem-solving, and communication skills; ability to excel in a fast-paced, multi-functional setting.
Ways to stand out from the crowd
Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).
Hands-on work with ML compilers and DSLs (e.g., Triton, TorchDynamo/Inductor, MLIR/LLVM, XLA), GPU libraries (e.g., CUTLASS), and features (e.g., CUDA Graphs, Tensor Cores).
Experience contributing to containerization/virtualization technologies such as containerd/CRI-O/CRIU.
Experience with cloud platforms (AWS/GCP/Azure), infrastructure as code, CI/CD, and production observability.
Contributions to open-source projects and/or publications; please include links to GitHub pull requests, published papers and artifacts.
You will also be eligible for equity.
Job description:
We are looking for an Administrative Support Intern to join our Payroll and Global Business Services (GBS) teams, supporting activities across the EMEA region. This is an ideal role for someone who enjoys working with data, systems, and different departments, keeping business processes running smoothly. You will coordinate information, maintain data consistency, and support the digital tools that streamline the organization's work.
Your responsibilities:
· Supporting GBS and Payroll administrative tasks across the EMEA region.
· Maintaining and updating internal content (e.g., FAQs) on the company portal.
· Preparing and delivering reports to internal teams and stakeholders.
· Organizing and reconciling data for reporting and operations.
· Verifying and checking employee data in company systems.
· Supporting testing and documentation within process automation initiatives.
We are looking for someone who:
· Is pursuing or has completed a bachelor's degree in administration, languages, or a related field.
· Is fluent in English and German (min. B2+). Knowledge of other languages is a plus.
· Has strong organizational and administrative skills.
· Knows the Microsoft Office suite well and learns new systems quickly.
· Is meticulous, analytical, and a capable problem solver.
· Can work both independently and in a diverse, international environment.
· Manages time and priorities well.
We offer:
· An internship contract of up to 12 months with flexible hours (30–40 hrs/week).
· A hybrid model (3 days in the office).
· An international work environment and supportive colleagues.
· The opportunity to develop administrative and coordination skills in a global organization.
· A modern office in the center of Warsaw.
Location: Warsaw (Hybrid)
Start: December/January 2025
This role can be based out of our Toronto office or remotely in the Ontario region.
Our ideal candidate will have