We are looking for exceptional Engineers, who take pride in creating simple solutions to apparently-complex problems. Our Engineering tasks typically involve at least one of the following:
Design, implement, and maintain the linking graph database that tracks relationships between all SEO-managed pages
Optimize graph schema and data models for high performance on complex queries and large-scale datasets
Develop automated pipelines to ingest, normalize, and update page metadata and link relationships from multiple data sources
Build and maintain RESTful APIs or microservices that enable internal tools and dashboards to query the linking graph efficiently
Implement scalable indexing, caching, and partitioning strategies to support rapid traversal and analytics workloads
Write clean, well-documented code for ETL processes, data validations, and custom data processing tasks
Stay current with best practices and emerging tools in graph database technology (e.g., Neo4j, Amazon Neptune), recommending upgrades or migrations as appropriate
Architect and maintain scalable graph-database infrastructure to support dataset growth
Define and oversee health monitoring, maintenance routines, and troubleshooting workflows for high availability
Establish and enforce data-integrity frameworks—indexing, partitioning, and validation—to ensure accurate link relationships
Drive capacity planning and forecasting to guarantee seamless infrastructure scaling as usage increases
Lead software modernization efforts, including upgrades, migrations, and architectural enhancements, to sustain performance
Mentor and onboard junior engineers, sharing best practices for backend development, data modeling, and SEO-focused engineering considerations
Align engineering deliverables with cross-functional SEO initiatives, ensuring technical work supports strategic business goals and performance KPIs
Passion and commitment to technical excellence, with a focus on graph database solutions
B.Sc. or M.Sc. in Computer Science (or equivalent practical experience)
7+ years of software design and development experience, solving complex problems in backend services—preferably involving graph data models and pipelines
Strong fundamentals in Data Structures, Algorithms, Object-Oriented Programming, and Software Design, applied to graph-based architectures
Production-grade coding expertise in Java and Python/Scala, including integration with graph databases (e.g., Neo4j, Amazon Neptune)
Strong mentorship and collaboration skills, with the ability to influence technical direction and foster cross-functional alignment
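To make the traversal bullets above concrete, here is a minimal, purely illustrative sketch of bounded-depth link traversal over a linking graph. It uses plain in-memory Python adjacency sets rather than Neo4j or Neptune, and the page paths are invented:

```python
from collections import defaultdict, deque

class LinkingGraph:
    """Toy adjacency-list model of an SEO linking graph (illustrative only)."""

    def __init__(self):
        self.edges = defaultdict(set)  # page -> set of pages it links to

    def add_link(self, source, target):
        self.edges[source].add(target)

    def reachable_within(self, start, max_hops):
        """Return all pages reachable from `start` in at most `max_hops` link hops (BFS)."""
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier:
            page, depth = frontier.popleft()
            if depth == max_hops:
                continue  # do not expand beyond the hop budget
            for neighbor in self.edges[page]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
        return seen - {start}

g = LinkingGraph()
g.add_link("/home", "/shoes")
g.add_link("/shoes", "/running-shoes")
g.add_link("/running-shoes", "/trail-shoes")
print(sorted(g.reachable_within("/home", 2)))  # pages within 2 hops of /home
```

In a real deployment the same bounded traversal would be expressed as a graph query (e.g., a variable-length path match) and served behind the APIs described above.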
What you will accomplish :
Design, deliver, and optimize high-performance, large-scale applications, data pipelines, and ML service infrastructure with exceptional speed and reliability.
Enhance core products by skillfully integrating, fine-tuning, and deploying advanced machine learning models for optimal performance and impact.
Command the full software development lifecycle—from initial design and architecture to coding, testing, deployment, and maintenance—while producing clean, efficient, and documented code.
Develop and execute sound technical strategies for complex projects, taking into account business goals, timelines, and long-term impact.
Work closely with product managers and partners to translate business requirements into robust technical solutions, ensuring alignment across teams.
Take ownership of cross-team engineering efforts and guide junior team members, setting a high standard for technical excellence and professional growth.
Drive innovation by developing novel solutions to challenging problems and actively contribute to a culture of knowledge sharing by both teaching and learning from others.
What you will bring :
Master's degree in Computer Science or a related field with 7+ years of experience (or a BS/BA with 8+ years) building large-scale distributed applications and backend services.
A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design/architecture, and core Statistics knowledge.
Experience examining data closely, computing statistics, and deriving data insights.
Proven experience designing and operating Big Data processing pipelines (Hadoop, Spark) and working with NoSQL databases or key-value stores (e.g., MongoDB, Redis).
Hands-on experience with the end-to-end lifecycle of machine learning, including model deployment and application at scale. Experience in AI research or industrial recommendation systems is a significant plus.
Experience with cloud services and familiarity with Large Language Models (LLMs) or prompt engineering is highly desirable.
A passion for technical excellence, excellent communication skills, and a "can-do" attitude with a willingness to learn and master new technologies.
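As a small illustration of the statistics and data-insight bullets above, a hedged sketch using only Python's standard `statistics` module; the click-through rates are invented sample data:

```python
import statistics

# Hypothetical daily click-through rates from an A/B experiment (invented data)
control = [0.031, 0.029, 0.033, 0.030, 0.032]
variant = [0.036, 0.034, 0.038, 0.035, 0.037]

def summarize(samples):
    """Basic descriptive statistics for one arm of the experiment."""
    return {"mean": statistics.mean(samples), "stdev": statistics.stdev(samples)}

# Relative lift of the variant's mean over the control's mean
lift = statistics.mean(variant) / statistics.mean(control) - 1
print(summarize(control), summarize(variant), f"lift={lift:.1%}")
```

Real analyses would of course add significance testing and far larger samples; the point is only the shape of the work: inspect the data, compute statistics, derive an insight.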
Our platform powers thousands of applications and supports 3,000+ developers every day. You’ll build paved‑path tooling, automate at scale, and adopt modern cloud infrastructure so teams can develop, test, and ship high‑quality, secure, performant software—rapidly and reliably.
What you’ll do
Own the Software Build, Test platform and frameworks. Design and evolve fast, reproducible, and cache-efficient builds; reduce CI build times with creative solutions; improve correctness and remove flakiness in infrastructure; maintain scalable artifact storage and dependency management.
Platform modernization. Our platform is powered by Jenkins, with Maven for Java builds and npm for Node.js builds. You’ll evaluate state-of-the-art CI platforms (e.g., Tekton) and pave the way for modernizing our Build and Test stack.
Spin up ephemeral, production-like test environments. Standardize on sandboxed, on-demand environments (e.g., per-PR) to enable reliable integration/e2e testing and preview deployments.
Harden & simplify deployments. Advance our CD/GitOps workflows (progressive delivery, automated rollbacks, canaries), with golden paths and strong guardrails.
Build the internal developer portal. Curate paved roads for services, data, and infra via templates, scorecards, and software catalogs to improve discoverability and self-service.
Introduce AI-assisted engineering.
Ship secure, private AI copilots for code authoring, refactoring, and code review.
Use LLMs for test generation, flaky-test triage, log summarization, debug suggestions, error classification, selective test execution, and AIOps across CI/CD.
Build evaluation harnesses, prompt libraries, RAG over internal docs, and policy controls for IP, PII, and secrets.
Champion reliability, security & compliance. Bake in supply-chain security (SBOMs, provenance, signing), policy-as-code, and infra guardrails; patch CVEs in accordance with policies.
Instrument, measure, improve. Track DORA and DevEx metrics (lead time, deployment frequency, change-failure rate, MTTR) and drive continuous improvement via experiments.
Partner widely. Work with product teams, Cloud, Frameworks, Security, and Data/ML to understand friction points and design paved-road solutions that scale.
Collaborate across teams and regions.
Use data-driven analysis to cut average CI build times by 40% via incremental builds, dependency management, and smarter caching; bring down the slowest test suites with parallelization, selective test execution, profiling, and flake-busting.
Launch ephemeral “PR environments” with seeded data and synthetic traffic; integrate with feature flags for safe, progressive rollouts.
Stand up an internal developer portal (service templates, scorecards, docs search) and migrate golden paths there.
Deliver an AI DevEx toolkit: repo-aware chat, code-review assistant, flaky-test explainer, and CI log summarizer—with evaluation dashboards and privacy controls.
Pilot remote dev pods
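The DORA metrics mentioned above can be computed from plain deployment records. A minimal sketch with invented data and deliberately simplified definitions (MTTR here is the median restore time of failed deployments):

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: (deployed_at, failed, restore_minutes)
deployments = [
    (datetime(2024, 5, 1), False, 0),
    (datetime(2024, 5, 2), True, 45),
    (datetime(2024, 5, 3), False, 0),
    (datetime(2024, 5, 5), False, 0),
    (datetime(2024, 5, 6), True, 30),
]

def dora_metrics(records, window_days):
    """Deployment frequency, change-failure rate, and MTTR over a window."""
    failures = [r for r in records if r[1]]
    return {
        "deploys_per_week": len(records) * 7 / window_days,
        "change_failure_rate": len(failures) / len(records),
        "mttr_minutes": median(r[2] for r in failures) if failures else 0.0,
    }

print(dora_metrics(deployments, window_days=7))
```

A real pipeline would source these records from CI/CD events and incident tooling rather than a hand-written list, but the metric definitions stay this simple.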
Must-haves
7+ years building platforms/tools for large engineering orgs; deep expertise in one or more of: build systems (preferably for Java and Node.js stacks), CI orchestration (preferably Jenkins), test infra, deployment/CD pipelines, or Internal Developer Portals.
Self-starter with a proactive mindset and strong sense of ownership.
Proven ability to manage communications effectively with partner teams across global regions, with hands-on experience working in private cloud environments.
Skilled at designing robust solutions while proactively anticipating potential issues to ensure reliability and efficiency.
Strong systems design for high-scale developer workflows (monorepos/multirepos, artifact caching, remote execution, hermetic builds).
Experienced in providing timely solutions to developer challenges in Jenkins environments, ensuring smooth CI/CD workflows.
DevOps fundamentals: Containers, Kubernetes, service mesh, IaC (Terraform), GitOps, observability (metrics/logs/traces), SLOs/SLIs.
Practical AI skills & design patterns: Prompt engineering, hands-on RAG, LLM evaluations, API orchestration, privacy/guardrails; ability to ship AI-backed tools that measurably reduce toil.
System design & design patterns: strong grasp of distributed systems, API design, resiliency, and object-oriented/functional patterns; ability to create clear, scalable architectures and ADRs.
Agentic/MCP architectures: practical experience designing agent loops (planner/executor/critic), tool abstractions, memory, and MCP-style tool/resource servers for enterprise integration.
Proficiency in at least two languages (e.g., Java, Kotlin, Python, Go). Full-stack development experience is a plus.
Fluency with Linux/Ubuntu command-line tools to get to the bottom of system-level issues.
Database literacy: working knowledge of NoSQL and modern relational databases.
Observability dashboards: ability to build with Prometheus, Grafana, ELK, Splunk, New Relic, or Nagios.
Performance sleuthing: able to diagnose system and web-service performance issues end-to-end.
Nice-to-haves
Experience with modern CI/CD platforms (e.g., Tekton/Spinnaker/Argo CD/Flux) and progressive delivery.
Prior work with Backstage or other IDPs; plugin development and service catalog design.
Background in developer analytics and productivity research; familiarity with DORA, DX frameworks
Experience with remote dev environments (e.g., DevPods/Codespaces-style) at scale.
Experience with microservices architecture and related DevOps practices.
Lead time for changes trends down; deployment frequency trends up without increasing risk.
CI stability & speed improve (p95 build/test time, flake rate, queueing).
Change failure rate & MTTR drop via safer releases and better rollback automation.
Developer NPS/DevEx survey and onboarding time improve; IDP adoption grows. (Benchmarked using DORA-style measures.)
Responsibilities:
Vertex O Series Configuration:
Lead the design and configuration of Vertex O Series for indirect tax calculation including Sales & Use Tax, VAT, GST and other relevant tax types.
Configure Vertex O Series to meet business requirements, including tax payers, taxability rules, rates, exemption certificate management and more.
Develop and implement robust tax rules for various global jurisdictions, understanding their unique indirect tax regulations.
Work closely with finance, product, and engineering teams to translate business needs into effective Vertex O Series solutions.
Vertex Setup & Management:
Provide essential support for the on-premise installation, configuration, and ongoing management of our Vertex O Series application.
Apply monthly Vertex tax content updates (rates, rules, jurisdictions) and perform associated testing to ensure accuracy and compliance.
Troubleshoot and resolve tax calculation errors, data discrepancies, technical and performance issues.
Manage user access, roles, and permissions within the Vertex O Series on-premise environment.
Documentation & Training:
Create and maintain detailed documentation for Vertex O Series configuration changes.
Train internal teams on Vertex O Series functionality and best practices.
Technical Operations & Development:
Set up and manage ETL jobs, and update associated scripts to ensure accurate data flow for reporting.
Develop and deploy backend services using Java, Spring Framework to establish comprehensive auditing and data mirroring capabilities for data analysis and verification.
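Purely to illustrate the shape of jurisdiction-driven rate lookup described above; this is not Vertex's API, and the jurisdictions, rates, and function names below are invented for the sketch:

```python
# Illustrative-only sketch; real Vertex O Series configuration is far richer
# (taxability drivers, exemption certificates, sourcing rules, content updates).
RATES = {  # hypothetical (country, region) -> combined rate; not real tax content
    ("US", "CA"): 0.0725,
    ("US", "NY"): 0.04,
    ("DE", None): 0.19,  # VAT-style country-level rate
}

def calculate_tax(amount, country, region=None, exempt=False):
    """Return tax for an amount, falling back from region to country level."""
    if exempt:
        return 0.0
    rate = RATES.get((country, region)) or RATES.get((country, None))
    if rate is None:
        raise LookupError(f"no rate configured for {country}/{region}")
    return round(amount * rate, 2)

print(calculate_tax(100.0, "US", "CA"))  # 7.25
print(calculate_tax(100.0, "DE"))        # 19.0
```

The engine's job, as in the responsibilities above, is keeping tables like this accurate per jurisdiction and validating them after each monthly content update.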
What you will bring:
Bachelor's degree in Computer Science or a closely related field with 8 years of relevant experience, or an MS with 6+ years of experience
Specialization in tax technology, applied to innovating and improving the eBay Payments experience
Expertise in Tax software solutions like Vertex or similar.
Experience with SQL and relational databases (e.g., MSSQL, PostgreSQL) as well as NoSQL databases (e.g., DynamoDB)
Unit testing with mocks (Jest or Jasmine preferred); automation testing is a plus.
Understanding of how to create modular and extensible APIs.
Proficient at using appropriate security, documentation, and/or monitoring best practices.
Familiar with Agile/Scrum methodologies
Experience in fixing accessibility issues is a plus.
1+ years of experience in applying AI to practical and comprehensive technology solutions [Nice to Have]
Responsibilities
Architecture & Development: Design, develop, and maintain scalable backend services and front-end applications that power eBay’s next-generation systems.
System Design: Drive architectural decisions, ensuring system scalability, security, and fault tolerance.
Full-Stack Ownership: Work across the stack—developing RESTful APIs, microservices, databases, and intuitive front-end interfaces.
Collaboration: Partner with product managers, architects, and cross-functional engineering teams to deliver end-to-end solutions.
Best Practices: Champion coding standards, secure development practices, and CI/CD automation.
Mentorship: Guide junior engineers through code reviews, design discussions, and knowledge-sharing sessions.
Innovation:
Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Experience: 8–10 years of professional software engineering experience with demonstrated expertise in full-stack development.
Technical Skills:
Backend: Expertise in Node.js, Java, or similar backend frameworks. Strong experience with microservices, RESTful APIs, event-driven architectures, and the OpenAPI Specification for API design and documentation.
Frontend: Proficiency with React.js (or similar modern frameworks) and strong UI/UX sensibilities.
Databases: Deep knowledge of SQL and NoSQL databases.
Cloud & DevOps: Familiarity with AWS, GCP, Azure, or private cloud, as well as containerization (Docker, Kubernetes) and CI/CD pipelines.
Security & Performance: Strong grasp of secure coding practices, scalability, and performance optimization.
Version Control: Proficiency with Git and collaborative workflows.
Soft Skills: Strong problem-solving and analytical skills.
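One way to picture the event-driven architectures listed above: a minimal in-process publish/subscribe sketch in Python. Topic names and payloads are invented; a production system would use a broker such as Kafka rather than in-process dispatch:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub bus illustrating event-driven decoupling."""

    def __init__(self):
        self._handlers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Each subscriber reacts independently; the publisher knows none of them.
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(("audit", e["id"])))
bus.subscribe("order.created", lambda e: audit_log.append(("email", e["id"])))
bus.publish("order.created", {"id": 42})
print(audit_log)  # both subscribers react to a single published event
```

The design point is the decoupling: new consumers (auditing, email, analytics) attach to a topic without the producer changing at all.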
Preferred Skills
Familiarity with traffic management UI applications.
Contributions to open-source projects.
Experience in Agile methodologies.
Benefits
Competitive salary and benefits.
Professional growth and advancement opportunities.
Access to the latest tools and technologies.
We’re seeking a Member of Technical Staff 1 (Software Engineer) who can work independently and is an expert in distributed systems. You’ll design and deliver well-scoped services and features that advance eBay’s Core Data Platform—improving scalability, reliability, and developer experience. This role is Data Platform Engineering (not data engineering): you’ll build and evolve the platform itself rather than author application pipelines.
Independently design, implement, and ship distributed services and features end-to-end (design → code → tests → deploy → operate).
Build core platform capabilities across ingestion, streaming, lakehouse/warehouse, catalog, and governance.
Write production-grade code with strong observability (metrics, logs, traces) and SLOs, and participate in on-call for the services you own.
Diagnose and resolve performance, scalability, and correctness issues in distributed environments.
Contribute design docs for your areas; participate in reviews to uphold reliability, security, and cost best practices.
Collaborate with product, infra, and analytics teams to align technical work with business outcomes.
6+ years of professional software engineering experience (or equivalent impact).
Expertise in distributed systems fundamentals (consensus, replication, partitioning, consistency models, fault tolerance) and practical experience building and running such systems in production.
Strong coding skills in Java/Python and familiarity with CI/CD.
Hands-on with some of: Kafka/Flink, Spark, Delta/Iceberg, Kubernetes, NoSQL/columnar stores.
Proven ability to work independently, make sound tradeoffs, and deliver quality outcomes with minimal supervision.
Solid debugging, performance analysis, and system design skills.
Experience with multi-tenant platform services, data governance, or privacy-by-design controls.
Contributions to open-source distributed systems or data platforms.
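As a sketch of one distributed-systems fundamental named above (partitioning): a consistent-hash ring with virtual nodes, in plain Python. Node names and the vnode count are arbitrary choices for the illustration:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent-hash partitioning with virtual nodes (illustrative sketch)."""

    def __init__(self, nodes, vnodes=100):
        # Place vnodes virtual points per node on the ring, sorted by hash.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Map a key to the first virtual node clockwise of its hash."""
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:123")
print(owner)
```

The property that matters for scaling: when a node is added or removed, only the keys adjacent to its virtual points move, instead of nearly everything as with naive `hash(key) % n` partitioning.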
Ships independent, well-scoped features/services to production with strong reliability.
Demonstrably improves throughput/latency/cost or availability/SLOs on owned services.
Becomes a go-to engineer for distributed-systems debugging and design conversations within the team.
Maintains high code quality, test coverage, and quality-in-release metrics.
Impact at scale: Your platform work powers analytics and ML across a global marketplace.
Hard problems: Streaming freshness/correctness, storage/compute efficiency, multi-region resiliency.
Collaborative culture: Inclusive team that values autonomy, craftsmanship, and knowledge sharing.
Growth: