

What you’ll be doing:
Design, build, and run in-scope cloud infrastructure services to meet our business goals, performing integrations, migrations, bring-ups, updates, and decommissions as necessary.
Participate in the definition of our internal-facing service level objectives and error budgets as part of our overall observability strategy.
Eliminate toil or automate it where the ROI of building and maintaining automation is worth it.
Practice sustainable blameless incident prevention and incident response while being a member of an on-call rotation.
Consult with peer teams and provide consultation on systems design best practices.
Participate in a supportive culture of values-driven introspection, communication, and self-organization.
What we need to see:
Proficiency in one or more of the following programming languages: Python, Go.
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics) or equivalent experience.
5+ years of relevant experience in infrastructure and fleet management engineering.
Experience with infrastructure automation and distributed systems design, including developing tools for running large-scale private or public cloud systems at scales that require fully automated management while under active customer consumption in production.
A track record demonstrating a mix of initiating your own projects, convincing others to collaborate with you, and collaborating well on projects initiated by others.
In-depth knowledge in one or more of the following: Linux, Slurm, Kubernetes, Local and Distributed Storage, and Systems Networking.
Ways to stand out from the crowd:
A systematic problem-solving approach, coupled with clear communication skills and a willingness to take ownership and deliver results, such as experience driving a build/reuse/buy decision.
Experience working with or developing bare metal as a service (BMaaS) systems, for example vending BMaaS, running Slurm on containers, or vending Kubernetes clusters.
Experience working with or developing multi-cloud infrastructure services.
Experience teaching reliability engineering (e.g., SRE) and/or other scale-oriented cloud systems practices to peers and/or other companies (e.g., CRE).
Experience in running private or public cloud systems based on one or more of Kubernetes, OpenStack, Docker, or Slurm.
Experience with accelerated compute and communications technologies such as BlueField networking, InfiniBand topologies, NVMesh, and/or the NVIDIA Collective Communication Library (NCCL).
Experience working with a centralized security organization to prioritize and mitigate security risks. Prior experience in an ML/AI-focused role or team is welcome but not required.
You will also be eligible for equity and benefits.

What you’ll be doing:
Own Initial Power-On and Board Bring-Up: Lead the initial power-on and functional validation of compute trays (CPU, GPU, NIC, storage including NVMe, cooling, etc.) internally and with customers. Ensure all functional requirements are met.
Form and lead a virtual team across NVIDIA software and firmware teams to ensure subject matter experts are available as needed throughout bring-up. Report regularly on bring-up status to provide visibility and ensure teams across the company are fully engaged to help.
Oversee flashing, updating, and validation of firmware for all server components per the defined architecture. Ensure appropriate validation is done for boundary, stress, and regression testing, and confirm that telemetry, logging, and hardware management features work as required. Document pain points, bring-up failures, and recovery flows, and provide actionable feedback to hardware, firmware, and software teams. Ensure usability, firmware/BIOS update coverage, and error reporting for reliable customer installation and operation.
Factory & Manufacturing Support: Support manufacturing flows, firmware updates, and diagnostic procedures. Ensure BOM change signoff and process optimization.
Debug, Issue Resolution & Customer Support: Lead root cause analysis and resolution of bring-up failures. Collaborate with partners, ODMs, and customers for technical support.
Documentation & Knowledge Transfer: Own and maintain platform design guides, bring-up checklists, and install instructions. Provide training and enablement for internal and external teams.
Product Ownership: Drive product life cycles with QA teams, ensuring robust bring up, productization, and delivery.
Performance Management: Conduct performance evaluations, develop a culture of excellence, and ensure high productivity.
What we need to see:
5+ years of relevant experience managing systems/platform software teams, ideally in server bring-up, firmware development, or data center solutions. Deep experience operating successfully in a matrix environment, forming and leading high-impact virtual teams spanning multiple disciplines.
BS, MS, or PhD in EE/CS or related field (or equivalent experience) with 12+ overall years of experience. Strong knowledge of compute tray designs, firmware enablement, and system-level architecture.
Proven track record of delivering scalable server products and solutions for large-scale data centers. Experience collaborating with hardware, firmware, manufacturing, diags, and QA teams.
Experience with SCM (Git, Perforce) and project management tools (Jira).
Excellent written and oral communication skills, strong work ethic, and dedication to teamwork.
Hands-on experience with x86/ARM system architecture and coding (C/C++, Python).
You are a self-starter who loves to find creative solutions to complicated problems.
Proven excellence in server architecture and in collaborating across teams to deliver server products against defined Key Performance Indicators (KPIs).
Ways to stand out from the crowd:
Experience leading bring-up for sophisticated compute architectures like GB200 NVL72.
You will also be eligible for equity and benefits.

As a senior manager in our global IT PMO team, you will be accountable for critical infrastructure programs supporting Compute platforms and IT Automation. As a leader of these initiatives, you will drive the operating model, scaling approach, and playbooks for delivering programs at scale.
What you'll be doing:
Lead, mentor, and develop a diverse team of Technical Program Managers, fostering professional growth and a culture of accountability and innovation.
Serve as a force multiplier across the organization by enabling effective coordination of cross-functional initiatives and managing complex interdependencies.
Promote collaboration in a fast-paced, dynamic environment while guiding teams through uncertainty with clarity, technical insight, and a results-oriented mindset.
Establish and maintain best-in-class program management practices to optimize delivery efficiency, mitigate risk, and ensure consistent execution excellence.
Drive continuous improvement by implementing data-driven feedback mechanisms and leveraging metrics to identify and act on opportunities for optimization.
Own and manage the Infrastructure program portfolio, ensuring alignment with organizational goals, strategic priorities, and resource capacity.
Lead quarterly portfolio planning sessions to align stakeholders on dependencies, risks, and prioritization of initiatives. Deliver clear, data-driven updates to senior stakeholders, tracking progress against key performance indicators and strategic objectives.
Evaluate and prioritize new program requests based on strategic value, business impact, and available capacity. Monitor and assess portfolio performance, identifying improvement areas and providing actionable recommendations to leadership.
What we need to see:
Bachelor's degree in computer science, another related technical field, or equivalent experience.
12+ overall years of IT experience.
7+ years of experience successfully leading technical programs in a fast paced, multifaceted, enterprise environment.
In-depth technical knowledge of IT infrastructure spanning automation, platforms, software development and observability, and compute systems such as OpenShift.
Deep understanding of infrastructure standards and methodologies to optimize for quality and efficiency. Experience with various continuous integration/deployment models for large organizations will be important, as well as foresight into how to adopt and integrate such practices into a very dynamic infrastructure environment.
Certified Scrum Master or Certified Scrum Trainer certification or equivalent preferred.
Proven history of continuous improvement to enable higher-performing programs/organizations and/or teams with improved business and customer outcomes.
Consistent track record of delivering critical infrastructure builds while navigating a fast-paced environment with frequent shifts in priorities.
Effective communication skills, both written and verbal/presentations. Ability to bridge from high-level objectives to project details and vice versa.
Willingness to work with distributed team members across different time zones.
You will also be eligible for equity and benefits.

What you'll be doing:
Building and maintaining, from first principles, the infrastructure needed to deliver TensorRT-LLM.
Maintain CI/CD pipelines to automate the build, test, and deployment process, and improve their bottlenecks. Manage tools and enable automation of repetitive manual workflows via GitHub Actions, GitLab, Terraform, etc.
Enable security scans and the handling of security CVEs for infrastructure components.
Improve the modularity of our build systems using CMake
Use AI to help build automated triaging workflows
Collaborate extensively with cross-functional teams to integrate pipelines from deep learning frameworks and components, ensuring seamless deployment and inference of deep learning models on our platform.
What we need to see:
Master's degree or equivalent experience.
3+ years of experience in Computer Science, computer architecture, or related field
Ability to work in a fast-paced, agile team environment
Excellent Bash and Python programming, CI/CD, and software design skills, including debugging, performance analysis, and test design.
Experience with CMake.
Background in security best practices for releasing libraries.
Experience in administering, monitoring, and deploying systems and services on GitHub and cloud platforms, and in supporting other technical teams by monitoring the platform's operating efficiency and responding as needs arise.
Highly skilled in Kubernetes and Docker/containerd. Automation expert with hands-on skills in frameworks like Ansible and Terraform. Experience in AWS, Azure, or GCP.
Ways to stand out from the crowd:
Experience contributing to a large open-source deep learning community - use of GitHub, bug tracking, branching and merging code, handling OSS licensing issues and patches, etc.
Experience in defining and leading the DevOps strategy (design patterns, reliability and scaling) for a team or organization.
Experience driving efficiencies in software architecture, creating metrics, implementing infrastructure as code and other automation improvements.
Deep understanding of test automation infrastructure, framework and test analysis.
Excellent problem-solving abilities spanning multiple software areas (storage systems, kernels, and containers), as well as experience collaborating within an agile team environment to prioritize deep learning-specific features and capabilities within Triton Inference Server, employing advanced troubleshooting and debugging techniques to resolve complex technical issues.
You will also be eligible for equity and benefits.

What you'll be doing:
Create products to help researchers and production model builders
Develop product strategy, roadmaps, and go-to-market plans
Collaborate with internal and external customers to build product-based roadmaps for training/post training software
Work with leadership to align with and drive company strategy
What we need to see:
Experience with training/post training and optimization software (e.g., PyTorch distributed, torchtitan, VeRL, NeMo Framework, etc.)
Demonstrable knowledge of GenAI or machine learning concepts, particularly around model training, performance optimization, and software development and delivery
Experience with large scale distributed systems
BS or MS degree in Computer Science, Computer Engineering, or similar experience (or equivalent experience)
15+ years of technical product management or similar experience at a technology company
Strong communication and interpersonal skills
Ways to stand out from the crowd:
Experience leading GenAI/RecSys research to production at scale
Working on open-source, GitHub-first developer products with deep customer interactions
Knowledge of GPU architecture, HW/SW co-design, and performance profiling
You will also be eligible for equity and benefits.

You will collaborate closely with researchers to design and scale agents, enabling them to reason, plan, call tools, and code just like human engineers. You will work on building and maintaining the core infrastructure for deploying and running these agents in production, powering all our agentic tools and applications and ensuring their seamless and efficient performance. If you're passionate about the latest research and cutting-edge technologies shaping generative AI, this role and team offer an exciting opportunity to be at the forefront of innovation.
What you'll be doing:
Design, develop, and improve scalable infrastructure to support the next generation of AI applications, including copilots and agentic tools.
Drive improvements in architecture, performance, and reliability, enabling teams to leverage LLMs and advanced agent frameworks at scale.
Collaborate across hardware, software, and research teams, mentoring and supporting peers while encouraging best engineering practices and a culture of technical excellence.
Stay informed of the latest advancements in AI infrastructure and contribute to continuous innovation across the organization.
What we need to see:
Master's or PhD in Computer Science or a related field, or equivalent experience, with a minimum of 5 years in large-scale distributed systems or AI infrastructure.
Advanced expertise in Python (required), strong experience with JavaScript, and deep knowledge of software engineering principles, OOP/functional programming, and writing high-performance, maintainable code.
Demonstrated expertise in crafting scalable microservices, web apps, SQL, and NoSQL databases (especially MongoDB and Redis) in production with containers, Kubernetes, and CI/CD.
Solid experience with distributed messaging systems (e.g., Kafka), and integrating event-driven or decoupled architectures into robust enterprise solutions.
Practical experience integrating and fine-tuning LLMs or agent frameworks (e.g., LangChain, LangGraph, AutoGen, OpenAI Functions, RAG, vector databases, prompt engineering).
Demonstrated end-to-end ownership of engineering solutions, from architecture and development to deployment, integration, and ongoing operations/support.
Excellent communication skills and a collaborative, proactive approach.
You will also be eligible for equity and benefits.

What you'll be doing:
Working with NVIDIA AI Native customers on data center GPU server and networking infrastructure deployments.
Guiding customer discussions on network topologies, compute/storage, and supporting the bring-up of server/network/cluster deployments.
Identifying new project opportunities for NVIDIA products and technology solutions in data center and AI applications.
Conducting regular technical meetings with customers as a trusted advisor, discussing product roadmaps, cluster debugging, and new technology introductions.
Building custom demonstrations and proofs of concept to address critical business needs.
Analyzing and debugging compute/network performance issues.
What we need to see:
BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or related fields, or equivalent experience.
5+ years of experience in Solution Engineering or similar roles.
System-level understanding of server architecture, NICs, Linux, system software, and kernel drivers.
Practical knowledge of networking - switching & routing for Ethernet/InfiniBand, and data center infrastructure (power/cooling).
Familiarity with DevOps/MLOps technologies such as Docker/containers and Kubernetes.
Effective time management and ability to balance multiple tasks.
Excellent communication skills for articulating ideas and code clearly through documents and presentations.
Ways to stand out from the crowd:
External customer-facing skills and experience.
Experience with the bring-up and deployment of large clusters.
Proficiency in systems engineering, coding, and debugging, including C/C++, Linux kernel, and drivers.
Hands-on experience with NVIDIA systems/SDKs (e.g., CUDA), NVIDIA networking technologies (e.g., DPU, RoCE, InfiniBand), and/or ARM CPU solutions.
Familiarity with virtualization technology concepts.
You will also be eligible for equity and benefits.
