

What you will be doing:
Drive strategic CSP partnerships, collaborating with key hyperscale CSPs to align project schedules, priorities, and technical roadmaps for next-generation data center platforms.
Manage complex technical collaborations proactively, identifying and resolving critical issues before they impact customer deployments.
Orchestrate internal stakeholder alignment, ensuring CSP priorities are reflected across NVIDIA's engineering, product, and business organizations.
Create comprehensive customer program visibility through executive dashboards, status reports, and metrics tracking that provide real-time insights into CSP project health, risks, and milestone achievement.
Lead large-scale deployment programs managing multi-rack, hyperscale infrastructure rollouts with complex technical dependencies, timeline coordination, and resource alignment across multiple internal teams.
What we need to see:
Technical Expertise: Solid understanding of system software design, OS fundamentals, Linux kernel development, and hardware/software interfaces. Experience in GPU-based data center server architectures.
Program Management: Proven ability to lead software development for rack-scale systems and data center servers, including complex hardware/software integration projects.
Industry Collaboration: Experience partnering with hyperscalers to drive technical outcomes, manage dependencies, handle escalations, and communicate effectively at the executive level.
Communication & Leadership: Exceptional ability to translate technical concepts for business stakeholders and align diverse teams toward common goals.
BS or MS in Computer Engineering, Computer Science, or related field or equivalent experience.
8+ years of technical program management experience in HPC or data center server software development.
Ways to stand out from the crowd:
Prior hands-on technology development experience in out-of-band manageability and observability solutions, system software/Linux kernel driver development, CUDA programming, or LLM/AI framework development.
You will also be eligible for equity and benefits.

What you’ll be doing:
Working with tech giants to develop and demonstrate solutions based on NVIDIA’s groundbreaking software and hardware technologies.
Partnering with Sales Account Managers and Developer Relations Managers to identify and secure business opportunities for NVIDIA products and solutions.
Serving as the main technical point of contact for customers developing intricate AI infrastructure, while also helping them understand performance aspects of tasks like large-scale LLM training and inference.
Conducting regular technical customer meetings for project/product details, feature discussions, introductions to new technologies, performance advice, and debugging sessions.
Collaborating with customers to build Proof of Concepts (PoCs) for solutions to address critical business needs and support cloud service integration for NVIDIA technology on hyperscalers.
Analyzing and developing solutions for customer performance issues for both AI and systems performance.
What we need to see:
BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or other Engineering fields or equivalent experience.
4+ years of engineering (performance/system/solution) experience.
Hands-on experience building performance benchmarks for data center systems, including large scale AI training and inference.
Understanding of systems architecture including AI accelerators and networking as it relates to the performance of an overall application.
Effective engineering program management with the capability of balancing multiple tasks.
Ability to communicate ideas clearly through documents, presentations, and in external customer-facing environments.
Ways to stand out from the crowd:
Hands-on experience with deep learning frameworks (PyTorch, JAX, etc.), compilers (Triton, XLA, etc.), and NVIDIA libraries (TensorRT-LLM, TensorRT, NeMo, NCCL, RAPIDS, etc.).
Familiarity with deep learning architectures and the latest LLM developments.
Background with NVIDIA hardware and software, performance tuning, and error diagnostics.
Hands-on experience with GPU systems in general including but not limited to performance testing, performance tuning, and benchmarking.
Experience deploying solutions in cloud environments including AWS, GCP, Azure, or OCI as well as knowledge of DevOps/MLOps technologies such as Docker/containers, Kubernetes, data center deployments, etc. Command line proficiency.
You will also be eligible for equity and benefits.

What you will be doing:
Lead the design and development of a cutting-edge end-to-end reference system stack for 5G/6G baseband systems.
Develop and integrate modules such as PDCP, RLC, and MAC for the 5G/6G air interface and L3 control plane.
Work in a lab environment to troubleshoot and integrate complex software modules.
Develop software, implementing new functions in C/C++/Python/CUDA in a multi-core environment.
Support system integration, performance testing, system demonstration and lab trials for end-to-end system.
Engage with customer field trials and technical teams.
Be a technical bridge between our engineering team and partner/customer engineering teams.
Help implement missing features to unblock progress at customers/partners.
What we need to see:
8+ years of LTE/5G network system experience focused on Radio Access Network.
Bachelor's or Master's degree in Electrical Engineering, Computer Science, or a related field (or equivalent experience).
End-to-end RAN integration knowledge.
Experience in building RAN products like baseband system (eNB or gNB).
Experience with cloud RAN, software defined networking, 5G macro and small cell deployments.
Familiarity with NFV, software virtualization, VMs, containers, and VNFs.
Experience in integrating with 5G NG and EPC cores.
Experience in integrating with PHY layer.
Direct experience in development, integration and testing of BBU functions like PHY, MAC, Scheduler, RLC, PDCP and RRC.
Proficiency in developing code using C/C++ on Linux-based platforms.
You will also be eligible for equity and benefits.

What You'll Be Doing:
Working as a key member of our cloud solutions team, you will be the go-to technical expert on NVIDIA's products, helping our clients architect and optimize GPU solutions for AI services.
Collaborating directly with engineering teams to secure design wins, address challenges, usher projects into production, and offer support through the project's lifecycle.
Acting as a trusted advisor to our clients, while developing reference architectures and best practices for running Microsoft AI workloads on NVIDIA infrastructure.
What We Need To See:
4+ years of experience in cloud computing and/or large-scale AI systems.
A BS in EE, CS, Math, or Physics, or equivalent experience.
A proven understanding of cloud computing and large-scale computing systems.
Proficiency in Python, C, or C++ and experience with AI frameworks like PyTorch or TensorFlow.
Passion for machine learning and AI, and the drive to continually learn and apply new technologies.
Excellent interpersonal skills, including the ability to explain complex technical topics to non-experts.
Ways To Stand Out From The Crowd:
Recent projects or contributions (for example, on GitHub) related to large language models and transformer architectures.
Knowledge of Azure cloud and AzureML services.
Experience with CUDA programming and optimization.
Familiarity with NVIDIA networking technologies such as InfiniBand.
Proficiency in Linux, Windows Subsystem for Linux, and Windows.
You will also be eligible for equity and benefits.

What you’ll be doing:
Design, build, and run in-scope cloud infrastructure services to meet our business goals, performing integrations, migrations, bring-ups, updates, and decommissions as necessary.
Participate in the definition of our internal facing service level objectives and error budgets as part of our overall observability strategy.
Eliminate toil or automate it where the ROI of building and maintaining automation is worth it.
Practice sustainable blameless incident prevention and incident response while being a member of an on-call rotation.
Consult with and advise peer teams on systems design best practices.
Participate in a supportive culture of values-driven introspection, communication, and self-organization.
What we need to see:
Proficiency in one or more of the following programming languages: Python or Go.
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics) or equivalent experience.
5+ years of relevant experience in infrastructure and fleet management engineering.
Experience with infrastructure automation and distributed systems design, including developing tools for running large-scale private or public cloud systems that require fully automated management and are under active customer consumption in production.
A track record demonstrating a mix of initiating your own projects, convincing others to collaborate with you, and collaborating well on projects initiated by others.
In-depth knowledge in one or more of the following: Linux, Slurm, Kubernetes, Local and Distributed Storage, and Systems Networking.
Ways to stand out from the crowd:
A systematic problem-solving approach, coupled with clear communication skills and a willingness to take ownership and deliver results, such as experience driving a build/reuse/buy decision.
Experience working with or developing bare metal as a service (BMaaS) systems, for example vending BMaaS, running Slurm on containers, or vending Kubernetes clusters.
Experience working with or developing multi-cloud infrastructure services.
Experience teaching reliability engineering (e.g., SRE) and/or other scale-oriented cloud systems practices to peers and/or other companies (e.g., CRE).
Experience running private or public cloud systems based on one or more of Kubernetes, OpenStack, Docker, or Slurm.
Experience with accelerated compute and communications technologies such as BlueField networking, InfiniBand topologies, NVMesh, and/or the NVIDIA Collective Communications Library (NCCL).
Experience working with a centralized security organization to prioritize and mitigate security risks. Prior experience in an ML/AI-focused role or team is welcome but not required.
You will also be eligible for equity and benefits.

What you'll be doing:
Design, build and optimize agentic AI systems for the CUDA ecosystem.
Co-design agentic system solutions with software, hardware and algorithm teams; influence and adopt new capabilities as they become available.
Develop reproducible, high-fidelity evaluation frameworks covering performance, quality and developer productivity.
Collaborate across the AI stack, from hardware through compilers/toolchains, kernels/libraries, frameworks, distributed training, and inference/serving, and with model/agent teams.
What we need to see:
Bachelor’s degree in Computer Science, Electrical Engineering, or related field (or equivalent experience); MS or PhD preferred.
3+ years of industry or academic experience in AI systems development; exposure to building foundation models, agents, or orchestration frameworks; hands-on experience with deep learning frameworks and modern inference stacks.
Strong C/C++ and Python programming skills; solid software engineering fundamentals.
Experience with GPU programming and performance optimization (CUDA or equivalent).
Ways To Stand Out From The Crowd:
Strong experience in building/evaluating deep learning models, coding agents and developer tooling.
Demonstrated ability to optimize and deploy high-performance models, including on resource-constrained platforms.
Demonstrated ability in GPU performance optimizations, evidenced by benchmark wins or published results.
Publications or open-source leadership in deep learning, multi-agent systems, reinforcement learning, or AI systems; contributions to widely used repos or standards.
You will also be eligible for equity and benefits.

What you’ll be doing:
100% kernel coding role.
Own end-to-end design and development, challenging existing paradigms and exploring innovative approaches for RDMA and high-speed TCP-based networks.
Collaborate closely with cross-functional teams to define and implement robust networking algorithms, data management strategies, and distributed systems principles.
Contribute to architecture, integration, and alignment with both on-prem and cloud-native platforms.
Optimize system performance and reliability through in-depth analysis and low-level tuning.
Stay up to date with the latest industry trends and contribute to open-source projects.
What we need to see:
B.S. or M.S. degree in Computer Science or Electrical Engineering (or equivalent experience).
12+ years of software development experience.
Proven professional experience in designing and developing distributed systems; experience with block storage and networking systems is an advantage, as is experience with cloud environments.
Strong proficiency in C/C++ programming. Experienced with Linux Kernel internals including block subsystem, IO stack, memory management, and scheduling.
Familiarity with storage protocols and standards, especially NVMe.
Knowledge of networking fundamentals and experience in Linux-based networking environments.
Familiarity with RDMA technologies, including InfiniBand, RoCE, or iWARP, and experience with RDMA programming models and control and data paths.
Knowledge of cloud computing concepts, including virtualization, scalability, and data management.
Ways To Stand Out From The Crowd:
Excellent communication skills and a collaborative mindset.
Perseverance and determination in debugging complex problems.
You will also be eligible for equity and benefits.
