
Nvidia Engineering Manager, Deep Learning Inference

Locations: US, CA, Santa Clara; US, WA, Remote; US, CA, Remote; US, WA, Seattle; United States, Texas
Time type: Full time
Posted: 9 Days Ago
Job requisition id: 850503382

What you'll be doing:

  • Lead, mentor, and scale a high-performing engineering team focused on deep learning inference and GPU-accelerated software.

  • Drive the strategy, roadmap, and execution of NVIDIA’s inference frameworks engineering, focusing on SGLang.

  • Partner with internal compiler, libraries, and research teams to deliver end-to-end optimized inference pipelines across NVIDIA accelerators.

  • Oversee performance tuning, profiling, and optimization of large-scale models for LLM, multimodal, and generative AI applications.

  • Guide engineers in adopting best practices for CUDA, Triton, CUTLASS, and multi-GPU communications (NIXL, NCCL, NVSHMEM).

  • Represent the team in roadmap and planning discussions, ensuring alignment with NVIDIA’s broader AI and software strategies.

  • Foster a culture of technical excellence, open collaboration, and continuous innovation.

What we need to see:

  • MS, PhD, or equivalent experience in Computer Science, Electrical/Computer Engineering, or a related field.

  • 6+ years of software development experience, including 3+ years in technical leadership or engineering management.

  • Strong background in C/C++ software design and development; proficiency in Python is a plus.

  • Hands-on experience with GPU programming (CUDA, Triton, CUTLASS) and performance optimization.

  • Proven record of deploying or optimizing deep learning models in production environments.

  • Experience leading teams using Agile or collaborative software development practices.

Ways to stand out from the crowd:

  • Significant open-source contributions to deep learning or inference frameworks such as PyTorch, vLLM, SGLang, Triton, or TensorRT-LLM.

  • Deep understanding of multi-GPU communications (NIXL, NCCL, NVSHMEM) and distributed inference architectures.

  • Expertise in performance modeling, profiling, and system-level optimization across CPU and GPU platforms.

  • Proven ability to mentor engineers, guide architectural decisions, and deliver complex projects with measurable impact.

  • Publications, patents, or talks on LLM serving, model optimization, or GPU performance engineering.

You will also be eligible for equity and benefits.