

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people.
What you’ll be doing:
Crafting, scaling, and hardening deep learning infrastructure libraries and frameworks for training on multi-thousand GPU clusters.
Improving efficiency throughout the training stack: data loaders, distributed training, scheduling, and performance monitoring (a minimal distributed-training sketch follows this list).
Building robust training pipelines and libraries to handle massive video datasets and enable rapid experimentation.
Collaborating with researchers, model engineers, and internal platform teams to enhance efficiency, minimize stalls, and improve training availability.
Owning core infrastructure components such as orchestration libraries, distributed training frameworks, and fault-resilient training systems.
Partnering with leadership to ensure infrastructure scales with growing GPU capacity and dataset size while maintaining developer efficiency and stability.
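For context on the distributed training work these responsibilities describe, here is a minimal PyTorch DistributedDataParallel sketch as launched with torchrun. The model, dataset, and hyperparameters are placeholders for illustration, not NVIDIA code.

# Minimal PyTorch DDP sketch (illustrative only; model and dataset are placeholders).
# Launch with: torchrun --nproc-per-node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy dataset and model standing in for real video data and networks.
    dataset = TensorDataset(torch.randn(1024, 256), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the dataset across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler, num_workers=4)

    model = torch.nn.Linear(256, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # gradients all-reduced over NCCL
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()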
What we need to see:
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience.
12+ years of professional experience building and scaling high-performance distributed systems, ideally in ML, HPC, or large-scale data infrastructure.
Extensive knowledge of deep learning frameworks (PyTorch preferred), large-scale training (DDP/FSDP, NCCL, tensor/pipeline parallelism), and performance profiling (a short profiling sketch follows this list).
Strong systems background: datacenter networking (RoCE, IB), parallel filesystems (Lustre), storage systems, schedulers (Slurm, Kubernetes, etc.).
Proficiency in Python and C++, with experience writing production-grade libraries, orchestration layers, and automation tools.
Ability to work closely with cross-functional teams (ML researchers, infra engineers, product leads) and translate requirements into robust systems.
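As one concrete example of the performance-profiling skill listed above, here is a minimal torch.profiler sketch for spotting data-loader and kernel stalls; the model and batches are placeholders, and the trace directory name is arbitrary.

# Minimal torch.profiler sketch (illustrative; model and batches are placeholders).
import torch
from torch.profiler import profile, schedule, ProfilerActivity, tensorboard_trace_handler

model = torch.nn.Linear(256, 10)
batches = [torch.randn(32, 256) for _ in range(20)]

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=2, warmup=2, active=5),            # skip startup noise
    on_trace_ready=tensorboard_trace_handler("./tb_traces"),  # traces viewable in TensorBoard
    record_shapes=True,
) as prof:
    for batch in batches:
        model(batch)
        prof.step()  # advance the profiling schedule each iteration

# Summarize the hottest ops from the active profiling window.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))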
Ways to stand out from the crowd:
Demonstrated experience scaling GPU training clusters beyond 1,000 GPUs.
Contributions to open-source ML systems libraries (e.g., PyTorch, NCCL, FSDP, schedulers, storage clients).
Expertise in fault resilience and high availability, including elastic training and large-scale observability (a checkpoint/resume sketch follows this list).
Proven leadership as a hands-on technical lead, mentoring others and setting standards for ML systems engineering.
Familiarity with reinforcement learning (RL) at scale, particularly in the context of simulation-heavy workloads.
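The fault-resilience and elastic-training item above comes down to a checkpoint/resume discipline; below is a minimal sketch of that pattern. It assumes a DDP process group is already initialized (as in the earlier sketch) and that a launcher such as torchrun restarts failed workers; CKPT_PATH and the two helper functions are hypothetical names used for illustration.

# Minimal checkpoint/resume sketch for fault-resilient training (illustrative).
# Assumes dist.init_process_group() has already been called, e.g. via torchrun.
import os
import torch
import torch.distributed as dist

CKPT_PATH = "/tmp/ckpt.pt"  # placeholder; real clusters use shared parallel storage

def save_checkpoint(model, optimizer, step):
    # Only rank 0 writes; under DDP every rank holds identical replicated state.
    if dist.get_rank() == 0:
        torch.save(
            {"model": model.state_dict(), "optim": optimizer.state_dict(), "step": step},
            CKPT_PATH,
        )
    dist.barrier()  # ensure the file is complete before any rank moves on

def load_checkpoint(model, optimizer):
    # On a restart, every rank resumes from the latest checkpoint instead of step 0.
    if not os.path.exists(CKPT_PATH):
        return 0  # fresh run
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optim"])
    return ckpt["step"] + 1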
You will also be eligible for equity and benefits.