What You'll Be Doing:
The software architecture group at NVIDIA has openings for a Deep Learning Communication Architect. We scale DNN models and training/inference frameworks to systems with hundreds of thousands of nodes.
Optimizing communication performance: Identify and eliminate bottlenecks in data transfer and synchronization during distributed deep learning training and inference.
Designing efficient communication protocols: Develop and implement communication algorithms and protocols tailored for deep learning workloads, minimizing communication overhead and latency.
Hardware and software co-design: Collaborate with hardware and software teams to design systems that effectively leverage high-speed interconnects (e.g., NVLink, InfiniBand, SPC-X) and communication libraries (e.g., MPI, NCCL, UCX, UCC, NVSHMEM).
Exploring innovative communication technologies: Research and evaluate new communication technologies and techniques to enhance the performance and scalability of deep learning systems.
Developing and implementing solutions: Build proofs-of-concept, conduct experiments, and perform quantitative modeling to validate and deploy new communication strategies.
What We Need to See:
A Ph.D., Master's, or B.S. in Computer Science (CS), Electrical Engineering (EE), Computer Science and Electrical Engineering (CSEE), or a closely related field, or equivalent experience.
6+ years of experience building DNNs, scaling DNNs, parallelizing DNN frameworks, or optimizing deep learning training and inference workloads.
Experience in evaluating, analyzing, and optimizing LLM training and inference performance of state-of-the-art models on cutting-edge hardware.
Deep understanding of parallelism techniques, including Data Parallelism, Pipeline Parallelism, Tensor Parallelism, Expert Parallelism, and FSDP.
Understanding of emerging serving architectures such as Disaggregated Serving and inference servers such as Dynamo and Triton.
Proficiency in developing code for one or more deep neural network (DNN) training and inference frameworks, such as PyTorch, TensorRT-LLM, vLLM, or SGLang.
Strong programming skills in C++ and Python.
Familiarity with GPU computing, including CUDA and OpenCL, and familiarity with InfiniBand and RoCE networks.
Ways to Stand Out from the Crowd:
Prior contributions to one or more DNN training and inference frameworks in your previous work experience.
Deep understanding and contributions to the scaling of LLMs on large-scale systems.
You will also be eligible for equity and benefits.