
Nvidia Manager Deep Learning Algorithms - Training Framework
Location: Santa Clara, CA, United States
Time type: Full time
Posted: 7 Days Ago
Job requisition id: 557792556

In this critical role, you will manage a team that expands the NeMo Framework's capabilities, enabling users to develop, train, and optimize models. Your team will design and implement the latest distributed training algorithms, model parallel paradigms, and model optimizations; define robust APIs; analyze and tune performance; and expand our toolkits and libraries to be more comprehensive and coherent. You will collaborate with internal partners, users, and members of the open source community to analyze, design, and implement highly optimized solutions.

What you’ll be doing:

  • Plan, schedule, mentor, and lead the execution of projects and activities of the team.

  • Collaborate with internal customers to align priorities across business units.

  • Coordinate projects across different geographic locations.

  • Grow and develop a world-class team.

  • Contribute to and advance open source projects.

  • Solve large-scale, end-to-end AI training challenges, spanning the full model lifecycle from initial orchestration, data pre-processing, running of model training and tuning, to model deployment.

  • Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack.

  • Innovate and improve model architectures, distributed training algorithms, and model parallel paradigms.

What we need to see:

  • Excellent understanding of SDLC practices, including architecting, testing, continuous integration, and documentation.

  • MS, PhD, or equivalent experience in Computer Science, AI, Applied Math, or a related field.

  • 8+ years of overall industry experience, including 3+ years of management experience.

  • Proven ability to lead and scale high-performing engineering teams, especially across distributed and functional groups.

  • Experience with AI Frameworks (e.g. PyTorch, JAX), and/or inference and deployment environments (e.g. TRTLLM, vLLM, SGLang).

  • Proficient in Python programming, software design, debugging, performance analysis, test design and documentation.

  • Consistent record of working effectively across multiple engineering initiatives and improving AI libraries with new innovations.

Ways to stand out from the crowd:

  • Hands-on experience in large-scale AI training, with a deep understanding of core compute system concepts (such as latency/throughput bottlenecks, pipelining, and multiprocessing) and demonstrated excellence in related performance analysis and tuning.

  • Expertise in distributed computing, model parallelism, and mixed precision training.

  • Prior experience with Generative AI techniques applied to LLM and Multi-Modal learning (Text, Image, and Video).

  • Knowledge of GPU/CPU architecture and related numerical software.

  • Created or contributed to open source deep learning frameworks.

You will also be eligible for equity and benefits.