What you’ll be doing:
Design and build modular, scalable model optimization software platforms that deliver an exceptional user experience, support diverse AI models and optimization techniques, and drive widespread adoption.
Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack, such as TensorRT Model Optimizer, NeMo/Megatron, and TensorRT-LLM (see the illustrative quantization sketch after this list).
Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end-to-end workflows and balanced accuracy-performance trade-offs.
Conduct deep GPU kernel-level profiling to identify and capitalize on hardware and software optimization opportunities (e.g., efficient attention kernels, KV cache optimization, parallelism strategies).
Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.
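For context on the kind of quantization work referenced above, here is a minimal, illustrative sketch of per-channel symmetric INT8 weight quantization written in plain PyTorch. It is not NVIDIA's implementation and does not use the TensorRT Model Optimizer API; the helper names (quantize_weight_int8, dequantize) are hypothetical and chosen only for this example.

```python
# Minimal, illustrative sketch: per-channel symmetric INT8 weight quantization
# for a single Linear layer in plain PyTorch. This is NOT NVIDIA's implementation
# and does not use the TensorRT Model Optimizer API; the helper names below are
# hypothetical.
import torch
import torch.nn as nn


def quantize_weight_int8(weight: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Quantize a [out_features, in_features] weight tensor per output channel."""
    max_abs = weight.abs().amax(dim=1, keepdim=True)      # per-channel dynamic range
    scale = max_abs.clamp(min=1e-8) / 127.0               # symmetric scale factor
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale


def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Map INT8 values back to float to inspect quantization error."""
    return q.to(torch.float32) * scale


layer = nn.Linear(in_features=16, out_features=4, bias=False)
q, scale = quantize_weight_int8(layer.weight.detach())
w_hat = dequantize(q, scale)
print("max abs quantization error:", (layer.weight - w_hat).abs().max().item())
```

In a production workflow, this step would be driven by calibration data, applied across a full model, and paired with deployment into an inference engine such as TensorRT-LLM, with the accuracy-performance trade-off evaluated end to end.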
What we need to see:
Master’s, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
5+ years of relevant work or research experience in deep learning.
Strong software design skills, including debugging, performance analysis, and test development.
Proficiency in Python, PyTorch, and modern ML frameworks/tools.
Proven foundation in algorithms and programming fundamentals.
Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.
Ways to stand out from the crowd:
Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks.
Hands-on experience training or fine-tuning generative AI models on large-scale GPU clusters.
Deep knowledge of GPU architectures and compilation stacks, with the ability to analyze and debug end-to-end performance.
Familiarity with NVIDIA’s deep learning SDKs (e.g., TensorRT).
Experience developing high-performance GPU kernels for machine learning workloads using CUDA, CUTLASS, or Triton.
You will also be eligible for equity and benefits.