

NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and optimize the GPU-accelerated software that powers today’s most sophisticated AI applications.
Our team is responsible for developing and maintaining high-performance deep learning frameworks, including SGLang and vLLM, which are at the forefront of efficient large-scale model serving and inference. You will play a central role in improving these platforms, facilitating smooth deployment and serving of groundbreaking language models.
You’ll work closely with the deep learning community to implement the latest algorithms for public release in frameworks like SGLang and vLLM, as well as other DL frameworks. Your work will focus on identifying and driving performance improvements for state-of-the-art LLM and Generative AI models across NVIDIA accelerators, from datacenter GPUs to edge SoCs. You'll bring to bear open-source tools and plugins—including CUTLASS, OAI Triton, NCCL, and CUDA kernels—to implement and optimize model serving pipelines.
What you'll be doing:
Performance optimization, analysis, and tuning of DL models in domains such as LLM, multimodal, and generative AI.
Scale performance of DL models across different architectures and types of NVIDIA accelerators.
Contribute features and code to NVIDIA's inference libraries and LLM software solutions, including vLLM, SGLang, and FlashInfer.
Collaborate with cross-functional teams across DL frameworks, NVIDIA libraries, and inference optimization efforts.
What we need to see:
Master's or PhD, or equivalent experience, in a relevant field (Computer Engineering, Computer Science, EECS, AI).
5+ years of relevant software development experience.
Excellent C/C++ programming and software design skills. Agile software development experience is helpful, and Python experience is a plus.
Prior experience with training, deploying or optimizing the inference of DL models in production is a plus.
Prior background in performance modeling, profiling, debugging, and code optimization, or architectural knowledge of CPUs and GPUs, is a plus.
Ways to stand out from the crowd:
Contributions to deep learning software projects such as PyTorch, vLLM, and SGLang that drive advancements in the field.
Experience with multi-GPU communications (NCCL, NVSHMEM).
Experience building and shipping products to enterprise customers.
GPU programming experience (CUDA, OAI Triton, or CUTLASS).
What you will be doing:
Focus on performance at scale, reliability, manageability, and real-time monitoring.
Interact with end users in academia and industry, develop a keen understanding of their goals and needs, and define and deliver high-value solutions that meet those needs.
Identify gaps and propose/develop prototypical solutions.
Demonstrate accelerated computing and AI workflows, deliver training using NVIDIA GPUs and software for AI research, and groom power users to be NVIDIA champions, e.g. as DLI Ambassadors.
Communicate customer requirements to NVIDIA Engineering to foster product improvements.
What we need to see:
3+ years of research experience in Deep Learning with a track record of scientific publications
Experience with Large Language Models (LLM) training and adaptation, including knowledge of floating-point arithmetic at micro-scale
Passion for accelerated computing
A graduate degree from a leading university in a STEM related field
Action oriented with strong analytical skills
Strong organization and time management skills for working in a fast-paced, multi-tasking environment
Self-motivated and able to work independently with minimal day-to-day direction
Significant experience in High-Performance Computing or Deep Learning
Strong collaboration and social skills, ability to communicate effectively with customers, and across organizations (Engineering, Sales, Support)
Experience with DL frameworks, multi-GPU computing, Generative AI
Fluent in English, both spoken and written
Ways to stand out from the crowd:
Experience with data curation pipelines at scale, including data formats, filtering, and cleaning
Experience working with EuroHPC-class supercomputers or tier-1 clouds at scale
Skilled at profiling, analyzing and optimizing code
Understanding of HPC system architecture, including distributed computing, networking, parallel filesystems, cluster operations, workload schedulers, etc.
Experience working with NVIDIA technologies, including NVAIE, NeMo, CUDA, NIM, etc.
