
Nvidia Senior Software Engineer, AI Inference Systems
United States, California (US, CA, Santa Clara)
Time type: Full time
Posted on: Posted 6 Days Ago
Job requisition id: 973356704
What you’ll be doing:

  • Contribute features to vLLM that empower the newest models with the latest NVIDIA GPU hardware features; profile and optimize the inference framework (vLLM) with methods like speculative decoding, data/tensor/expert/pipeline parallelism, and prefill-decode disaggregation.

  • Develop, optimize, and benchmark GPU kernels (hand-tuned and compiler-generated) using techniques such as fusion, autotuning, and memory/layout optimization; build and extend high-level DSLs and compiler infrastructure to boost kernel developer productivity while approaching peak hardware utilization.

  • Define and build inference benchmarking methodologies and tools; contribute both new benchmarks and NVIDIA’s submissions to the industry-leading MLPerf Inference benchmarking suite.

  • Architect the scheduling and orchestration of containerized large-scale inference deployments on GPU clusters across clouds.

  • Conduct and publish original research that pushes the Pareto frontier of the ML Systems field; survey recent publications and integrate research ideas and prototypes into NVIDIA’s software products.

What we need to see:

  • Bachelor’s degree (or equivalent experience) in Computer Science (CS), Computer Engineering (CE), or Software Engineering (SE) with 7+ years of experience; alternatively, a Master’s degree in CS/CE/SE with 5+ years of experience; or a PhD with a thesis and top-tier publications in ML Systems, GPU architecture, or high-performance computing.

  • Strong programming skills in Python and C/C++; experience with Go or Rust is a plus; solid CS fundamentals: algorithms & data structures, operating systems, computer architecture, parallel programming, distributed systems, and deep learning theory.

  • Knowledgeable and passionate about performance engineering in ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).

  • Familiarity with GPU programming and performance: CUDA, memory hierarchy, streams, NCCL; proficiency with profiling/debug tools (e.g., Nsight Systems/Compute).

  • Experience with containers and orchestration (Docker, Kubernetes, Slurm); familiarity with Linux namespaces and cgroups.

  • Excellent debugging, problem-solving, and communication skills; ability to excel in a fast-paced, multi-functional setting.

Ways to stand out from the crowd:

  • Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).

  • Hands-on work with ML compilers and DSLs (e.g., Triton, TorchDynamo/Inductor, MLIR/LLVM, XLA), GPU libraries (e.g., CUTLASS), and features (e.g., CUDA Graphs, Tensor Cores).

  • Experience contributing to containerization/virtualization technologies such as containerd/CRI-O/CRIU.

  • Experience with cloud platforms (AWS/GCP/Azure), infrastructure as code, CI/CD, and production observability.

  • Contributions to open-source projects and/or publications; please include links to GitHub pull requests, published papers and artifacts.

You will also be eligible for equity and benefits.