Research, invent, and implement groundbreaking algorithms for LLM inference to advance the state of the art in both low-latency and high-throughput scenarios.
Translate research into practical software solutions that directly impact NVIDIA's products and customers.
Collaborate with internal research, engineering, and product teams across the globe to drive the development of sophisticated inference technologies.
Analyze the performance of new algorithms on NVIDIA’s latest hardware, identifying bottlenecks and opportunities for algorithmic optimization.
Partner with leading scientific organizations and industry pioneers to remain at the forefront of technological advancements and integrate the latest innovations into practical applications.
What we need to see:
MSc/PhD in Computer Science, Electrical Engineering, or a closely related field.
At least 3 years of proven experience in deep learning research or applied research.
At least one publication in a top-tier AI/ML conference (e.g., NeurIPS, ICLR, ICML).
Deep understanding of LLM architectures coupled with hands-on experience in training large-scale models.
Excellent programming skills, particularly in Python and deep learning frameworks like PyTorch, and experience with software engineering best practices.
A strong problem-solving mentality and a proactive attitude, driven by the ambition to deliver solutions with real-world impact.
Ways to stand out from the crowd:
Hands-on research experience with LLM inference optimization algorithms such as speculative decoding or parallelization strategies (a minimal sketch of speculative decoding follows this list).
Proven experience with High-Performance Computing (HPC) environments, including training or running inference on large-scale GPU clusters (tens to hundreds of GPUs).
Deep familiarity with popular LLM inference systems (e.g., vLLM, TensorRT-LLM).
Experience at a world-class industrial research group or a top-tier academic institution.
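For context on the speculative decoding bullet above, here is a minimal, illustrative sketch of the technique in PyTorch: a small draft model proposes k tokens, and the larger target model verifies all of them in a single forward pass, keeping the longest agreeing prefix plus one corrective token. The checkpoints, the draft length k, and the speculative_decode helper are assumptions chosen for illustration, not part of this role or NVIDIA's stack.

```python
# Minimal sketch of greedy speculative decoding.
# Model choices and draft length k are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def speculative_decode(draft, target, input_ids, max_new_tokens=64, k=4):
    """Draft model proposes k tokens; target model verifies them in one pass."""
    start_len = input_ids.shape[1]
    while input_ids.shape[1] < start_len + max_new_tokens:
        prompt_len = input_ids.shape[1]

        # 1. Draft model proposes k tokens autoregressively (cheap).
        draft_ids = input_ids
        for _ in range(k):
            logits = draft(draft_ids).logits[:, -1, :]
            draft_ids = torch.cat(
                [draft_ids, logits.argmax(dim=-1, keepdim=True)], dim=-1)
        proposed = draft_ids[:, prompt_len:]

        # 2. Target model scores all k proposals in a single forward pass.
        tgt_logits = target(draft_ids).logits
        tgt_pred = tgt_logits[:, prompt_len - 1:-1, :].argmax(dim=-1)

        # 3. Accept the longest prefix on which draft and target agree.
        n_accepted = int((tgt_pred == proposed).long().cumprod(dim=-1).sum())

        # 4. Append accepted tokens plus one corrective target token, so
        #    every iteration makes progress even with zero accepts.
        correction = tgt_logits[:, prompt_len - 1 + n_accepted, :].argmax(
            dim=-1, keepdim=True)
        input_ids = torch.cat(
            [input_ids, proposed[:, :n_accepted], correction], dim=-1)
    return input_ids

# Usage with two publicly available checkpoints (placeholders):
tok = AutoTokenizer.from_pretrained("gpt2")
draft = AutoModelForCausalLM.from_pretrained("gpt2").eval()
target = AutoModelForCausalLM.from_pretrained("gpt2-medium").eval()
ids = tok("Speculative decoding works by", return_tensors="pt").input_ids
print(tok.decode(speculative_decode(draft, target, ids)[0]))
```

The key property this sketch shows: the expensive target model's compute is amortized over up to k tokens per verification pass, trading cheap draft-model passes for fewer target-model passes while producing exactly the target model's greedy output.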