
Nvidia Senior Research Engineer
United States, Texas
Job requisition id: 842940841
Posted: 15.10.2025
Locations: US, CA, Santa Clara; US, CA, Remote
Time type: Full time
What you’ll be doing:

  • Work with applied researchers to design, implement, and test the next generation of RL and post-training algorithms

  • Contribute to and advance open source by developing the NeMo Framework and yet-to-be-announced software

  • Work as part of one team during post-training of Nemotron models

  • Solve large-scale, end-to-end AI training and inference challenges spanning the full model lifecycle, from initial orchestration and data pre-processing through model training and tuning to model deployment.

  • Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack.

  • Tune and optimize performance, and train models with mixed-precision recipes on next-gen NVIDIA GPU architectures.

  • Publish and present your results at academic and industry conferences

What we need to see:

  • BS, MS or PhD in Computer Science, AI, Applied Math, or related fields or equivalent experience

  • 3+ years of proven experience in machine learning, systems, distributed computing, or large-scale model training.

  • Experience with AI frameworks such as PyTorch or JAX

  • Experience with at least one inference and deployment environment, such as vLLM, SGLang, or TRT-LLM

  • Proficient in Python programming, software design, debugging, performance analysis, test design and documentation.

  • Strong understanding of AI/Deep-Learning fundamentals and their practical applications.

Ways to stand out from the crowd:

  • Contributions to open source deep learning libraries

  • Hands-on experience in large-scale AI training, with a deep understanding of core compute system concepts (such as latency/throughput bottlenecks, pipelining, and multiprocessing) and demonstrated excellence in related performance analysis and tuning.

  • Expertise in distributed computing, model parallelism, and mixed precision training

  • Prior experience with Generative AI techniques applied to LLM and Multi-Modal learning (Text, Image, and Video).

  • Knowledge of GPU/CPU architecture and related numerical software.

You will also be eligible for equity and benefits.