What you'll be doing:
Engage with customers to scope and develop solutions for building AV perception and planning models and pipelines, simulation, synthetic data generation, software-in-the-loop testing, and AI-enhanced manipulation and navigation workflows using NVIDIA's Physical AI platforms and CUDA-X libraries.
Provide hands-on technical mentorship to partners and customers on the NVIDIA GenAI stack. Guide customers in developing and deploying Agentic AI workflows on our platforms, quantifying the benefits of our accelerated computing software and hardware.
Partner with Sales, Engineering, Product, and other Solution Architect teams to drive NVIDIA full-stack adoption. Develop a deep understanding of customer workflows and requirements, lead proof-of-concept evaluations, and provide internal feedback to drive continuous product improvements.
Build collateral (notebooks, GitHub repos, demos, etc.) applied to workflows such as AV and GenAI data curation, model training and validation, LLMs, VFMs, video encoding/decoding, etc.
What we need to see:
Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.
8+ years of hands-on experience in a technical AI role, with a strong emphasis on end-to-end AV models and GenAI model development.
Experience writing production code in Python or C++, and proficiency with Linux.
Hands-on experience with DevOps tools such as GitLab, Docker, and Kubernetes.
Strong understanding of AV systems (sensors, dynamics, perception, prediction, planning, control).
Experience with DL and RL algorithms and frameworks such as PyTorch.
Enjoy working with multiple levels and teams across organizations (engineering/research, product, sales, and marketing teams).
Effective verbal and written communication, and technical presentation skills.
Self-starter with a vision for growth, a real passion for continuous learning, and a drive to share findings across the team.
Ways to stand out from the crowd:
Experience with AV sensors, data curation pipelines, world models, and simulation workflows and tools (e.g., CARLA).
Experience with Agentic AI frameworks, tools, and protocols such as LangChain, LangGraph, and MCP, or equivalent experience.
Understanding of the computational characteristics of multimodal LLMs, VLMs, DiTs, etc.
Experience deploying LLMs at scale on mainstream cloud providers (e.g., AWS, Azure, GCP).
Proven track record of profiling and optimizing inference latency and throughput, memory, and I/O utilization.
You will also be eligible for equity and benefits.