What You’ll Be Doing:
Develop Intelligent AI Solutions – Leverage NVIDIA AI technologies and GPUs to build cutting‑edge NLP and Generative AI solutions, such as Retrieval‑Augmented Generation (RAG) pipelines and agentic workflows, that solve real‑world enterprise and supply‑chain problems.
Lead AI Product Development – Guide engineers and researchers in developing large‑language‑model–powered applications, chatbots, and optimization engines that directly improve chip‑design supply‑chain efficiency and resilience.
Design ML & Optimization Architectures – Create and implement machine‑learning and combinatorial‑optimization architectures (e.g., using NVIDIA cuOpt) tailored to semiconductor supply‑chain use cases such as multi‑echelon inventory, yield‑constrained scheduling, and supplier risk mitigation.
Collaborate Across NVIDIA – Partner with supply‑chain operations teams to identify high‑impact opportunities, translate requirements into ML solutions, and drive measurable business outcomes.
What We Need to See:
Master’s or Ph.D. in Computer Science, Operations Research, Industrial Engineering, or a related technical field, or equivalent experience.
12+ years of experience designing, building, and deploying ML models and systems in production.
Demonstrated, hands-on experience applying AI techniques to supply‑chain challenges (e.g., demand forecasting, wafer‑level yield optimization, capacity planning, material logistics, or supplier risk analytics).
Strong knowledge of transformers, attention mechanisms, and modern NLP/GenAI techniques.
Expert‑level Python plus deep‑learning frameworks such as PyTorch or TensorFlow; familiarity with CUDA‑accelerated libraries (cuOpt, TensorRT‑LLM) is a plus.
Proven ability to think independently, drive research and development efforts, and mentor multidisciplinary engineering teams.
Highly motivated, curious, and eager to push the boundaries of what AI can do for complex supply‑chain systems.
Ways to Stand Out from the Crowd:
Agentic AI Expertise – Practical experience with frameworks such as LangChain or LangGraph and a deep understanding of multi‑step reasoning and planning.
LLM Inference Optimization – Expertise in accelerating LLM inference (e.g., KV caching) to achieve sub‑second latency at scale.
End‑to‑End ML Systems Design – A portfolio showing ownership of the full ML lifecycle, from data ingestion to monitoring and continuous improvement.
Research Impact – Publications or patents that advance NLP or supply‑chain AI.
You will also be eligible for equity and benefits.