
NVIDIA Solutions Architect, AI/ML
United States, Texas 
378124734

20.10.2025
Locations: US, WA, Redmond; US, CA, Santa Clara; US, WA, Seattle
Time type: Full time
Posted: 2 days ago

What you will be doing:

  • Help cloud customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on cloud ML services and Kubernetes for large language models (LLMs) and generative AI workloads.

  • Lead performance tuning using TensorRT/TensorRT-LLM, vLLM, Dynamo, and Triton Inference Server to improve GPU utilization and model efficiency (a minimal vLLM sketch follows this list).

  • Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to cloud customers implementing AI inference at scale.

  • Build custom PoCs for solutions that address customers’ critical business needs, applying NVIDIA hardware and software technology.

  • Partner with Sales Account Managers or Developer Relations Managers to identify and secure new business opportunities for NVIDIA ML/DL products and other software solutions.

  • Prepare and deliver technical content to customers, including presentations on purpose-built solutions and workshops on NVIDIA products and solutions.

  • Conduct regular technical customer meetings covering project/product roadmaps, feature discussions, and introductions to new technologies. Establish close technical ties with the customer to facilitate rapid resolution of customer issues.

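For illustration, here is a minimal Python sketch of the kind of GPU-accelerated LLM serving described in the responsibilities above, using vLLM's offline API; the model name, prompt, and sampling settings are placeholders, not details from this posting.

    # Minimal vLLM sketch (assumes vLLM is installed and a GPU is available;
    # the model is a small placeholder from the vLLM docs, not a recommendation).
    from vllm import LLM, SamplingParams

    # tensor_parallel_size shards the model across GPUs (1 = single GPU).
    llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # vLLM batches prompts internally (continuous batching) to keep the GPU busy.
    outputs = llm.generate(["What is GPU-accelerated inference?"], params)
    print(outputs[0].outputs[0].text)
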
What we need to see:

  • BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.

  • 3+ years in solutions architecture with a proven track record of moving AI inference from PoC to production in cloud computing environments, including AWS, GCP, or Azure.

  • 3+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow.

  • Excellent knowledge of the theory and practice of LLM and DL inference.

  • Strong fundamentals in programming, optimization, and software design, especially in Python.

  • Experience with containerization and orchestration technologies like Docker and Kubernetes, and with monitoring and observability solutions for AI deployments (see the Triton health/metrics sketch after this list).

  • Knowledge of inference technologies such as NVIDIA NIM, TensorRT-LLM, Dynamo, Triton Inference Server, and vLLM.

  • Strong problem-solving and debugging skills in GPU environments.

  • Excellent presentation, communication, and collaboration skills.

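As a concrete example of the monitoring and observability experience listed above, here is a short sketch that checks a Triton Inference Server's readiness and reads its Prometheus metrics; it assumes a server is already running locally on Triton's default ports (8000 for HTTP, 8002 for metrics).

    # Sketch only: assumes a local Triton Inference Server with default ports.
    # The health endpoint follows the KServe v2 protocol that Triton implements.
    import requests

    ready = requests.get("http://localhost:8000/v2/health/ready", timeout=5)
    print("server ready:", ready.status_code == 200)

    # Triton exposes Prometheus-format metrics (request counts, queue time, etc.).
    metrics = requests.get("http://localhost:8002/metrics", timeout=5).text
    for line in metrics.splitlines():
        if line.startswith("nv_inference_request_success"):
            print(line)
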
Ways to stand out from the crowd:

  • AWS, GCP, or Azure Professional Solution Architect certification.

  • Experience optimizing and deploying large mixture-of-experts (MoE) LLMs at scale.

  • Active contributions to open-source AI inference projects (e.g., vLLM, TensorRT-LLM, Dynamo, SGLang, Triton, or similar).

  • Experience with multi-GPU, multi-node inference technologies such as tensor parallelism/expert parallelism, disaggregated serving, LWS, MPI, EFA/InfiniBand, NVLink/PCIe, etc.

  • Experience developing and integrating monitoring and alerting solutions using Prometheus, Grafana, and NVIDIA DCGM, and with GPU performance analysis tools such as NVIDIA Nsight Systems (see the NVML sketch after this list).

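As one illustration of GPU performance analysis, here is a minimal sketch that samples per-GPU utilization and memory through NVML, the library underneath nvidia-smi and DCGM; it assumes the nvidia-ml-py package and NVIDIA drivers are installed.

    # Sketch only: polls each GPU once via NVML (pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # % busy since last sample
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {util.gpu}% util, {mem.used / 2**30:.1f} GiB used")
    pynvml.nvmlShutdown()
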
You will also be eligible for equity and benefits.