Nvidia Principal AI ML Engineer — Networking
Location: US, CA, Santa Clara (United States, California)
Time type: Full time
Posted on: 06.05.2025 (Posted 2 Days Ago)
Job requisition ID: 502443566

What You’ll Be Doing:

  • Architect and implement infrastructure platforms tailored for AI/ML workloads, with a focus on scaling private cloud environments to support high-throughput training, inference, and agentic workflows and pipelines.

  • Lead initiatives in Generative AI systems design, including Retrieval-Augmented Generation (RAG), LLM fine-tuning, semantic search, and multi-modal data processing.

  • Build and optimize ML systems for document understanding, vector-based retrieval, and knowledge graph integration using advanced NLP and information retrieval techniques.

  • Design and develop scalable services and tools to support GPU-accelerated AI pipelines, leveraging Kubernetes, Python/Go, and observability frameworks.

  • Mentor and collaborate with a multidisciplinary team of network engineers, automation engineers, AI and ML scientists, product managers, and multiple domain experts.

  • Build and drive adoption of emerging AIOps technologies, integrating AI agents, RAG pipelines, and LLMs using MCP (Model Context Protocol) workflows to streamline automation, performance tuning, and large-scale data insights (a minimal retrieval sketch follows this list).
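
A minimal sketch of the vector-based retrieval described in the bullets above, assuming the sentence-transformers and faiss-cpu packages; the embedding model name and the toy network documents are illustrative placeholders, not part of this role's actual stack.

    # Minimal RAG-style retrieval sketch (illustrative only; placeholder model and data).
    import numpy as np
    import faiss
    from sentence_transformers import SentenceTransformer

    docs = [
        "BGP session flaps observed on a leaf switch after a firmware upgrade.",
        "The GPU cluster fabric uses RoCEv2 with PFC and ECN tuning.",
        "The telemetry pipeline streams interface counters into the data lake.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")           # assumed embedding model
    doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit vectors: inner product == cosine

    index = faiss.IndexFlatIP(int(doc_vecs.shape[1]))         # exact inner-product index
    index.add(np.asarray(doc_vecs, dtype="float32"))

    query = "Why is the leaf switch losing BGP adjacency?"
    q_vec = model.encode([query], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(q_vec, dtype="float32"), 2)

    # The retrieved passages would then be placed into an LLM prompt for a grounded answer.
    for score, i in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {docs[i]}")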


What We Need to See:

  • 10+ years of engineering experience with at least 5 years leading initiatives in ML infrastructure, AI systems, or applied NLP/LLM development.

  • 5+ years of experience in Networking and infrastructure.

  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Machine Learning, or a related field (or equivalent experience).

  • Deep expertise with:

    • Generative AI concepts such as embeddings, RAG, semantic search, and transformer-based LLMs

    • MCP (Model Context Protocol) workflows and the agentic ecosystem

    • Vector databases (e.g., FAISS, Pinecone, Weaviate) and data pipelines

    • Programming in Python (preferred) and/or Go, and software engineering best practices

  • Experience deploying and tuning LLMs using techniques like LoRA, QLoRA, and instruction tuning (a brief LoRA setup sketch follows this list).

  • Strong understanding of infrastructure automation pipelines (Terraform, Ansible, Salt), monitoring (Prometheus, Grafana), and DevOps tools.

  • Hands-on experience working with petabyte-scale datasets, schema design, and distributed processing.

  • Strong background working with infrastructure-related data collections and network logs, and the ability to run simulations of network state with AI tools.
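
A rough illustration of the LoRA-style fine-tuning setup mentioned in the requirements, assuming the Hugging Face transformers and peft libraries; the base model name and hyperparameters are placeholder assumptions, not this team's actual configuration.

    # Illustrative LoRA adapter setup (a sketch under the assumptions above).
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # placeholder small base model

    lora_cfg = LoraConfig(
        r=8,                                  # low-rank dimension of the adapter matrices
        lora_alpha=16,                        # scaling applied to the adapter output
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # only the small adapter weights are trainable

The adapted model can then be trained with a standard Trainer loop, and the adapters merged into or served alongside the frozen base weights.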

Ways to Stand Out From the Crowd:

  • Experience building multi-hop RAG systems with self-consistency and chain-of-thought prompting.

  • Prior leadership in designing AI platforms used for large-scale enterprise search, document intelligence, or recommendation systems.

  • Contributions to open-source ML/AI tools or active participation in the AI research community.

  • Familiarity with knowledge graph construction and reasoning systems as well as demonstrated ability to communicate complex ML concepts to executive and cross-functional stakeholders.

  • Strong knowledge of automation pipelines and of infrastructure configuration and observability tools such as BigPanda, Splunk, Storm, and NetBox/Nautobot, as well as various open-source automation tooling.

  • Strong knowledge of network operating systems such as Arista EOS, Cumulus, Cisco NX-OS, SONiC, and SR Linux, as well as excellence in Infrastructure-as-Code or Network-as-Code automation frameworks.

You will also be eligible for equity and benefits.