What you’ll be doing:
Working with tech giants to develop and demonstrate solutions based on NVIDIA’s groundbreaking software and hardware technologies.
Partnering with Sales Account Managers and Developer Relations Managers to identify and secure business opportunities for NVIDIA products and solutions.
Serving as the main technical point of contact for customers developing complex AI infrastructure, and supporting them in understanding performance aspects of workloads such as large-scale LLM training and inference.
Conducting regular technical customer meetings for project/product details, feature discussions, introductions to new technologies, performance advice, and debugging sessions.
Collaborating with customers to build Proof of Concepts (PoCs) for solutions to address critical business needs and support cloud service integration for NVIDIA technology on hyperscalers.
Analyzing customer performance issues and developing solutions across both AI and systems performance.
What we need to see:
BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or other Engineering fields or equivalent experience.
4+ years of engineering (performance/system/solution) experience.
Hands-on experience building performance benchmarks for data center systems, including large-scale AI training and inference.
Understanding of systems architecture including AI accelerators and networking as it relates to the performance of an overall application.
Effective engineering program management with the capability of balancing multiple tasks.
Ability to communicate ideas clearly through documents, presentations, and in external customer-facing environments.
Ways to stand out from the crowd:
Hands-on experience with deep learning frameworks (PyTorch, JAX, etc.), compilers (Triton, XLA, etc.), and NVIDIA libraries (TensorRT-LLM, TensorRT, NeMo, NCCL, RAPIDS, etc.).
Familiarity with deep learning architectures and the latest LLM developments.
Background with NVIDIA hardware and software, performance tuning, and error diagnostics.
Hands-on experience with GPU systems, including performance testing, performance tuning, and benchmarking.
Experience deploying solutions in cloud environments (AWS, GCP, Azure, or OCI), along with knowledge of DevOps/MLOps technologies such as Docker/containers, Kubernetes, and data center deployments. Command-line proficiency.
You will also be eligible for equity and benefits.
What You'll Be Doing:
Working as a key member of our cloud solutions team, you will be the go-to technical expert on NVIDIA's products, helping our clients architect and optimize GPU solutions for AI services.
Collaborating directly with engineering teams to secure design wins, address challenges, usher projects into production, and offer support through the project's lifecycle.
Acting as a trusted advisor to our clients, while developing reference architectures and best practices for running Microsoft AI workloads on NVIDIA infrastructure.
What We Need To See:
4+ years of experience in cloud computing and/or large-scale AI systems.
A BS in EE, CS, Math, or Physics, or equivalent experience.
A proven understanding of cloud computing and large-scale computing systems.
Proficiency in Python, C, or C++, and experience with AI frameworks like PyTorch or TensorFlow.
Passion for machine learning and AI, and the drive to continually learn and apply new technologies.
Excellent interpersonal skills, including the ability to explain complex technical topics to non-experts.
Ways To Stand Out From The Crowd:
Recent projects or contributions (for example, on GitHub) related to large language models and transformer architectures.
Knowledge of Azure cloud and AzureML services.
Experience with CUDA programming and optimization.
Familiarity with NVIDIA networking technologies such as InfiniBand.
Proficiency in Linux, Windows Subsystem for Linux, and Windows.
You will also be eligible for equity and benefits.
What you will be doing:
You will be responsible for the design, development, and delivery of core components of our next-generation VLSI productivity platforms.
Design, build, deploy, and improve highly scalable systems
Translate high-level requirements into actionable plans/deliverables
Leverage LLMs to accelerate (not replace) your contribution while taking ownership of your output
Convert legacy codebases into modern systems built on industry best practices
Collaborate with engineering teams to identify and alleviate bottlenecks in their daily tasks
What we need to see:
B.S. CS/EE (or equivalent experience)
5+ years developing large-scale software applications in Go and Python
Solid computer science fundamentals in algorithms, data structures, and complexity analysis
Understand processes, synchronization, locks, concurrency, and load-balancing
Excellent grasp of distributed systems and compute abstractions
Experience building custom solutions around open-source products and libraries to close feature gaps quickly
Ways to stand out from the crowd:
5+ years in an enterprise engineering environment, shipping at scale
Experience in partitioning and optimizing complex interconnected systems
Understand filesystems, job-scheduling, and inter-process signaling
Highly self-sufficient in the face of ambiguity, with strong reasoning and problem-solving skills
Rapid comprehension of existing codebases (in any language) to implement high-leverage changes effectively
You will also be eligible for equity and benefits.
What you'll be doing:
Building and maintaining, from first principles, the infrastructure needed to deliver TensorRT-LLM
Maintain CI/CD pipelines to automate the build, test, and deployment process, and improve their bottlenecks. Manage tools and automate redundant manual workflows via GitHub Actions, GitLab, Terraform, etc.
Run security scans and handle security CVEs for infrastructure components
Improve the modularity of our build systems using CMake
Use AI to help build automated triaging workflows
Collaborate extensively with cross-functional teams to integrate pipelines from deep learning frameworks and components, ensuring seamless deployment and inference of deep learning models on our platform.
What we need to see:
Master's degree or equivalent experience
3+ years of experience in computer science, computer architecture, or a related field
Ability to work in a fast-paced, agile team environment
Excellent Bash and Python programming, CI/CD, and software design skills, including debugging, performance analysis, and test design.
Experience with CMake.
Background in security best practices for releasing libraries.
Experience administering, monitoring, and deploying systems and services on GitHub and cloud platforms, and supporting other technical teams by monitoring the platform's operating efficiency and responding as needs arise.
Highly skilled in Kubernetes and Docker/containerd. Automation expert with hands-on skills in frameworks like Ansible and Terraform. Experience with AWS, Azure, or GCP.
Ways to stand out from the crowd:
Experience contributing to a large open-source deep learning community: use of GitHub, bug tracking, branching and merging code, OSS licensing issues, handling patches, etc.
Experience in defining and leading the DevOps strategy (design patterns, reliability and scaling) for a team or organization.
Experience driving efficiencies in software architecture, creating metrics, implementing infrastructure as code and other automation improvements.
Deep understanding of test automation infrastructure, framework and test analysis.
Excellent problem-solving abilities spanning multiple software layers (storage systems, kernels, and containers). Experience collaborating within an agile team environment to prioritize deep learning-specific features and capabilities within Triton Inference Server, employing advanced troubleshooting and debugging techniques to resolve complex technical issues.
You will also be eligible for equity and benefits.
This position requires sufficient knowledge of English for professional verbal and written exchanges, since the duties involve frequent and regular communication with colleagues and partners located worldwide whose common language is English.
Gross pay salary: $156,000–$234,000 USD
What you’ll be doing:
Collaborating with business development to guide customers through the solution adoption process for our Metropolis, Isaac, and IGX AI software platforms, GPU computing, and IGX/Jetson; owning the technical relationship and assisting customers in building creative solutions based on NVIDIA technology
Being an industry leader with a vision for integrating NVIDIA technology into intelligent machines' architectures
Engaging with customers to develop a keen understanding of their goals, vision, and plans, as well as their technical needs, and helping to define and deliver high-value solutions that meet those needs
Training customers on the adoption of our AI platforms, and developing and optimizing proofs of concept using the NVIDIA robotics and Metropolis platforms as well as the Jetson/IGX SDKs
Establishing positive relationships and communication channels with internal teams
What we need to see:
BS or MS in Electrical Engineering or Computer Science or equivalent experience
8+ years of work-related experience in a high-tech electronics industry in a similar role as a systems or solution architect
AI practitioner experience
C, C++, and Python coding
Strong time-management and organization skills for coordinating multiple initiatives, priorities, and implementations of new technology and products into very complex projects
Ways to stand out from the crowd:
NVIDIA GPU development experience
Experience with Omniverse, Isaac, and Metropolis
Experience with generative AI on Jetson or IGX, Riva, and VSS
You will also be eligible for equity and benefits.