What you’ll be doing:
Design and implement highly efficient inference systems for large-scale deployments of generative AI models.
Define inference benchmarking methodologies and build tools that will be adopted across the industry.
Develop, profile, debug, and optimize low-level system components and algorithms to improve throughput and minimize latency for the MLPerf Inference benchmarks on bleeding-edge NVIDIA GPUs.
Productionize inference systems with uncompromised software quality.
Collaborate with researchers and engineers to productionize innovative model architectures, inference techniques, and quantization methods.
Contribute to the design of APIs, abstractions, and UX that make it easier to scale model deployment while maintaining usability and flexibility.
Participate in design discussions, code reviews, and technical planning to ensure the product aligns with business goals.
Stay current with the latest advances in inference system-level optimization, develop novel research ideas, and translate them into practical, robust systems. Exploration and academic publication are encouraged.
What we need to see:
Bachelor’s, Master’s, or PhD degree in Computer Science/Engineering, Software Engineering, a related field, or equivalent experience.
5+ years of experience in software development, preferably with Python and C++.
Deep understanding of deep learning algorithms, distributed systems, parallel computing, and high-performance computing principles.
Hands-on experience with ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).
Experience optimizing compute, memory, and communication performance for large-model deployments.
Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools.
Ability to work closely with both research and engineering teams, translating state-of-the-art research ideas into concrete designs and robust code, and contributing novel research ideas of your own.
Excellent problem-solving skills, with the ability to debug complex systems.
A passion for building high-impact software that pushes the boundaries of what’s possible with large-scale AI.
Ways to stand out from the crowd:
Background in building and optimizing LLM inference engines such as vLLM and SGLang.
Experience building ML compilers such as Triton or Torch Dynamo/Inductor.
Experience working with cloud platforms (e.g., AWS, GCP, or Azure), containerization tools (e.g., Docker), and orchestration infrastructures (e.g., Kubernetes, Slurm).
Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
Contributions to open-source projects (please provide a list of the GitHub PRs you have submitted).
You will also be eligible for equity and benefits.

What you’ll be doing:
You will be part of the Canadian Solutions Architect team, engaging with AI developers and engineers to develop a keen understanding of their goals, strategies, and technical needs, and driving NVIDIA technology adoption in data center, edge, and cloud deployments.
Facilitate AI use cases and proof-of-concepts on the NVIDIA platform.
Collaborate with other solution architects and with engineering and product teams to understand technical needs and help define high-value solutions.
Strategically support and partner with Canadian customers and industry-specific solution partners to help them adopt and build solutions using NVIDIA technology.
What we need to see:
MS or PhD in Computer Science, Engineering, or related field from an accredited university.
5+ years of experience.
Experience with modern AI software tools including PyTorch, JAX, TRT-LLM, vLLM, SGLang, or other frameworks.
Programming experience with data science languages like Python and/or HPC languages such as C/C++/Fortran.
Experience with GPUs and accelerated computing.
Ways to stand out from the crowd:
Excellent knowledge of theory and practice of deep learning, reinforcement learning, and/or large language models.
CUDA/GPU optimization or CUDA-X library experience.
Knowledge of MLOps technologies such as Docker/containers and Kubernetes, as well as cloud and data center deployments.
Experience deploying large-scale GPU clusters.
Experience deploying AI inference at scale on-premise or in the cloud.
You will also be eligible for equity and benefits.