What you will be doing:
Understand, analyze, profile, and optimize AI training workloads on innovative hardware and software platforms.
Understand the big picture of training performance on GPUs, prioritizing and then solving problems across all state-of-the-art neural networks.
Implement production-quality software in multiple layers of NVIDIA's deep learning platform stack, from drivers to DL frameworks.
Build and support NVIDIA submissions to the MLPerf Training benchmark suite.
Implement key DL training workloads in NVIDIA's proprietary processor and system simulators to enable future architecture studies.
Build tools to automate workload analysis, workload optimization, and other critical workflows.
What we need to see:
PhD in CS, EE, or CSEE and 5+ years of relevant work experience; or MS (or equivalent experience) and 8+ years of relevant work experience.
Strong background in deep learning and neural networks, particularly in training.
Strong background in computer architecture and familiarity with the fundamentals of GPU architecture.
Proven experience analyzing and tuning application performance, as well as processor- and system-level performance modeling.
Programming skills in C++, Python, and CUDA.
GPU computing is the most productive and pervasive platform for deep learning and AI. It begins with the most advanced GPUs and the systems and software we build on top of them. We integrate and optimize every deep learning framework. We work with the major systems companies and every major cloud service provider to make GPUs available in data centers and in the cloud. We craft computers and software to bring AI to edge devices, such as self-driving cars and autonomous robots. AI has the potential to spur a wave of social progress unmatched since the industrial revolution.
You will also be eligible for equity and benefits.