What you'll be doing:
Contribute to the design and development of high-performance deep learning inference software using modern C++
Collaborate with teams across the hardware and software stack to understand and leverage new technologies to improve TensorRT's functionality and performance
Participate in the development of robust, high-quality C++ code in alignment with Modern C++ standards
Support systematic reasoning about test plans from unit to integration level
Assist in documenting the properties of functions, classes, and systems to improve robustness
Contribute to performance optimization and benchmarking efforts
Help develop new features and capabilities for TensorRT to serve specialized customer needs
What we need to see:
Master's or PhD in a relevant field (Computer Engineering, Computer Science, Electrical Engineering, AI), or equivalent experience
Strong foundational C++ skills, including familiarity with C++11, C++14, or newer standards
Familiarity with the C++ Standard Template Library (STL)
Familiarity with modern deep learning models and inference frameworks
Interest in performance optimization and systems programming
Demonstrated ability to take initiative and see projects through to completion
Excellent interpersonal skills and a collaborative, pragmatic approach to solving problems
Ways to stand out from the crowd:
Experience with Python and/or CUDA through coursework, internships, or personal projects
Exposure to systems programming, embedded systems, and/or compiler concepts
Experience in software performance analysis, profiling, or optimization techniques
Knowledge of C++17 or later standards
Understanding of computer architecture, memory management, or parallel computing concepts
You will also be eligible for equity and benefits.