What you will be doing:
Optimize deep learning models for low-latency, high-throughput inference.
Convert and deploy models using frameworks such as TensorRT and TensorRT-LLM.
Understand, analyze, profile, and optimize performance of deep learning workloads on state-of-the-art hardware and software platforms.
Collaborate with internal and external researchers to ensure seamless integration of models from training to deployment.
What we want to see:
Master’s or PhD in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
3+ years of professional experience in deep learning or applied machine learning.
Strong foundation in deep learning algorithms, including hands-on experience with LLMs and VLMs.
Deep understanding of transformer architectures, attention mechanisms, and inference bottlenecks.
Proficiency in building and deploying models with PyTorch or TensorFlow in production-grade environments.
Solid programming skills in Python and C++.
Ways to stand out from the crowd:
Proven experience deploying LLMs or VLMs at scale in real-world applications.
Hands-on experience with model optimization and serving frameworks such as TensorRT, TensorRT-LLM, vLLM, and SGLang.
You will also be eligible for equity.