What you will be doing:
Optimize deep learning models for low-latency, high-throughput inference, with a focus on Diffusion models for Visual Generative AI applications.
Convert, deploy, and optimize models for efficient inference using frameworks such as TensorRT, TensorRT-LLM, and vLLM.
Understand, analyze, profile, and optimize performance of deep learning workloads on state-of-the-art NVIDIA GPU hardware and software platforms.
Collaborate with internal and partner research scientists and software engineers to ensure seamless integration of cutting-edge AI models from training to deployment.
Contribute to the development of automation and tooling for NVIDIA Inference Microservices (NIMs) and inference optimization, including creating automated benchmarks to track performance regressions.
What we need to see:
3+ years of experience in DL model implementation and software development.
BSc, MSc, or PhD degree in Computer Science, Computer Architecture, or a related technical field.
Extensive knowledge of at least one DL framework (PyTorch, JAX, TensorFlow), with practical experience in PyTorch required.
Deep understanding of transformer architectures, attention mechanisms, Visual Generative AI foundation model architectures (e.g., U-Net, DiT), and inference bottlenecks.
Excellent Python programming skills.
Strong problem-solving and analytical skills.
Solid fundamentals in algorithms and deep learning.
Docker containerization fundamentals.
Ways to stand out from the crowd:
Experience with performance measurement and profiling.
Hands-on experience with model optimization and serving frameworks such as TensorRT, TensorRT-LLM, vLLM, SGLang, and ONNX.
Deep understanding of distributed systems for large-scale model inference and serving.
Experience extending and leveraging open-source tools to build Visual Generative AI workflows.
Familiarity with the latest trends in Visual Generative AI for content creation.