What you'll be doing:
Lead the design and development of a scalable, robust, and reliable platform that delivers AI model inference as a service.
Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads.
Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services.
Define and implement APIs for model deployment, monitoring, and management for a seamless user experience.
Optimize system performance and latency for various model types, from large language models (LLMs) to computer vision models, ensuring high throughput and responsiveness.
Collaborate with engineering teams to integrate deployment, monitoring, and performance telemetry into our CI/CD pipelines.
Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services.
Drive architectural decisions and best practices for long-term platform evolution and scalability.
Contribute to NVIDIA's AI Factory initiative by building a foundational platform that supports model serving needs.
What we need to see:
15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure.
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or another engineering or related field (or equivalent experience).
Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems.
Proven experience with container orchestration technologies like Kubernetes.
A deep understanding of system architecture for high-performance, low-latency API services.
Experience in designing, implementing, and optimizing systems for GPU resource management.
Familiarity with modern observability tools (e.g., Datadog, Prometheus, Grafana, OpenTelemetry).
Demonstrated experience with deployment strategies and CI/CD pipelines.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Ways to stand out from the crowd:
Experience with specialized inference serving frameworks (e.g., NVIDIA Triton Inference Server, vLLM, or TensorRT-LLM).
Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space.
Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression.
Expertise in building platforms that support a wide variety of AI model architectures.
Strong understanding of the full lifecycle of an AI model, from training to deployment and serving.
You will also be eligible for equity and benefits.