What you’ll be doing:
Identify infrastructure and software bottlenecks to improve ML job startup time, data load/write time, resiliency, and failure recovery.
Translate research workflows into automated, scalable, and reproducible systems that accelerate experimentation.
Build CI/CD workflows tailored for ML to support data preparation, model training, validation, deployment, and monitoring.
Develop observability frameworks to monitor performance, utilization, and health of large-scale training clusters.
Collaborate with hardware and platform teams to optimize models for emerging GPU architectures, interconnects, and storage technologies.
Develop guidelines for dataset versioning, experiment tracking, and model governance to ensure reliability and compliance.
Mentor and guide engineering and research partners on MLOps patterns, scaling NVIDIA’s impact from research to production.
Collaborate with NVIDIA Research teams and the DGX Cloud Customer Success team to continuously improve MLOps automation.
What we need to see:
BS in Computer Science, Information Systems, Computer Engineering, or equivalent experience.
8+ years of experience in large-scale software or infrastructure systems, with 5+ years dedicated to ML platforms or MLOps.
Proven track record designing and operating ML infrastructure for production training workloads.
Expert knowledge of distributed training frameworks (PyTorch, TensorFlow, JAX) and orchestration systems (Kubernetes, Slurm, Kubeflow, Airflow, MLflow).
Strong programming experience in Python plus at least one systems language (Go, C++, Rust).
Deep understanding of GPU scheduling, container orchestration, and cloud-native environments.
Experience integrating observability stacks (Prometheus, Grafana, ELK) with ML workloads.
Familiarity with storage and data platforms that support large-scale training (object stores, feature stores, versioned datasets).
Strong communication abilities, collaborating effectively with research teams to transform requirements into scalable engineering solutions.
Ways to stand out from the crowd:
Hands-on experience supporting research teams in scaling models on the latest GPU or accelerator hardware.
Contributions to open-source MLOps or ML infrastructure projects.
Proficiency in optimizing multi-node training jobs across large GPU clusters, and familiarity with large-scale ETL and data pipeline software/infrastructure for both structured and unstructured data.
Knowledge of security, compliance, and governance requirements for ML in regulated environments.
Demonstrated ability to bridge research and production by guiding scientists on best practices while delivering reliable infrastructure.
You will also be eligible for equity.