Design, build, and maintain observability systems for NVIDIA DGX clusters, ensuring reliable monitoring of AI workloads, GPU and hardware utilization, and system health.
Develop monitoring tools and dashboards that track key metrics such as GPU utilization, memory, temperature, latency, network bandwidth, model performance, and system availability.
Build custom alerting systems for AI/ML workflows, enabling proactive issue detection (e.g., GPU failures, hardware bottlenecks, system crashes).
Collaborate with IT and MLOps teams to design efficient, scalable solutions for deploying, monitoring, and managing machine learning models on DGX systems.
Optimize DGX infrastructure by implementing observability best practices, ensuring high performance and reducing operational costs.
Monitor system-level metrics such as hardware temperature, power consumption, and GPU/CPU health to prevent hardware degradation or failure.
Develop solutions for monitoring AI/ML model performance across DGX clusters, integrating logging and monitoring for model training, inference, and deployment processes.
Integrate observability tools (e.g., Prometheus, Grafana, Splunk) with NVIDIA-specific tools (e.g., DCGM, NVIDIA GPU Cloud) for real-time monitoring and alerting; a minimal sketch of this kind of integration follows this list.
Work closely with data scientists and machine learning engineers to ensure effective resource utilization and model observability, including the identification of performance bottlenecks and tuning for optimal GPU usage.
Drive troubleshooting and root cause analysis for failures and anomalies in both the DGX hardware and the AI/ML models running on the infrastructure.
Ensure compliance with ethical AI standards by monitoring fairness, model drift, and performance consistency.
Document standard methodologies and processes for managing, deploying, and monitoring AI workloads on DGX clusters.
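For illustration only, here is a minimal Python sketch of the Prometheus/DCGM integration and proactive alerting described above: it polls a dcgm-exporter temperature gauge through the Prometheus HTTP API and flags GPUs above a threshold. The endpoint URL, metric and label names, and the 85 °C limit are assumptions for the sketch, not details from this posting.

```python
"""Minimal sketch: poll a DCGM temperature metric from Prometheus and flag hot GPUs.

Assumptions (not taken from this posting): Prometheus is reachable at PROM_URL,
dcgm-exporter publishes DCGM_FI_DEV_GPU_TEMP with Hostname/gpu labels, and 85 C
is the chosen alert threshold. Adjust all three for a real deployment.
"""
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint
QUERY = "DCGM_FI_DEV_GPU_TEMP"                        # dcgm-exporter temperature gauge
TEMP_LIMIT_C = 85.0                                   # illustrative threshold


def hot_gpus(prom_url: str = PROM_URL, limit: float = TEMP_LIMIT_C) -> list[dict]:
    """Return one record per GPU whose reported temperature exceeds `limit`."""
    resp = requests.get(f"{prom_url}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    alerts = []
    for sample in resp.json()["data"]["result"]:
        temp_c = float(sample["value"][1])            # value is [timestamp, string_value]
        if temp_c > limit:
            alerts.append({
                "host": sample["metric"].get("Hostname", "unknown"),
                "gpu": sample["metric"].get("gpu", "?"),
                "temp_c": temp_c,
            })
    return alerts


if __name__ == "__main__":
    for alert in hot_gpus():
        print(f"ALERT: GPU {alert['gpu']} on {alert['host']} at {alert['temp_c']:.0f} C")
```

In practice the same threshold logic would usually live in a Prometheus alerting rule routed through Alertmanager; the script form is shown only to keep the sketch self-contained.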
Strong experience managing NVIDIA DGX systems or similar GPU-based computing clusters.
Proficiency in GPU monitoring tools such as NVIDIA Data Center GPU Manager (DCGM) and related NVIDIA libraries/APIs.
Experience with AI/ML model deployment and monitoring on large-scale infrastructure, including model performance metrics (latency, throughput, accuracy).
Hands-on experience with observability tools such as Prometheus, Grafana, Splunk, or similar, especially in high-performance computing environments.
Proficiency in scripting/programming languages (e.g., Python, Bash, Go) for automating cluster management and monitoring tasks.
Experience with container orchestration technologies (e.g., Docker, Kubernetes), including NVIDIA’s GPU operator for Kubernetes (a minimal GPU-scheduling sketch follows this list).
Familiarity with AI/ML lifecycle management tools such as MLflow, Kubeflow, or similar.
Strong understanding of HPC environments, including distributed computing, storage, and networking for AI/ML workloads.
Experience with infrastructure monitoring and troubleshooting at both the hardware (GPU, CPU, memory) and software (AI/ML models, applications) levels.
Strong analytical and problem-solving skills, with the ability to interpret complex data and develop actionable insights.
Excellent verbal and written communication skills, with the ability to convey technical concepts to non-technical partners.
Ability to work effectively in a collaborative team environment and lead multiple projects simultaneously.
Experience with NVIDIA NGC (NVIDIA GPU Cloud) and DGX OS software stack for large-scale AI workloads.
Understanding of AI workload orchestration with frameworks such as Slurm or Kubernetes in GPU-based clusters.
Knowledge of NVIDIA-optimized deep learning frameworks (TensorFlow, PyTorch) and their performance optimization on DGX infrastructure.
Experience with AIOps tools for automated anomaly detection and remediation across large-scale AI infrastructure.
Certification or experience with cloud platforms that offer GPU instances (AWS, GCP, Azure).
Familiarity with network performance tuning in HPC environments and large-scale AI workloads.
Familiarity with DevOps practices and tools, including CI/CD pipelines and infrastructure as code. Knowledge of graphs, graph databases, and graph theory. Familiarity with Terraform, Helm charts, Ansible, or similar tools.
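As a companion to the container-orchestration items above, here is a minimal Python sketch that uses the official Kubernetes client to list pods requesting nvidia.com/gpu resources, the kind of scheduling visibility the NVIDIA GPU operator enables. The kubeconfig source and the resource name are assumptions about a typical DGX/Kubernetes setup, not requirements stated in this posting.

```python
"""Minimal sketch: list pods requesting NVIDIA GPUs on a Kubernetes cluster.

Assumptions (not from this posting): the official `kubernetes` Python client is
installed, a kubeconfig is available, and GPU requests use the standard
`nvidia.com/gpu` resource name exposed by the NVIDIA GPU operator.
"""
from kubernetes import client, config

GPU_RESOURCE = "nvidia.com/gpu"


def pods_requesting_gpus() -> list[tuple[str, str, int]]:
    """Return (namespace, pod_name, gpu_count) for every pod requesting GPUs."""
    config.load_kube_config()                      # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    results = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        total = 0
        for container in pod.spec.containers:
            requested = (container.resources.requests or {}) if container.resources else {}
            total += int(requested.get(GPU_RESOURCE, 0))
        if total:
            results.append((pod.metadata.namespace, pod.metadata.name, total))
    return results


if __name__ == "__main__":
    for namespace, name, count in pods_requesting_gpus():
        print(f"{namespace}/{name}: {count} GPU(s) requested")
```

A comparable view on a Slurm-managed cluster would come from its own accounting and GRES reporting tools rather than the Kubernetes API.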