As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high-performance computing, and computationally intensive workloads. In this role, you will optimize capacity management and allocation in GPU compute clusters. You will help us with the strategic challenges we encounter in maximizing and optimizing our usage of all datacenter resources, including compute, storage, network, and power. You will help build methodologies, tools, and metrics to enable effective resource utilization in a heterogeneous compute environment, and assist with growth planning across our global computing environment.
What you'll be doing:
Building and improving our ecosystem around GPU-accelerated computing, including developing large-scale automation solutions
Supporting our researchers in running their workflows on our clusters, including performance analysis and optimization of deep learning workflows
Diagnosing customer utilization deficiencies and job scheduling issues
Building automation, tools, and metrics to help us increase productive utilization of resources
Collaborating with the scheduler team to improve scheduling algorithms
Performing root cause analysis and suggesting corrective actions for problems at both large and small scales
Finding and fixing problems before they occur
What we need to see:
Bachelor’s degree (Master's preferred) in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
10+ years of experience designing and operating large-scale compute infrastructure.
Experience analyzing and tuning performance for a variety of AI/HPC workloads.
Working knowledge of cluster configuration management tools such as Ansible, Puppet, or Salt.
Experience with advanced AI/HPC job schedulers, ideally including Slurm, Kubernetes (K8s), RTDA, or LSF
Proficiency in Python or another programming language
Experience with AI/HPC workflows that use MPI
Ways to stand out from the crowd:
Experience with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking
Experience with machine learning and deep learning concepts, algorithms, and models
Proficiency with CentOS/RHEL and/or Ubuntu Linux distributions
Familiarity with InfiniBand, including IPoIB and RDMA, and an understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads
Familiarity with deep learning frameworks like PyTorch and TensorFlow
You will also be eligible for equity and .