Identify gaps and optimize the performance of the collective communication libraries used in the training software stack
Build infrastructure that improves observability into the collective communication libraries, significantly reducing the cognitive load of debugging massively distributed training jobs
Optimize the AI network software stack with respect to the network topology of our AI supercomputing clusters
Develop and integrate health checks into the fault-tolerant training infrastructure
Collaborate with the supercomputing and research teams to ensure that the network bandwidth and topology requirements of modern AI workloads are met
What You’ll Bring
Members of the Autopilot AI Infrastructure team are expected to be adaptable to the dynamic requirements of AI research and capable of contributing across all parts of the AI training software stack
Strong work ethic and independence
3+ years of relevant industry experience (HPC, lossless networks) in a fast-paced environment
Strong knowledge of datacenter server systems (PCIe, NUMA, RDMA NICs and switches)
Experience working with, testing, and debugging datacenter RDMA networking fabrics (InfiniBand, RoCE) and collective communication libraries (e.g. NCCL)
Experience in debugging issues or bottlenecks in the Linux kernel
Experience with massively parallel programming across multiple hosts
Knowledge of, or interest in understanding, ML training workloads and how they translate to the relevant collective communication patterns