What you will be doing:
Design and maintain large-scale distributed training systems to support multi-modal foundation models for robotics.
Optimize GPU and cluster utilization for efficient model training and fine-tuning on massive datasets.
Implement scalable data loaders and preprocessors tailored for multimodal datasets, including videos, text, and sensor data.
Develop robust monitoring and debugging tools to ensure the reliability and performance of training workflows on large GPU clusters.
Collaborate with researchers to integrate cutting-edge model architectures into scalable training pipelines.
What we need to see:
Bachelor's degree in Computer Science, Robotics, Engineering, or a related field.
10+ years of full-time industry experience in large-scale MLOps and AI infrastructure.
Proven experience designing and optimizing distributed training systems with frameworks like PyTorch, JAX, or TensorFlow.
Deep understanding of GPU acceleration, CUDA programming, and cluster management tools like Kubernetes.
Strong programming skills in Python and a high-performance language such as C++ for efficient system development.
Strong experience with large-scale GPU clusters, HPC environments, and job scheduling/orchestration tools (e.g., SLURM, Kubernetes).
Ways to stand out from the crowd:
Master’s or PhD degree in Computer Science, Robotics, Engineering, or a related field.
Demonstrated Tech Lead experience, coordinating a team of engineers and driving projects from conception to deployment.
Strong experience building large-scale LLM and multimodal LLM training infrastructure.
Contributions to popular open-source AI frameworks or research publications in top-tier AI conferences, such as NeurIPS, ICRA, ICLR, CoRL.
You will also be eligible for equity.