What you'll be doing:
Design, post-train, and optimize foundation models (e.g., LLMs, diffusion video models, VLMs, VLAs) for real-world applications.
Contribute to highly collaborative development on large-scale training infrastructure, high-efficiency inference pipelines, and scalable data pipelines.
Work with teams in research, software, and product to bring world models from idea to deployment.
Collaborate on open-source and internal projects, author technical papers or patents, and mentor junior engineers.
Prototype and iterate rapidly on experiments across cutting-edge AI domains, including agentic systems, reinforcement learning, reasoning, and video generation.
Design and implement model distillation algorithms for model size reduction and diffusion-step optimization.
Profile and benchmark training and inference pipelines to meet production performance requirements.
What we need to see:
We are looking for stellar experience building and deploying generative AI systems (minimum 8 years in industry or 5+ years in research/postdoc roles).
Proficiency in PyTorch, JAX, or other deep learning frameworks is a must!
We are working on a full range of foundation models. You should have expertise in one or more of: LLMs, coding agents, diffusion models, autoregressive models, VAE/GAN architectures, retrieval-augmented generation, neural rendering, or multi-agent systems.
Our models are predominantly built on transformer architectures, so you should be intimately familiar with the attention mechanism and its variants.
Hands-on experience with large-scale training (e.g., ZeRO, DDP, FSDP, TP, CP) and data processing (e.g., Ray, Spark).
Everything we build is in Python, and we open-source our product, so production-quality software engineering skills are highly relevant.
MS, PhD, or equivalent experience in Computer Science, Machine Learning, Applied Math, Physics, or a related field.
Ways to stand out from the crowd:
Familiarity with high-performance computing and GPU acceleration.
Contributions to influential open-source libraries or publications at top-tier conferences (NeurIPS, ICML, CVPR, ICLR).
Experience working with multimodal data (e.g., vision-language, VLA, audio).
Prior work with NVIDIA GPU-based compute clusters or simulation environments.
You will also be eligible for equity and benefits.