What you'll be doing:
Design, post-train, and optimize novel world models (e.g., diffusion video models, VLMs, VLAs) for Physical AI applications.
Contribute to highly collaborative development of large-scale training infrastructure, high-efficiency inference pipelines, and scalable data pipelines.
Work with teams in research, software, and product to bring world models from idea to deployment.
Collaborate on open-source and internal projects, author technical papers or patents, and mentor junior engineers.
Prototype rapidly and iterate on experiments across cutting-edge AI domains, including text-to-image/video generation, reinforcement learning, reasoning, and foundation models.
Design and implement model distillation algorithms for model-size reduction and diffusion-step optimization. Profile and benchmark training and inference pipelines to meet production performance targets.
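To make the distillation bullet above concrete, here is a minimal, illustrative NumPy sketch of output-matching distillation: a small "student" is trained to reproduce a frozen "teacher" on the same inputs. This is the core idea behind compressing a model or cutting diffusion sampling steps; the linear models, names, and hyperparameters here are toy assumptions, not this team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_teacher = rng.normal(size=(d, d))   # frozen teacher weights
W_student = np.zeros((d, d))          # student starts from scratch

def distill_step(W_student, lr=0.1, batch=64):
    """One SGD step minimizing MSE between student and teacher outputs."""
    x = rng.normal(size=(batch, d))   # sampled inputs (e.g., noised latents)
    target = x @ W_teacher            # teacher predictions (fixed)
    pred = x @ W_student              # student predictions
    err = pred - target
    loss = (err ** 2).mean()
    grad = 2 * x.T @ err / (batch * d)  # gradient of the MSE w.r.t. W_student
    return W_student - lr * grad, loss
```

In a real diffusion-distillation setup the "teacher" would be multiple denoising steps of the full model and the "student" a single step (as in progressive distillation), but the training loop has the same shape: sample inputs, match teacher outputs, descend the gap.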
What we need to see:
Demonstrated experience building and deploying generative AI systems (2+ years in industry or 3+ years in research/postdoc roles).
Proficiency in PyTorch, JAX, or other deep learning frameworks is a must!
We work on a wide range of foundation models. You should have expertise in one or more of: diffusion models, auto-regressive models, VAE/GAN architectures, retrieval-augmented generation, neural rendering, or multi-agent systems.
Our models are predominantly built on transformer architectures. You should be intimately familiar with the attention mechanism and its variants.
Hands-on experience with large-scale training (e.g., ZeRO, DDP, FSDP, TP, CP) and data processing (e.g., Ray, Spark).
All of our work is in Python, and we open-source our products, so production-quality software engineering skills are highly relevant.
MS or PhD or equivalent experience in Computer Science, Machine Learning, Applied Math, Physics, or a related field.
15+ years of relevant software development experience
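As a small illustration of the attention-mechanism familiarity mentioned above, here is a self-contained NumPy sketch of scaled dot-product attention with an optional causal mask (one common variant). This is illustrative only, assuming single-head, unbatched inputs; it is not production code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the row max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v, causal=False):
    """q, k, v: (seq, d) arrays. Returns (output, attention_weights)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # (seq, seq) similarity logits
    if causal:
        # Mask out future positions so token i attends only to tokens <= i.
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ v, weights
```

Multi-head, grouped-query, and flash-style attention are all built from this same core, differing in how q/k/v are split across heads and how the softmax-matmul is scheduled.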
Ways to stand out from the crowd:
Familiarity with high-performance computing and GPU acceleration.
Contributions to influential open-source libraries or publications at top conferences (NeurIPS, ICML, CVPR, ICLR).
Experience working with multimodal data (e.g., vision-language, VLA, audio).
Prior work with NVIDIA GPU-based compute clusters or simulation environments.
You will also be eligible for equity and .