What to Expect
As a member of the Autopilot Team, you will build systems for offline and online 3D reconstruction and scene understanding that are robust, accurate, and performant. You will work on a world-class spatial computing platform deployed at scale.
What You’ll Do
- Leverage state-of-the-art techniques for spatial computing (such as NeRFs, Diffusion Models, Gaussian Splatting, Multi-View Stereo, TSDF Fusion, Structure from Motion, and SLAM)
- Improve mission-critical perception systems that enhance customer comfort and safety for all Tesla cars on the road (whether driving autonomously or not)
- Push the boundaries of real-world AI
- Adhere strictly to strong software engineering practices to develop novel work quickly and safely
What You’ll Bring
- Experience with 3D computer vision and graphics concepts such as camera models, multi-view geometry, and rendering pipelines
- Domain expertise in an area of computer vision such as object detection and tracking, pose estimation, depth estimation, panoptic segmentation, 3D reconstruction, visual SLAM, structure from motion, neural rendering (e.g., NeRFs or Gaussian splatting), or novel view synthesis
- Solid mathematical fundamentals, including linear algebra, computational geometry, vector calculus, probability theory, and numerical optimization
- Understanding of modern deep learning techniques (such as transformers, diffusion models, autoregressive models, multi-modal models, and CNNs)
- Experience in real-time graphics or offline (non-real-time) rendering is a plus
- Exposure to sensor fusion, signal processing, computational photography, and robotics is a plus