Our AI Inference team puts ML models into production - we train and deploy large neural networks for efficient inference on compute-constrained edge devices (CPU / GPU / AI ASIC). This role is multi-disciplinary: you will work at the intersection of machine learning and systems, building the ML frameworks and infrastructure that enable the seamless deployment and inference of all neural networks that run on Autopilot and Optimus.