Design and implement backend services and tooling that handle iteration and batch processing of inference, simulation, and evaluation workloads
Work closely with the other Autonomy teams to build foundational components and bridge gaps in ML compiler and runtime infrastructure, designing for scalability, reliability, security, and high performance
What You’ll Bring
Pursuing a degree in Computer Science, Computer Engineering, or a related field of study with a graduation date between December 2025 and May 2026
Proficiency with Python
Familiarity with managing hardware inference chips like TPUs and with optimizing machine learning inference workloads for low latency and scale
Familiarity with operating systems concepts such as networking, processes, file systems, and virtualization
Familiarity with concurrent programming
Familiarity with C++ and/or Go
Experience with Linux, container orchestrators like Kubernetes, and bare-metal provisioning tools like Ansible or similar
Experience with data stores such as PostgreSQL and Redis