We’ve used both Machine Learning and Artificial Intelligence throughout the company for years, from personalizing recommendations for our members to optimizing operations.
We are looking for a highly technical Product Manager with demonstrated excellence in delivering platforms that enhance data scientist productivity. You’ll craft a vision for ML inference at Netflix that addresses the needs of both model producers and consumers. You’ll work closely with many engineering teams to deliver solutions for model management and for batch and online inference. You’ll focus especially on scaling our infrastructure to large generative models, and you’ll find ways to expand Netflix’s ML footprint to edge computing.
This role is based in our Los Gatos office on a hybrid model. We are also open to employees who are remote and based on the West Coast of the US; monthly travel will be required if remote.
Primary Responsibilities
- Build a vision of machine learning inference at Netflix, from model management to hosting to consumption
- Engage a broad set of customers to collect and clearly document requirements. Prioritize roadmaps to meet these needs.
- Drive execution and adoption of the ML Platform’s inference tools across its stakeholders: the Consumer Engineering, Algorithms Engineering, Studio and Content Engineering, and Data Science organizations
- Understand the variety of ML use cases at Netflix, and define standard personas and tasks to enumerate requirements against
- Define new pathways to support Generative AI within the ML Platform
- Establish and follow through on regular measurement of success metrics like adoption and developer productivity
- Apply strong technical, organizational, and communication skills to drive alignment across disparate groups
- Own regular communication to senior Netflix management as well as broad communication to ML practitioners at Netflix
Skills and Requirements
- 7+ years of experience in technical product management
- Extensive experience working and innovating on ML/AI platforms
- Deep understanding of ML/AI development workflows and MLOps. Working knowledge of ML tooling, from frameworks (PyTorch, TensorFlow, etc.) to inference essentials (model servers such as TensorRT/Triton, feature stores, etc.)
- Familiarity with modern application patterns, from cloud computing (AWS, Azure, GCP, etc.) to DevOps
- Technical knowledge of inference internals, including GPU management, auto-scaling, batching/pipelining, etc.
- Knowledge and experience in defining, measuring, and improving developer productivity
- Strategic thinking and ability to craft and drive a vision for maximum positive business impact
- Demonstrated initiative in clarifying ambiguous projects and driving them to completion
- Demonstrated leadership experience working effectively with engineers, data scientists and ML practitioners
- Excellent written communication skills and the ability to present technical content to non-technical audiences; ability to partner with different functions to ensure that your solutions drive real business impact
- Experience in identifying tradeoffs, surfacing needs for clarity, and driving decision-making
Our compensation structure consists solely of an annual salary; we do not have bonuses. You choose each year how much of your compensation you want in salary versus stock options. To determine your personal top-of-market compensation, we rely on market indicators and consider your specific job family, background, skills, and experience. The overall market range for this role is $120,000 - $515,000.