
Fortinet Staff Data Scientist Product Efficiency & Infra 
United States, California 
983554546

05.09.2025

Responsibilities

  • Partner with product, engineering, finance, sales, and customer success teams to model and forecast product workloads, define metrics, and build tools for planning.
  • Design experiments and studies to reduce uncertainty in workload forecasts and optimize product/revenue flows.
  • Analyze data to address business questions, generate insights using statistical methods, and present findings to stakeholders.
  • Architect data models, pipelines, and applications that serve workload data to finance and infrastructure teams.
  • Develop and productionize metrics and dashboards for system availability, reliability, and performance.
  • Conduct root cause and causal inference analyses of availability issues, recommending remediations.
  • Shape data science areas like segmentation, recommendation systems, forecasting, and cost prediction.
  • Analyze product usage to identify growth drivers and improvement opportunities.
  • Manage stakeholders, define project OKRs, and communicate results effectively.
  • Mentor team members and champion evidence-based decision-making with self-service data products.
  • Represent data science across the organization and at conferences.

Minimum Requirements

  • Bachelor’s or higher in a quantitative field (e.g., Statistics, Math, Computer Science, Engineering).
  • 8–10+ years in analytics driving business decisions (e.g., product/marketing analytics, business intelligence).
  • Proven ability to work independently and engage stakeholders proactively.
  • Expertise in SQL, large-scale data platforms (e.g., Hadoop), statistical analysis, and techniques such as regression.
  • Experience with optimized data formats for analytics, such as Parquet, and ideally Apache Iceberg.
  • Expertise in defining schemas, managing metadata, and crawling data sources.
  • Proficiency in writing complex SQL queries to analyze data stored in S3-based data lakes.
  • Experience integrating dbt with orchestration tools such as Apache Airflow to automate, schedule, and monitor data pipelines that feed machine learning models.
  • Proficiency in Python and strong communication skills.

Ideal Qualifications

  • 7+ years in data science/ML at a high-growth tech company.
  • Expertise in system reliability metrics and data pipelines (e.g., Airflow, Spark).
  • Familiarity with product analytics.
  • Strong coding skills (e.g., Python) and cross-functional collaboration skills.
  • BS/MS/Ph.D. in a quantitative field.

Wage ranges are based on various factors including the labor market, job type, and job level. Exact salary offers will be determined by factors such as the candidate's subject knowledge, skill level, qualifications, experience, and geographic location.


We encourage candidates from all backgrounds and identities to apply. We offer a supportive work environment and a competitive Total Rewards package to support you with your overall health and financial well-being.