
Palo Alto Networks – Senior Staff Data Platform Engineer
United States, California 
205931071

26.08.2025

Being the cybersecurity partner of choice, protecting our digital way of life.

Your Career

This is an in-office role at our HQ (Santa Clara, CA).

Your Impact

  • Design, develop, and maintain data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse or data lake environment.

  • Automate, manage, and scale the underlying infrastructure for our data platforms (e.g., Airflow, Spark clusters), applying SRE and DevOps best practices for performance, reliability, and observability.

  • Collaborate with stakeholders to gather requirements and translate business needs into robust technical and platform solutions.

  • Optimize and tune existing data pipelines and infrastructure for performance, cost, and scalability.

  • Implement and enforce data quality and governance processes to ensure data accuracy, consistency, and compliance with regulatory standards.

  • Work closely with the BI team to design and develop dashboards, reports, and analytical tools that provide actionable insights to stakeholders.

  • Mentor junior members of the team and provide guidance on best practices for data engineering, platform development, and DevOps.

  • (Nice-to-have) Aptitude for proactively identifying and implementing GenAI-driven solutions that measurably improve the reliability and performance of data pipelines, or that optimize key processes such as data quality validation and root-cause analysis for data issues.

Your Experience

  • Bachelor's degree in Computer Science, Engineering, or a related field.

  • 5+ years of experience in data engineering, platform engineering, or a similar role, with a strong focus on building and maintaining data pipelines and the underlying infrastructure.

  • Must have proven experience in a DevOps, SRE, or System Engineering role, with hands-on expertise in infrastructure as code (e.g., Terraform, Ansible), CI/CD pipelines, and monitoring/observability tools.

  • Expertise in SQL programming and database management systems (e.g., BigQuery).

  • Hands-on experience with ETL tools and technologies (e.g., Apache Spark, Apache Airflow).

  • Experience with cloud platforms such as Google Cloud Platform (GCP) and relevant services (e.g., Dataflow, Dataproc, BigQuery, Cloud Composer, GKE).

  • Experience with big data tools such as Spark and Kafka.

  • Experience with object-oriented and functional scripting languages such as Python or Scala.

  • (Nice-to-have) Demonstrated readiness to leverage GenAI tools across the data engineering lifecycle, for example by generating complex SQL queries, scaffolding initial Python/Spark scripts, or auto-generating pipeline documentation.

  • (Plus) Experience with BI tools and visualization platforms (e.g., Tableau).

  • (Plus) Experience with SAP HANA, SAP BW, SAP ECC, or other SAP modules.

  • Strong analytical and problem-solving skills, with the ability to analyze complex data sets and derive actionable insights.

  • Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary plus commission target (for sales/commissioned roles) is expected to be between $122,000 and $197,000 per year. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.

All your information will be kept confidential according to EEO guidelines.