Expoint – all jobs in one place
Finding a high-tech job at the best companies has never been easier

Palo Alto Networks – Principal Machine Learning Platform Engineer, Prisma AIRS
United States, California 
Job ID: 699622881


Being the cybersecurity partner of choice, protecting our digital way of life.

Your Career

As a Principal Machine Learning Inference Engineer, you will serve as a technical authority and visionary for the Prisma AIRS team. You will be responsible for the architectural design and long-term strategy of the ML inference layer of our AI platform. Beyond individual contribution, you will lead complex technical projects, mentor senior engineers, and set the standard for performance, scalability, and engineering excellence across the organization. Your decisions will have a profound and lasting impact on our ability to deliver cutting-edge AI security solutions at massive scale.

Your Impact

  • Architect and Design: Lead the architectural design of a highly scalable, low-latency, and resilient ML inference platform capable of serving a diverse range of models for real-time security applications.

  • Technical Leadership: Provide technical leadership and mentorship to the team, driving best practices in MLOps, software engineering, and system design.

  • Strategic Optimization: Drive the strategy for model and system performance, guiding research and implementation of advanced optimization techniques like custom kernels, hardware acceleration, and novel serving frameworks.

  • Set the Standard: Establish and enforce engineering standards for automated model deployment, robust monitoring, and operational excellence for all production ML systems.

  • Cross-Functional Vision: Act as a key technical liaison to other principal engineers, architects, and product leaders to shape the future of the Prisma AIRS platform and ensure end-to-end system cohesion.

  • Solve the Hardest Problems: Tackle the most ambiguous and challenging technical problems in large-scale inference, from mitigating novel security threats to achieving unprecedented performance goals.

Your Experience

  • BS/MS or Ph.D. in Computer Science, a related technical field, or equivalent practical experience.

  • Extensive professional experience in software engineering with a deep focus on MLOps, ML systems, or productionizing machine learning models at scale.

  • Expert-level programming skills in Python are required; experience with a systems language such as Go, Java, or C++ is a plus.

  • Deep, hands-on experience designing and building large-scale distributed systems on a major cloud platform (GCP, AWS, Azure, or OCI).

  • Proven track record of leading the architecture of complex ML systems and MLOps pipelines using technologies like Kubernetes and Docker.

  • Mastery of ML frameworks (TensorFlow, PyTorch) and extensive experience with advanced inference optimization tools (ONNX, TensorRT).

  • A strong understanding of popular model architectures (e.g., Transformers, CNNs, GNNs) is a significant plus.

  • Demonstrated expertise with modern LLM inference engines (e.g., vLLM, SGLang, TensorRT-LLM) is required. Open-source contributions in these areas are a significant plus.

  • Experience with low-level performance optimization, such as custom CUDA kernel development or using Triton Language, is a plus.

  • Experience with data infrastructure technologies (e.g., Kafka, Spark, Flink) is a plus.

  • Familiarity with CI/CD pipelines and automation tools (e.g., Jenkins, GitLab CI, Tekton) is a plus.

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary plus commission target (for sales/commissioned roles) is expected to be between $151,000 and $246,500 per year. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits is available.

All your information will be kept confidential according to EEO guidelines.