Bachelor’s or Master’s degree in Computer Science (or related technical field) with 10+ years of relevant experience.
Expertise in designing, deploying, and scaling big data systems using open-source technologies such as Apache Spark, Flink, Trino, Iceberg, and object storage (e.g., S3).
Deep understanding of Linux internals, system tuning, and network performance optimization.
Hands-on experience with high-availability deployments, disaster recovery, and SLA-driven services.
Track record of successfully leading platform modernization and migration efforts at scale.
Strong experience in Kubernetes (or AWS EKS), Python scripting, configuration management, and observability tooling.
Proven ability to engage with senior stakeholders to define strategy and guide technical transitions.
Strong sense of ownership and integrity, reflected in both communication and outcomes.
Passion for automating operational workflows and eliminating manual processes through scripting.
Exceptional incident response and troubleshooting skills, with the ability to generate and test hypotheses under pressure to identify root causes.
Experience working with virtualized environments, EBS/S3-based storage, and platform-level reliability.
Familiarity with Git for configuration management, including cluster-level settings and deployment infrastructure.
Comfortable supporting application developers without modifying source application code.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.