Bachelor’s degree in Engineering, Computer Science, Business Information Systems, or related field
At least 5 years of relevant industry experience
Experience with distributed computing technologies such as Hadoop and Spark
Proficiency in Scala, Java and SQL
Expertise in designing, implementing and supporting highly scalable data systems and services
Expertise in building and running large-scale data pipelines, including distributed messaging such as Kafka and data ingestion from various sources to feed batch and near-real-time or streaming compute components
Solid understanding of data modeling and data architecture optimized for big data patterns, such as efficient storage and querying on HDFS
Experience with distributed storage and network resources at the level of hosts, clusters and data centers, to troubleshoot and prevent performance issues
Experience with data lake and data warehouse solutions
Experience with Apache Flink
Experience with Apache Iceberg tables
Experience with Apache Beam
Familiarity with Docker and Kubernetes
Familiarity with Apache Airflow
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.