5+ years of experience and a Bachelor’s degree in Engineering, Computer Science, Business Information Systems, or a similar field, or equivalent work experience.
Experience with distributed computing technologies like Hadoop and Spark.
Proficiency in Scala, Java, and SQL.
Strong data analysis skills, including the ability to analyze large-scale datasets, and a keen data sense.
Familiarity with data visualization tools, excellent analytical and critical thinking skills, and the ability to communicate complex insights effectively.
A background in data science or statistics, with the ability to provide scientific support to the team.
Expertise in designing, implementing, and supporting highly scalable data systems and services.
Expertise in building and running large-scale data pipelines, including distributed messaging systems such as Kafka and data ingestion from a variety of sources to feed batch, near-real-time, and streaming compute components.
A solid understanding of data modeling and architecture optimized for big data patterns, such as efficient storage and querying on HDFS.
Experience with distributed storage and network resources, including hosts, clusters, and data centers, sufficient to troubleshoot and prevent performance issues.
A background in data analysis and data science is preferred.
Familiarity with Tableau, Apache Iceberg tables, Docker, Kubernetes, and Apache Airflow is also advantageous.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.