Experience architecting, building, and operating end-to-end Kafka-based data pipelines, Spark jobs, Airflow DAGs, and Jupyter notebooks
Experience designing, building, and operating distributed applications at scale
Expert-level understanding of distributed processing technologies (Spark, MapReduce, etc.), with a focus on internals
Expert-level understanding of performance tuning for applications scaling to millions of requests per day and hundreds of terabytes of data
Excellent troubleshooting, problem-solving, critical thinking, and communication skills
Good understanding of Unix/Linux-based operating systems; proficient with command-line tools and general system debugging
Experience with Kubernetes or another container orchestration framework
Experience with large Hadoop/S3 clusters holding hundreds of terabytes of data
A strong data background, with experience handling OLTP or analytics workloads at scale
Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women and protected veterans. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.