About the Role
Our team has grown a lot in the last few years (currently 2K+ YARN nodes, 2,800+ pipelines). As part of the Athena team, you will design, implement, optimize, and manage large-scale streaming computing infrastructure. You will work on problems like unification of stream and batch processing, a common DSL for streaming analytics, streaming ingestion for the data lake, and minimal-downtime support, all of which impact multiple business use cases at Uber scale. You will also have the opportunity to collaborate with the open source communities for Flink, Presto, Pinot, and Kafka.
Deep-dive into the internals of Apache Flink, and improve platform usability and efficiency by building Presto SQL on top of Flink, optimizing the runtime, ensuring data delivery completeness, and unifying streaming and batch processing on top of Flink.
Design and implement distributed algorithms for streaming engine reliability to achieve zero downtime for critical use cases.
Basic Qualifications
Bachelor's degree in Computer Science or related field.
5+ years of total experience. Solid understanding of Java for backend/systems software development.
Preferred Qualifications
MS / PhD in Computer Science or related field.
2+ years of experience building large-scale distributed software systems. Experience managing stream processing systems with a strong availability SLA.
Experience working with Apache Flink, Apache Samza/Storm, Apache Calcite, Apache Spark or similar analytics technologies.
* Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .