Understand customer requirements and define product strategies.
Design, develop, and operate highly reliable, large-scale data lake systems.
Embrace Snowflake innovations alongside open-source standards and tool sets.
Actively influence the direction of open-source standards.
Partner closely with Product teams to understand requirements and design cutting-edge capabilities that go directly into customers' hands.
Analyze and solve fault-tolerance, high-availability, performance, and scale challenges.
Ensure operational excellence of the services and meet our commitments to customers on reliability, availability, and performance.
8+ years of hands-on experience with large-scale, data-intensive distributed systems, especially distributed file systems, object storage, data warehouses, data lakes, data analytics, and data platform infrastructure.
Strong development skills in Java and C++.
An active PMC (Project Management Committee) member or Committer on open-source projects such as Apache Iceberg, Parquet, Spark, Hive, Flink, Delta Lake, Presto, Trino, or Avro.
Proven track record of leading and delivering large and complex big data projects across organizations.
A growth mindset and excitement about challenging the status quo by seeking innovative solutions.
An excellent team player who consistently makes everyone around you better.
Experience with public clouds (AWS, Azure, GCP) is a plus.
BS/MS in Computer Science or a related major, or equivalent experience.
The following represents the expected range of compensation for this role: