Snowflake SENIOR SOFTWARE ENGINEER - POLARIS & DATA LAKE CATALOG
United States, California
Job ID: 326659142
Posted: 19.11.2024
AS A SENIOR SOFTWARE ENGINEER, YOU WILL:
  • Design and implement scalable, distributed systems that support Iceberg DML/DDL transactions, schema evolution, partitioning, time travel, and more.

  • Architect and build systems that integrate Snowflake queries with external Iceberg catalogs (e.g., AWS Glue, Databricks Unity Catalog) and various data lake architectures, enabling seamless interoperability across cloud providers.

  • Develop high-performance, low-latency solutions for catalog federation, allowing customers to manage and query their data lake assets across multiple catalogs from a single interface.

  • Collaborate with Snowflake’s open-source team and the Apache Iceberg community to contribute new features and enhance the Iceberg REST specification.

  • Work on core data access control and governance features for Polaris, including fine-grained permissions such as row-level security, column masking, and multi-cloud federated access control.

  • Contribute to our managed Polaris service, ensuring that external query engines such as Spark and Trino can read from and write to Iceberg tables through Polaris in a way that’s decoupled from Snowflake’s core data platform (see the configuration sketch after this list).

  • Build tooling and services that automate data lake table maintenance, including compaction, clustering, and data retention for enhanced query performance and efficiency.

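For illustration only, the following is a minimal PySpark sketch of the kind of interoperability described above: an external engine configured against an Iceberg REST catalog (such as a Polaris deployment), running DML, a time-travel query, and table maintenance procedures. The endpoint URL, warehouse, catalog, namespace, table names, and library version are placeholders rather than details of Snowflake’s actual managed service, and authentication configuration is omitted.

```python
# A minimal, illustrative PySpark session wired to an Iceberg REST catalog.
# All names below (URL, warehouse, catalog, tables) are placeholders, and
# authentication settings (OAuth tokens, credentials) are omitted for brevity.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-rest-catalog-sketch")
    # Pull in the Iceberg runtime; the exact artifact/version depends on your Spark build.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
    # Iceberg's SQL extensions enable MERGE/UPDATE/DELETE and CALL procedures.
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a catalog named "polaris" that speaks the Iceberg REST protocol.
    .config("spark.sql.catalog.polaris", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.polaris.type", "rest")
    .config("spark.sql.catalog.polaris.uri", "https://polaris.example.com/api/catalog")
    .config("spark.sql.catalog.polaris.warehouse", "analytics_wh")
    .getOrCreate()
)

# Create a placeholder namespace and Iceberg table managed by the catalog.
spark.sql("CREATE NAMESPACE IF NOT EXISTS polaris.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS polaris.sales.orders
    (id BIGINT, item STRING, amount DECIMAL(10, 2)) USING iceberg
""")

# DML from an external engine against the catalog-managed table.
spark.sql("INSERT INTO polaris.sales.orders VALUES (1001, 'widget', 42.00)")

# Time travel: read the table as of an earlier point in time.
spark.sql(
    "SELECT * FROM polaris.sales.orders TIMESTAMP AS OF '2024-11-01 00:00:00'"
).show()

# Table maintenance via Iceberg's stored procedures: compact small files and
# expire old snapshots to keep scans fast and storage bounded.
spark.sql("CALL polaris.system.rewrite_data_files(table => 'sales.orders')")
spark.sql(
    "CALL polaris.system.expire_snapshots("
    "table => 'sales.orders', older_than => TIMESTAMP '2024-10-01 00:00:00')"
)
```

Because the catalog, not the engine, owns the table metadata, the same configuration pattern generally carries over to other engines such as Trino or Flink through their respective REST catalog settings.
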
OUR IDEAL SENIOR SOFTWARE ENGINEER WILL HAVE:
  • 8+ years of experience designing and building scalable, distributed systems.

  • Strong programming skills in Java, Scala, or C++ with an emphasis on performance and reliability.

  • Deep understanding of distributed transaction processing, concurrency control, and high-performance query engines.

  • Experience with open-source data lake formats (e.g., Apache Iceberg, Parquet, Delta) and the challenges associated with multi-engine interoperability.

  • Experience building cloud-native services and working with public cloud providers like AWS, Azure, or GCP.

  • A passion for open-source software and community engagement, particularly in the data ecosystem.

  • Familiarity with data governance, security, and access control models in distributed data systems.

BONUS POINTS FOR EXPERIENCE WITH:
  • Contributing to open-source projects, especially in the data infrastructure space.

  • Designing or implementing REST APIs, particularly in the context of distributed systems (a short sketch of calling an Iceberg REST catalog directly follows this list).

  • Managing large-scale data lakes or data catalogs in production environments.

  • Working on highly performant and scalable query engines such as Spark, Flink, or Trino.

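For context on the REST API work mentioned above, here is a hypothetical sketch of exploring an Iceberg REST catalog’s read-only endpoints directly from Python. The base URL and bearer token are placeholders, some deployments insert a path prefix into these routes, and authentication schemes vary by catalog.

```python
# Hypothetical exploration of an Iceberg REST catalog over plain HTTP.
# The base URL and token are placeholders; real deployments may require a
# path prefix (e.g. /v1/{prefix}/namespaces) and a different auth scheme.
import requests

BASE = "https://polaris.example.com/api/catalog"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder credential

# Catalog configuration: defaults and overrides the server asks clients to apply.
config = requests.get(f"{BASE}/v1/config", headers=HEADERS, timeout=10)
print(config.json())

# List namespaces, then the Iceberg tables registered under one of them.
namespaces = requests.get(f"{BASE}/v1/namespaces", headers=HEADERS, timeout=10)
print(namespaces.json())

tables = requests.get(f"{BASE}/v1/namespaces/sales/tables", headers=HEADERS, timeout=10)
print(tables.json())
```
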
WHY JOIN THE POLARIS & DATA LAKE CATALOG TEAM AT SNOWFLAKE?
  • Be part of a pioneering effort to build the most open and interoperable data lake ecosystem in the industry.

  • Work on a high-impact open-source project that solves real-world data challenges for enterprise customers like Netflix, AWS, and others.

  • Collaborate with some of the brightest minds in the data ecosystem, including core contributors to Apache Iceberg.

  • Have the opportunity to innovate in one of the fastest-growing and evolving areas in data infrastructure, where you can make a direct impact on Snowflake’s growth and the broader open-source community.

The following represents the expected range of compensation for this role:

  • The estimated base salary range for this role is $187,000 - $276,000.
  • Additionally, this role is eligible to participate in Snowflake’s bonus and equity plan.