Expoint - all jobs in one place

Snowflake: Senior Performance Engineer, Snowpark
United States, California 
463159332

25.06.2024
Job Description
  • Be an expert in Snowflake architecture, Snowpark, query processing, workload profiling, and Snowflake tools.
  • Partner closely with sales engineering, solutions, support, and engineering teams to provide guidance on performance issues at any phase of the life cycle, from proof of concept (POC) through production.
  • Engage directly on customer performance challenges for our most critical and impactful customers.
  • Maintain a deep understanding of existing and complementary technologies and vendors, and develop best practices for integrating Snowflake with them.
  • Collaborate with Product Management and Engineering to continuously improve Snowflake’s products and ecosystem roadmap.

Our Ideal Applied Performance Engineer will have:

  • Minimum of 5 years of experience in a technical role delivering Database, Data Warehouse, or Data Engineering implementations.
  • Experience and ability to work directly with customers.
  • Strong communication skills, with the ability to present ideas and technical concepts effectively to both technical and executive audiences.
  • Deep understanding of the complete data engineering and analytics stack and workflow, from data ingestion and loading through transformation and data platform design to BI and analytics tools.
  • Deep technical expertise in databases, data warehouses, data processing, and applications.
  • Strong background in Python, Spark, and other data engineering platforms and tools.
  • Strong SQL experience, with the ability to write, troubleshoot, and tune complex SQL queries.

Strongly Desired:

  • Extensive knowledge of and experience with large-scale database technology (e.g. Databricks, Spark, Netezza, Oracle Exadata, Teradata, Greenplum, Google BigQuery, Amazon Redshift, Microsoft Synapse, etc.).
  • Experience with non-relational platforms and tools for large-scale data processing (e.g. Spark, Hadoop, HBase).
  • Software development experience with C/C++ or Java.
  • Scripting experience with Python, Ruby, Perl, Bash.
  • University degree in computer science, engineering, mathematics, or a related field, or equivalent experience.

Bonus points for experience with the following:

  • Familiarity and experience with common BI and data exploration tools (e.g. MicroStrategy, Business Objects, Tableau).
  • Experience with and understanding of large-scale infrastructure-as-a-service platforms (e.g. Amazon Web Services, Microsoft Azure, OpenStack).
  • Experience implementing ETL pipelines using custom and packaged tools.
  • Experience using AWS services such as S3, Kinesis, Elastic MapReduce, Data Pipeline.

The following represents the expected range of compensation for this role:

  • The estimated base salary range for this role is $246,100 - $376,913.
  • Additionally, this role is eligible to participate in Snowflake’s bonus and equity plan.