
Airbnb: Staff Data Platform Engineer, Community Support
United States

A Typical Day:

  • Work closely with the core data engineering team, the data services team, and other product engineering teams in the Community Support Platform to understand their productivity and data-authoring pain points, and build scalable, flexible solutions to resolve them.
  • Develop a deep understanding of the different logging and data processing frameworks; collaborate with data infrastructure teams to evolve how we integrate data flows across frameworks and serving layers, both ML and non-ML, so systems can work with data more effectively.
  • Own challenges end-to-end, proactively addressing gaps and acquiring new skills to resolve complex issues.
  • Mentor and inspire your teammates, fostering code quality, operational excellence, and shared learning.
  • Participate in all phases of software development from architecture/design through implementation, testing, and on-call.
  • Develop, automate, and standardize the logging, enrichment, and serving of data for ML training, inference, benchmarking, and monitoring (anomaly detection, safe deploys) to build the next generation of Generative AI products.
  • Enable data engineers and analytics engineers to author data products.
  • Design, build, and maintain robust and efficient data pipelines and APIs that collect, process, and serve data from various sources, including backend events logged as part of LLM flows, customer interactions across multiple channels, CS agents, and LLM evaluations.

Your Expertise:

  • 9+ years of industry experience at the intersection of platform data engineering and software engineering, with a BS, MS, or PhD in CS or a similar field.
  • Proven ability not only to develop data processing pipelines, but also to design guardrails and abstractions that scale, reducing toil and raising confidence for all engineers.
  • Deep experience building distributed batch/streaming pipelines (e.g., Spark, Flink, Kafka) and working with distributed storage (e.g., HDFS, S3).
  • Expertise with ETL orchestration tools (Airflow, Luigi, AWS Glue, etc.).
  • Strong knowledge of data warehousing, relational databases (PostgreSQL, MySQL), and columnar stores (Redshift, BigQuery, ClickHouse, HBase).
  • Comfort driving architectural discussions across services, repositories, and environments.
  • Skilled in SQL and data processing (batch and streaming), with a track record of analyzing large datasets to uncover gaps and insights.
  • Excellent collaboration and communication skills; able to influence technical direction across teams.

How We'll Take Care of You:

Pay Range
$255,000 USD

Offices: United States