Uber Data Engineer II 
United States, West Virginia 
651016285

23.07.2025

About the Role

You’ll collaborate with engineers, analysts, and product stakeholders to understand requirements and shape data architecture that supports business goals. This is a high-impact role that demands strong communication skills, technical depth, and a proactive mindset.

---- What the Candidate Will Do ----

  1. Bring industry experience in data engineering or a related field.
  2. Partner with cross-functional Safety and Insurance teams across global tech hubs to deliver on Uber’s strategic objectives.
  3. Design and develop scalable data pipelines for real-time and batch processing to extract, clean, enrich, and load data (a rough sketch follows this list).
  4. Enhance data quality through monitoring, validation, and alerting mechanisms.
  5. Continuously evolve our data architecture to support new products, features, and safety initiatives.
  6. Contribute to building feature pipelines that support data science models for predictions and business decisions.
  7. Own end-to-end data solutions—from requirements gathering through to production deployment.
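
As a rough, non-authoritative illustration of responsibility 3 above, the sketch below shows a minimal batch extract-clean-enrich-load job in PySpark. The paths, table names, and columns (raw_trips, dim_cities, trips_enriched) are hypothetical placeholders, not Uber's actual schemas.

  # Minimal sketch of a batch extract-clean-enrich-load job in PySpark.
  # All paths, tables, and columns below are hypothetical placeholders.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("safety_trips_batch").getOrCreate()

  # Extract: read one daily partition of raw trip events.
  raw = spark.read.parquet("/warehouse/raw_trips/ds=2025-07-23")

  # Clean: drop malformed rows and deduplicate on the event key.
  clean = raw.filter(F.col("trip_id").isNotNull()).dropDuplicates(["trip_id"])

  # Enrich: join reference data to attach city metadata.
  cities = spark.read.parquet("/warehouse/dim_cities")
  enriched = clean.join(cities, on="city_id", how="left")

  # Load: write the result partitioned by date for downstream consumers.
  (enriched.withColumn("ds", F.lit("2025-07-23"))
           .write.mode("overwrite")
           .partitionBy("ds")
           .parquet("/warehouse/safety/trips_enriched"))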

---- Basic Qualifications ----

  1. Bachelor’s degree in Computer Science, Engineering, or a related technical field—or equivalent practical experience.
  2. 3+ years of professional software development experience, with a strong focus on Data Engineering and Data Architecture.
  3. Proven ability to work closely with product managers and business stakeholders to gather requirements and design scalable data infrastructure that supports cross-functional needs.
  4. Advanced SQL expertise (illustrated in the sketch after this list), including proficiency with:
    • Window functions
    • Common Table Expressions (CTEs)
    • Dynamic SQL variables
    • Hierarchical queries
    • Materialized views
  5. Hands-on experience with big data and distributed computing technologies, such as:
    • HDFS
    • Apache Spark
    • Apache Flink
    • Hive
    • Presto
  6. Strong programming skills in Python, with a solid understanding of object-oriented programming principles.
  7. Experience designing and maintaining large-scale distributed storage and database systems, including both SQL and NoSQL solutions (e.g., Hive, MySQL, Cassandra).
  8. Deep understanding of data warehousing architecture and data modeling best practices.
  9. Familiarity with major cloud platforms such as Google Cloud Platform (GCP), AWS, or Azure.
  10. Working knowledge of reporting and business intelligence tools, such as Tableau or similar platforms.
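
As a rough sketch tied to basic qualifications 4 and 5 above, the example below runs a common table expression and a window function through Spark SQL from Python. The trips table and its columns are hypothetical and assumed to already be registered as a view.

  # Minimal Spark SQL sketch: a CTE plus a window function.
  # The `trips` view and its columns are hypothetical placeholders.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("sql_examples").getOrCreate()

  query = """
  WITH daily_trips AS (               -- common table expression (CTE)
      SELECT city_id, ds, COUNT(*) AS trip_count
      FROM   trips
      GROUP  BY city_id, ds
  )
  SELECT city_id,
         ds,
         trip_count,
         -- window function: 7-day moving average per city
         AVG(trip_count) OVER (
             PARTITION BY city_id
             ORDER BY ds
             ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
         ) AS trip_count_7d_avg
  FROM daily_trips
  """

  result = spark.sql(query)  # assumes `trips` is registered as a temp view or Hive table
  result.show(10)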

---- Preferred Qualifications ----

  1. Advanced experience with SQL, including Spark SQL, Hive, and Presto, with a deep understanding of query optimization and performance tuning.
  2. Hands-on experience with streaming data technologies, such as Apache Kafka, Apache Flink, or Spark Structured Streaming, for building real-time data pipelines (see the streaming sketch after this list).
  3. Experience working with Apache Pinot or similar OLAP data stores for high-performance, real-time analytics.
  4. Familiarity with Python libraries for big data processing (e.g., PySpark) and working knowledge of Scala in distributed data environments.
  5. Practical experience in deploying and managing data solutions on cloud platforms such as GCP, AWS, or Azure.
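
As a rough sketch tied to preferred qualification 2 above, the example below reads a Kafka topic with Spark Structured Streaming, parses the payload, and appends it to a Parquet sink. The broker address, topic name, event schema, and output paths are hypothetical placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

  # Minimal Structured Streaming sketch: Kafka source -> parse JSON -> Parquet sink.
  # Broker, topic, schema, and paths are hypothetical placeholders.
  from pyspark.sql import SparkSession, functions as F
  from pyspark.sql.types import StructType, StructField, StringType, DoubleType

  spark = SparkSession.builder.appName("safety_events_stream").getOrCreate()

  event_schema = StructType([
      StructField("trip_id", StringType()),
      StructField("event_type", StringType()),
      StructField("severity", DoubleType()),
  ])

  # Read: subscribe to a Kafka topic of raw safety events.
  raw_stream = (spark.readStream.format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "safety-events")
                .load())

  # Parse the JSON payload and keep only high-severity events.
  events = (raw_stream
            .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
            .select("e.*")
            .filter(F.col("severity") >= 0.8))

  # Write: append to a Parquet sink with checkpointing for fault tolerance.
  (events.writeStream.format("parquet")
         .option("path", "/warehouse/safety/high_severity_events")
         .option("checkpointLocation", "/checkpoints/high_severity_events")
         .outputMode("append")
         .start()
         .awaitTermination())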

For San Francisco, CA-based roles: The base salary range for this role is USD$167,000 per year - USD$185,500 per year.

For Sunnyvale, CA-based roles: The base salary range for this role is USD$167,000 per year - USD$185,500 per year.