---- What the Candidate Will Do ----
- Partner with engineers, analysts, and product managers to define technical solutions that support business goals
- Contribute to the architecture and implementation of distributed data systems and platforms
- Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost
- Serve as a thought leader and mentor in data engineering best practices across the organization
---- Basic Qualifications ----
- 7+ years of hands-on experience in software engineering with a focus on data engineering
- Proficiency in at least one programming language such as Python, Java, or Scala
- Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto)
- Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms
- Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders
---- Preferred Qualifications ----
- Deep understanding of data warehousing concepts and data modeling best practices
- Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto)
- Familiarity with streaming technologies such as Kafka or Samza
- Expertise in performance optimization, query tuning, and resource-efficient data processing
- Strong problem-solving skills and a track record of owning systems from design to production
* Accommodations may be available based on religious beliefs and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .