About the Role:
You will report to an Engineering Manager and work in the San Francisco / Bay Area.

You Will:
- Design and build scalable data pipelines to support personalization models.
- Develop and maintain low-latency, large-scale streaming and batch data processing systems.
- Optimize data workflows for performance and cost efficiency.
- Implement best practices for data governance and security.
- Troubleshoot and resolve data-related issues, with a focus on identifying and solving data quality problems.
You Have:
- 6+ years of experience as a data engineer or in a similar role.
- Proficiency in SQL, Python, or Scala.
- Experience with building batch and streaming data pipelines with high throughput and low latency.
- Strong understanding of data architecture and data modeling principles.
- Experience analyzing large datasets to identify gaps and inconsistencies, provide data insights, and promote effective product solutions.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and their data services.
- Familiarity with ETL tools and data warehousing solutions.
- Experience with distributed data processing technologies such as Apache Spark, Flink, and Kafka.
- Experience working with cross-functional roles like ML engineers and scientists.
- Experience with the AWS data ecosystem, including Redshift, Kinesis, and Glue.
- Understanding of data requirements for ML production systems.
- Extensive experience with mature, large-scale production data systems, and the ability to define a strong North Star and make incremental progress toward it.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Pursuant to the Los Angeles Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.