What you’ll be doing:
Enhance and scale our data ingestion pipelines to ensure a continuous and reliable flow of AV data into the system.
Monitor data pipelines and services to ensure data availability and reliability.
Collaborate with multi-functional teams to improve data processing efficiency, reduce latency, and enhance overall system performance.
Implement validation and data quality checks to ensure the integrity and accuracy of ingested data.
Work closely with the AV development team to understand data requirements and contribute to the enhancement of data-driven solutions.
What We Need to See:
BS, MS, or PhD in Computer Science, Computer Engineering, or relevant field (or equivalent experience).
8+ years of relevant professional experience working with data ingestion pipelines.
Proficiency in Go (Golang), gRPC, and distributed systems, including monitoring and tracing.
Hands-on experience with Temporal workflows.
Strong understanding of computer science principles and data science.
Proven track record of designing resilient, fault-tolerant distributed services that handle petabyte-scale data.
Demonstrated ability to drive cross-team initiatives and improve engineering efficiency through tooling or automation.
You take pride in your work, strive to achieve incredible results, and possess excellent communication and planning skills.
Ways to Stand Out from the crowd:
Hands-on experience with large-scale data ingestion systems, particularly in autonomous vehicle or high-throughput environments.
Strong contributions to open-source projects in Golang, Temporal, or related distributed systems.
Familiarity with data quality frameworks, anomaly detection, or advanced monitoring/observability techniques.
You will also be eligible for equity and benefits.