This role requires the experience and skills to design and build key components and infrastructure for our global data teams (Data Engineering, BI, Data Science). You will design, build, and maintain streaming data pipelines and data lake architectures, bringing hands-on expertise with technologies such as Apache Spark, Kafka, and cloud-based data lake implementations.
As a Data Engineer you will…
- Build infrastructure that empowers our Engineers, Data Scientists, and BI teams to follow data-processing best practices
- Work in a high-volume production environment
- Develop and manage ETL/ELT processes for structured and unstructured data
- Collaborate with colleagues both locally and in remote locations
- Influence the software architecture and working procedures for building data and analytics
- Ensure data quality, integrity, and security within the data pipeline and data lake
- Monitor, troubleshoot, and optimize data workflows to improve performance and reliability
To be a Data Engineer at JFrog you need…
- 4+ years in Data/Backend engineering, with experience designing, developing, and optimizing streaming data pipelines using Apache Spark, Kafka, or similar technologies
- Experience handling data in high-volume, high-availability production systems
- Practical experience with Python for building data pipelines
- Experience with cloud-based data lake architectures (AWS S3, Google Cloud Storage)
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code
- Excellent problem-solving skills and the ability to work in a collaborative environment