Design, implement, and optimize scalable data pipelines for efficient data processing and analysis.
Build and maintain robust data acquisition systems to gather, process, and store data from various sources.
Work closely with DevOps, Data Science, and Product teams to understand their requirements and provide data solutions that meet business objectives.
Proactively monitor data pipelines and production environments to identify and resolve issues promptly.
Implement best practices for data security, integrity, and performance.
Mentor and guide junior team members, sharing expertise and fostering their professional development.
Requirements:
6+ years of experience in data or backend engineering, preferably with strong Python proficiency for data tasks.
Demonstrated experience in designing, developing, and delivering sophisticated data applications.
Ability to thrive under pressure, consistently deliver results, and make strategic prioritization decisions in challenging situations.
Hands-on experience with data pipeline orchestration and data processing tools, particularly Apache Airflow and Spark.
Deep experience with public cloud platforms, preferably GCP, and expertise in cloud-based data storage and processing.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
Advantage:
Familiarity with data science tools and libraries.