Design, develop, and maintain scalable data pipelines and ETL workflows using tools such as Python, dbt, and Airflow.
Architect and optimize our data warehouse to support efficient analytics, reporting, and business intelligence at scale.
Model and structure data from multiple internal and external sources (such as Salesforce, Jira, and Mixpanel) into clean, reliable, and analytics-ready datasets.
Collaborate closely with our systems architect and the analytics and development teams to translate business requirements into robust, efficient technical data solutions.
Monitor and optimize pipeline performance to ensure data completeness and scalability.
Serve as a key partner and subject-matter expert on all data-related topics within the team.
Implement data quality checks, anomaly detection, and validation processes to ensure data reliability.
Requirements:
3+ years of hands-on experience as a Data Engineer or in a similar role.
Expert-level SQL skills, including the ability to perform complex table transformations and design efficient data workflows.
Proficiency in Python for data processing and scripting tasks.
Experience building and maintaining ELT/ETL pipelines using dbt.
Hands-on experience with orchestration tools such as Airflow.
Deep understanding of data warehouse concepts and methodologies, including data modeling.
Self-motivated and able to work autonomously while collaborating effectively with stakeholders to deliver end-to-end solutions.
B.Sc. in Information Systems Engineering, Computer Science, Industrial Engineering, or a related field.