Collaborate with analysts to translate business challenges into clear technical requirements and data-driven solutions.
Develop and optimize scalable data pipelines using SQL, Python, and Pandas to integrate data from multiple sources (a brief illustrative sketch follows this list).
Design and implement data models and workflows that support complex business logic.
Write clean, reusable, efficient, and scalable code while applying design patterns to ensure maintainability and performance.
Ensure data integrity, quality, and security by maintaining rigorous data governance standards.
Utilize AI-driven solutions to enhance data processing and analytical capabilities.
Work with IT, DevOps, and Security teams to enable secure and seamless data infrastructure deployment.
Identify and act on opportunities to improve data processes, driving product and business success.
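As a rough illustration of the pipeline work described above, the sketch below pulls two toy sources together, cleans them, and aggregates the result with Pandas. Every table, column, and value is invented for this example and is not drawn from the actual role.

```python
import pandas as pd

# Hypothetical "extract" step: two in-memory sources standing in for
# warehouse tables or API extracts. All names here are illustrative.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 10, 20, 30],
    "amount": [120.0, 80.0, None, 45.5],
})
customers = pd.DataFrame({
    "customer_id": [10, 20, 30],
    "region": ["EMEA", "AMER", "APAC"],
})

# Transform: drop rows with missing amounts, then enrich orders with
# the customer's region via a left join.
clean = orders.dropna(subset=["amount"])
enriched = clean.merge(customers, on="customer_id", how="left")

# Stand-in for the "load" step: aggregate revenue per region.
revenue_by_region = enriched.groupby("region", as_index=False)["amount"].sum()
print(revenue_by_region)
```

In a production pipeline the extract step would read from real systems (e.g., a warehouse or SaaS API) rather than in-memory frames, but the transform-and-aggregate shape is the same.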
Qualifications:
Bachelor’s degree in Engineering, Computer Science, or a related field.
5+ years of hands-on experience in SQL.
3+ years of hands-on experience with Python/Pandas.
3+ years of experience developing data pipelines and implementing data modeling concepts such as facts, dimensions, and partitions (see the sketch after this list).
Hands-on experience with Snowflake or dbt (required).
Strong problem-solving skills with the ability to work independently, document processes effectively, and communicate complex technical concepts clearly.
Strong English proficiency and the ability to work effectively in a global team.
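For candidates less familiar with the modeling terms above, here is a minimal, hypothetical star-schema sketch in Pandas: fact_sales carries the measures, while dim_date and dim_product are the dimensions it references. All names and figures are invented for illustration.

```python
import pandas as pd

# Toy dimensions: descriptive attributes keyed by a surrogate key.
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "month": ["2024-01", "2024-01"],
})
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "category": ["hardware", "software"],
})

# Toy fact table: measures plus foreign keys into the dimensions.
fact_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "product_key": [1, 2, 2],
    "units": [3, 5, 2],
    "revenue": [300.0, 250.0, 100.0],
})

# Resolve the foreign keys, then roll up by dimension attributes --
# roughly what a month/category report over partitioned data would do.
report = (
    fact_sales
    .merge(dim_date, on="date_key")
    .merge(dim_product, on="product_key")
    .groupby(["month", "category"], as_index=False)[["units", "revenue"]]
    .sum()
)
print(report)
```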
Preferred Skills:
Experience with Snowflake, Rivery, AWS, Salesforce, Databricks, Iceberg, and dbt.
Experience working in an agile methodology.
Experience developing in business domains such as Finance, Sales, R&D, and HR.
Experience and knowledge in cybersecurity data domains is a plus.