- Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions.
- Write efficient, complex SQL queries for data extraction, transformation, and loading.
- Use DBT for data modelling and transformation.
- Use Python for data engineering tasks, drawing on strong hands-on experience in the language.
- Implement scheduling tools such as Airflow or Control-M, or shell scripting, to automate data processes and workflows.
- Participate in an Agile environment, adapting quickly to changing priorities and requirements.
- Minimum of 5 years of experience as a Data Engineer, with extensive expertise in AWS and PySpark.
- Deep knowledge of SQL and experience with data warehouse design and optimization.
- Strong understanding of AWS services and how they integrate with Databricks and other data engineering tools.
- Demonstrated ability to design, build, and maintain end-to-end data pipelines.
- Excellent problem-solving abilities, with a track record of implementing complex data solutions.
- Experience in managing and automating workflows using Apache Airflow.
- Familiarity with Python, Snowflake, and CI/CD processes using GitHub.
- Strong communication skills for effective collaboration with technical teams and stakeholders.