- Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions.
- Write efficient, complex SQL queries for data extraction, transformation, and loading.
- Utilize DBT for data modelling and transformation.
- Apply strong hands-on Python experience to day-to-day data engineering tasks.
- Implement scheduling tools such as Airflow, Control-M, or shell scripting to automate data processes and workflows (a minimal sketch follows this list).
- Participate in an Agile environment, adapting quickly to changing priorities and requirements.
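As a rough illustration of the scheduling work described above, a minimal Airflow DAG might look like the sketch below. It assumes a recent Airflow 2.x release; the DAG name and the `extract_load` step are hypothetical, not part of this role's actual codebase.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_load():
    # Hypothetical placeholder: a real task would pull from a source
    # system and load into the warehouse (e.g. via a Redshift COPY).
    print("extracting and loading")


# One run per day; all names here are illustrative, not prescriptive.
with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    PythonOperator(task_id="extract_load", python_callable=extract_load)
```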
- Proven expertise in AWS, with a strong understanding of its core services; experience with Redshift is optional.
- Experience in data warehousing and a solid grasp of SQL, including the ability to write complex queries.
- Proficiency in Python, with solid hands-on experience in data engineering tasks.
- Familiarity with scheduling tools such as Airflow, Control-M, or shell scripting.
- Excellent communication skills and a willingness to learn.
- Knowledge of DBT for data modelling and transformation is a plus.
- Experience with PySpark or Spark is highly desirable (a brief sketch follows this list).
- Familiarity with DevOps, CI/CD, and Airflow is beneficial.
- Experience in Agile environments is a nice-to-have.
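As a rough sketch of the PySpark experience mentioned above, a typical transformation job might look like the following; the bucket paths and column names are hypothetical, used only for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue").getOrCreate()

# Hypothetical input: one row per order, with a timestamp and an amount.
orders = spark.read.parquet("s3://example-bucket/orders/")

# Aggregate to daily revenue per customer, a common warehouse-feeding step.
daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")
```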