Job Responsibilities
- Design and implement complex, scalable solutions to process data and ensure timely availability for querying.
- Develop reusable frameworks, prioritizing quality and long-term sustainability.
- Thrive in a challenging environment and provide hands-on contributions across various projects.
- Participate in regular code reviews to maintain code quality and adhere to best practices.
Required Qualifications, Capabilities, and Skills
- Formal training or certification in Python programming concepts and 3+ years of applied experience.
- Extensive development experience using SQL.
- Hands-on experience with MPP databases such as Redshift, BigQuery, or Snowflake, and modern transformation/query engines such as Spark, Flink, or Trino.
- Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations.
- Comprehensive understanding of modern data platforms, including data governance and observability.
- Experience with cloud platforms (AWS, GCP, Azure).
- Self-starter capable of delivering production-ready solutions with minimal supervision.
- Strong SDLC practices, treating data engineering as a software engineering discipline.
- Solid theoretical fundamentals in topics such as database internals, distributed systems, and design patterns.
Preferred Qualifications, Capabilities, and Skills
- Data modeling skills.
- Familiarity with Terraform, Kubernetes, and Kafka.
- Experience with web frameworks such as FastAPI or Flask.