WHO YOU’LL WORK WITH
As a Senior Data Engineer, you will play a key role in ensuring that our data products are robust and capable of supporting our Data Engineering and Business Intelligence initiatives.
2+ years of hands-on data engineering experience.
Proficient in SQL, Python, PySpark, and Apache Airflow (or similar workflow management tools).
Hands-on experience with Databricks, Snowflake, and cloud platforms (AWS/GCP/Azure).
Good understanding of Spark, Delta Lake, Medallion architecture, and ETL/ELT processes.
Solid data modeling and data profiling skills.
Familiarity with Agile methodologies (Scrum/Kanban).
Awareness of DevOps practices in data engineering (automated testing, security administration, workflow orchestration).
Exposure to Kafka or real-time data processing.
Strong communication and collaboration skills.
Preferred:
Familiarity with Tableau or similar BI tools.
Exposure to GenAI/ML pipelines.
Nice to have: Databricks certifications (e.g., Data Engineer or Apache Spark Developer).
WHAT YOU’LL WORK ON
Build and maintain ETL/ELT pipelines and reusable data components.
Collaborate with peers and stakeholders to gather data requirements.
Participate in code reviews and contribute to quality improvements.
Monitor and troubleshoot data pipelines for performance and reliability.
Support CI/CD practices in data engineering workflows.