

Key job responsibilities
- Develop and maintain ETL pipelines in Spark to transform and load FBA metrics into the Data Lakehouse.
- Optimize SQL queries for fast, cost-efficient access by AI systems.
- Support dbt semantic models and automate metadata enrichment with Glue.
- Write automated tests to ensure data quality and freshness.
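As a rough illustration of the transform-and-validate pattern behind these responsibilities, the sketch below aggregates hypothetical FBA metric rows and gates them on freshness. The schema, column names, and threshold are illustrative only, and plain Python stands in for Spark for brevity:

```python
from datetime import datetime, timezone

# Hypothetical raw FBA metric rows as they might arrive from an upstream source.
RAW_ROWS = [
    {"sku": "A1", "units_shipped": "10", "event_ts": "2024-05-01T12:00:00+00:00"},
    {"sku": "B2", "units_shipped": "3",  "event_ts": "2024-05-01T13:30:00+00:00"},
    {"sku": "A1", "units_shipped": "7",  "event_ts": "2024-05-02T09:15:00+00:00"},
]

def transform(rows):
    """Cast types and aggregate units shipped per SKU (the 'T' in ETL)."""
    totals = {}
    for row in rows:
        totals[row["sku"]] = totals.get(row["sku"], 0) + int(row["units_shipped"])
    return totals

def check_freshness(rows, max_age_days, now=None):
    """A simple data-quality gate: fail if the newest row is too old."""
    now = now or datetime.now(timezone.utc)
    newest = max(datetime.fromisoformat(r["event_ts"]) for r in rows)
    return (now - newest).days <= max_age_days

totals = transform(RAW_ROWS)
print(totals)  # {'A1': 17, 'B2': 3}
```

In a real pipeline the same shape appears as a Spark job plus an automated test: the aggregation runs at scale, and the freshness check blocks the load step when it fails.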
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
Preferred qualifications
- Proficiency in dbt, Airflow/MWAA, AWS Glue, and Kinesis
- Experience building semantic layers or BI models
- Familiarity with prompt-driven SQL generation and AI-assisted query validation
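One common shape for the AI-assisted query validation mentioned above is to dry-run generated SQL through the engine's planner before executing it. A minimal sketch using `sqlite3` from the Python standard library (the table and queries here are hypothetical, and a production system would validate against the warehouse's own dialect instead):

```python
import sqlite3

def is_valid_sql(conn, sql):
    """Dry-run a query through the planner (EXPLAIN) without executing it,
    so malformed AI-generated SQL is rejected before it touches data."""
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fba_metrics (sku TEXT, units_shipped INTEGER)")

print(is_valid_sql(conn, "SELECT sku, SUM(units_shipped) FROM fba_metrics GROUP BY sku"))  # True
print(is_valid_sql(conn, "SELEKT sku FROM fba_metrics"))  # False
```

The planner rejects both syntax errors and references to missing tables or columns, which catches most malformed generated queries cheaply.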