Job Responsibilities
Required qualifications, capabilities, and skills
- Formal training or certification on software/data engineering concepts and 3+ years of applied experience
- Experience across the data lifecycle
- Experience working with modern lakehouse platforms (e.g., Databricks, AWS Glue)
- Proficient in SQL (e.g., joins and aggregations)
- Experience building microservice-based components using ECS or EKS
- Experience building and optimizing data pipelines, architectures, and data sets (e.g., Glue or Databricks ETL)
- Proficient in object-oriented and functional scripting languages (e.g., Python)
- Experience developing ETL processes and workflows for streaming data from heterogeneous sources (e.g., Kafka)
- Experience building pipelines on AWS using Terraform and deploying through CI/CD pipelines
Preferred qualifications, capabilities, and skills
- Advanced knowledge of data stores such as Aurora and OpenSearch
- Experience with data pipeline and workflow management tools (e.g., Airflow)
- Strong analytical and problem-solving skills, with attention to detail.
- Ability to work independently and collaboratively in a team environment.
- A proactive approach to learning and adapting to new technologies and methodologies.