Required qualifications, capabilities, and skills
- Formal training or certification on Data Engineering concepts and 5+ years of applied experience
- 10+ years of expertise in software engineering, with an emphasis on strong architecture and design principles.
- 10+ years of proven experience designing and implementing distributed, scalable, event-driven services that support large-scale data processing and analytics workloads.
- Advanced expertise in Data Engineering and in delivering end-to-end software solutions.
- 8+ years of hands-on experience in one or more programming languages, such as Python or Java, with a strong understanding of object-oriented programming (OOP) principles.
- Hands-on proficiency with SQL, Spark SQL, and PySpark.
- Demonstrated expertise in designing and using AWS services, microservices, Databricks, and Spark on complex projects.
- Extensive hands-on experience with both relational databases (Oracle, SQL Server, Amazon RDS) and NoSQL databases.
- Expertise in applying CI/CD practices within an Agile SDLC to improve agility and software quality, collaborating effectively across sprint cycles and cross-functional teams on application development.
- Solid background in Computer Science, Computer Engineering, or a related technical field.
- Associate/Developer- or Architect-level certification required in at least one of the following technologies: Databricks, Spark, or AWS.
Preferred qualifications, capabilities, and skills
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical audiences.
- Experience with version control systems (e.g., Git) and with CI/CD pipelines for data engineering workflows.