Responsibilities:
Qualifications:
• 10+ years of progressive experience in the IT field, with a focus on data engineering and analytics.
• 8+ years of hands-on experience working with Hadoop and other big data technologies.
• 8+ years of experience in Python programming, with expertise in automation testing, to design, develop, and automate robust software solutions and testing frameworks.
• 8+ years of experience in designing, developing, and implementing complex data pipelines for data ingestion, transformation, and processing using PySpark and Python.
• 8+ years of experience in designing, developing, and implementing complex data pipelines for data ingestion, transformation, and processing using SAS and other ETL tools such as Ab Initio or DataStage.
• 8+ years of experience in designing, developing, and implementing scheduling for complex data pipelines.
• Experience in modernizing legacy pipelines by converting them to big data technologies, ensuring improved scalability and performance.
• 5+ years of experience leading large-scale data and analytics projects, with demonstrated ability to manage all aspects of the project lifecycle, from initiation to delivery.
• Familiarity with CI/CD practices and automation tools to streamline development, test automation, and deployment processes.
• Proven experience in team leadership, including mentoring and guiding technical teams to achieve project goals.
• Industry experience within banking, capital markets, or the broader financial services sector is strongly preferred.
• Hands-on experience with enterprise-scale, multi-region projects, showcasing the ability to work in complex, distributed environments.
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Time Type:
Full time