Manage integrations for data ingestion from multiple source systems via APIs, queries, Apache Spark and Airflow workflows, and Kafka streaming.
Support data transformations and aggregations in a Cloud Object Storage (COS) integration zone, and subsequent data feeds to a DB2 warehouse.
Assist in triaging and resolving incoming support tickets for the application, providing feedback and resolutions as needed.
Contribute to the innovation and development of new ETL and data transfer processes using Spark/Airflow, APIs, SQL, and related tools, while protecting data quality and integrity during movement.
Perform comprehensive testing of individual components as well as the end-to-end solution.
Required Technical and Professional Expertise
Bachelor’s degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
5+ years of experience managing data warehouses and integration solutions
API integration experience
Apache Spark and Airflow experience
SQL development, relational database table structure, and database design
Expertise in working with structured and unstructured data
Excellent communication skills (written and verbal). Able to clearly communicate with leadership and colleagues about technical capabilities, limitations, issues, and recommendations.
Highly organized, detail-oriented, independent, and resourceful.
Able to manage complex technical projects with diverse global stakeholders and detailed, interdependent requirements
Experience with Agile practices and associated tools, including Jira