Bachelor’s degree/Master’s degree/University degree or equivalent experience
Must Have:
12+ years of application/software development and maintenance experience.
8+ years of experience with Big Data technologies such as Apache Spark, Hive, and Hadoop is a must.
Must have worked on Ab Initio or similar ETL technologies such as DataStage, Talend, etc.
Knowledge of the Scala, Java, and Python programming languages; experience in any two is mandatory.
Experience with Java (Core Java, J2EE, Spring Boot RESTful services), web services (REST, SOAP), XML, JavaScript, microservices, SOA, etc.
Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem.
Experience developing frameworks and utility services, including logging/monitoring.
Experience delivering high-quality software following continuous delivery and using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.).
Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark.
Profound knowledge of implementing different data storage solutions, such as RDBMS (Oracle), Hive, and Impala, as well as NoSQL databases like MongoDB, HBase, and Cassandra.
Ability to work independently, multi-task, and take ownership of various analyses or reviews.
Must be results-oriented, willing, and able to take ownership of engagements.
Banking domain experience is a must.
Strong analytical and communication skills.
Good to Have:
Work experience at Citi or with Regulatory Reporting applications.
Hands-on experience with cloud technologies, especially around data engineering.
Hands-on experience with AI/ML integration and building data pipelines.
Experience with vendor products such as Tableau, Arcadia, Paxata, and KNIME is a plus.