Develop real-time and batch data transformation processes using a wide range of technologies, including Hadoop, Spark Streaming, Spark SQL, Python, and Hive
Translate architecture and low-level requirements into design and code using big data tools and processes
Utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements
Conduct feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development; establish and implement new or revised application systems and programs to meet specific business or user-area needs
Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
Critically evaluate current processing and recommend process efficiencies and enhancements
Work closely with technology partners to ensure business requirements are met by the development team
Qualifications:
5-8 years of development experience in the big data space
Desired Skills: Core Java, full-stack development, and big data frameworks such as Hadoop, Scala, Hive, Impala, Kafka, and Elastic, with a focus on data analysis
Good to Have: Python, ServiceNow, and JIRA/Confluence experience
Intermediate Java developer with experience in Java/J2EE, Hadoop, Scala, Hive, Impala, Kafka, and Elastic, able to resolve data concerns and implement data remediation requirements
Strong computer science fundamentals in data structures, algorithms, databases, and operating systems
Experience in developing high-performance multi-threaded applications
Good knowledge of design patterns and the ability to identify and fix code issues
Experience with source code management tools such as Bitbucket
Education:
Bachelor’s degree/University degree or equivalent experience