Expoint – all jobs in one place

Bank of America Software Engineer III - GBS
India, Tamil Nadu, Chennai 
Job ID: 743297105


Responsibilities:

  • Develop high-performance and scalable Analytics solutions using the Big Data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels.
  • Develop efficient utilities, data pipelines, and ingestion frameworks that can be reused across multiple business areas.
  • Utilize your in-depth knowledge of the Hadoop stack and storage technologies, including HDFS, Spark, MapReduce, YARN, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows.
  • Perform data analysis, coding, and performance tuning; propose improvement ideas and drive development activities at offshore.
  • Analyze, modify, and tune complex Hive queries.
  • Write and modify Python and shell scripts hands-on.
  • Provide guidance and mentorship to junior teammates.
  • Work with strategic partners to understand requirements, and produce high-level and detailed designs that address real-time issues in production.
  • Partner with nearshore and offshore teammates in an Agile environment, coordinating with other application teams, development, testing, and upstream/downstream partners.
  • Work on multiple projects concurrently, taking ownership of and pride in the work delivered: attend project meetings, understand requirements, design solutions, and develop code.
  • Identify gaps in technology and propose viable solutions.
  • Identify improvement areas within the application and work with the respective teams to implement the same.
  • Ensure adherence to defined processes, quality standards, and best practices, maintaining high quality in all deliverables.

Desired Skills*

  • Data Lake Architecture: Understanding of the Medallion architecture
  • Ingestion Frameworks: Knowledge of frameworks for ingesting structured, unstructured, and semi-structured data
  • Data Warehouse: Familiarity with Apache Hive and Impala
  • Perform Continuous Integration and Continuous Delivery (CI/CD) activities.
  • Hands-on experience working with the Cloudera Data Platform (CDP) to support Data Science workloads
  • Contribute to story refinement and the definition of requirements.
  • Participate in estimating the work necessary to realize a story/requirement through the delivery lifecycle.
  • Extensive hands-on experience supporting platforms that allow modellers and analysts to go through the complete model lifecycle (data munging, model development/training, governance, deployment)
  • Experience with model deployment, scoring, and monitoring, for both batch and real-time workloads, across various technologies and platforms.
  • Experience with Hadoop clusters and integration, including ETL, streaming, and API styles of integration.
  • Experience automating deployments using Ansible playbooks and scripting.
  • Experience designing and building RESTful API services in an efficient and scalable manner.
  • Design, build, and deploy streaming and batch data pipelines capable of processing and storing large (TB-scale) datasets quickly and reliably using Kafka, Spark, and YARN.
  • Experience with processing and deployment technologies such as YARN, Kubernetes/containers, and serverless compute for model development and training.
  • Effective communication and strong stakeholder engagement skills; proven ability to lead and mentor a team of software engineers in a dynamic environment.

Education*

  • Graduation / Post Graduation

Experience Range*

  • 7 to 12 years

Foundational Skills

  • Hadoop, Hive, Sqoop, Impala, Unix/Linux scripts.

Desired Skills

  • Python, CI/CD, ETL.

Work Timings

  • 11:30 AM to 8:30 PM IST

Job Location

  • Chennai / Hyderabad