

Process Overview*
Document, Content and Records Services provides the technology platform for managing the digital document lifecycle for bank documents. This includes composing a document, capturing documents, providing a storage repository, and the ability to search, retrieve, and view documents. The capability also includes extraction of data from documents using machine learning/OCR and other advanced technologies.
Job Description*
Will be involved in designing, developing, and maintaining imaging applications for the Document, Content and Records Services group. The person should be able to independently analyze and gather requirements, estimate work, plan schedules, document system design specifications, coordinate with multiple teams, attend technical meetings to satisfy both functional and performance requirements, and provide deployment build and release documentation. Should be able to multitask across more than one project at a time and be familiar with the entire SDLC of a project. The role also involves troubleshooting urgent production issues during deployments and major system downtime, working closely with the production support team to understand issues arising in production, and building automation to improve the stability, availability, and resiliency of the production platform. The role involves analyzing Splunk and Dynatrace dashboards to identify problems and provide solutions.
Responsibilities*
Education*
Experience Range*
Foundational Skills*
Desired Skills*
Work Timings*
Job Location*

This job is responsible for defining and leading the engineering approach for complex features to deliver significant business outcomes. Key responsibilities of the job include delivering complex features and technology, enabling development efficiencies, providing technical thought leadership based on conducting multiple software implementations, and applying both depth and breadth in a number of technical competencies. Additionally, this job is accountable for end-to-end solution design and delivery.
Responsibilities:
Skills:

Process Overview*
Responsibilities*
Key responsibilities include:
Education*
Experience Range*
Foundational skills*
Desired skills*
Work Timings*
11:30 AM to 8:30 PM IST
Job Location*
Chennai

Job Description*
An expert who envisions emerging technology trends, follows research, and keeps track of new software and tools that can help build the Data Science platform.
Responsibilities*
Actively contribute to hands-on coding, building core components, APIs and microservices while ensuring high code quality, maintainability, and performance.
Ensure adherence to engineering excellence standards and compliance with key organizational metrics such as code quality, test coverage and defect rates.
Integrate secure development practices, including data encryption, secure authentication, and vulnerability management into the application lifecycle.
Work on adopting and aligning development practices with CI/CD best practices to enable efficient build and deployment of the application on target platforms such as VMs and/or container orchestration platforms like Kubernetes and OpenShift.
Collaborate with stakeholders to align technical solutions with business requirements, driving informed decision-making and effective communication across teams.
Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams.
Propose and build a variety of efficient data pipelines to support ML model building and deployment.
Propose and build automated deployment pipelines to enable a self-service continuous deployment process for the Data Science teams.
Analyze, understand, execute, and resolve issues in user scripts, models, and code.
Perform release and upgrade activities as required.
Well versed in open-source technologies and aware of emerging third-party technologies and tools in the AI/ML space.
Ability to firefight, propose fixes, and guide the team through day-to-day issues in production.
Ability to train partner Data Science teams on frameworks and the platform.
Flexible with time and shifts to support project requirements; this does not include any night shift.
This position doesn’t include any L1 or L2 (first line of support) responsibility.
Education*
Graduation / Post Graduation: BE/B.Tech/MCA/MTech.
Certifications If Any: Generative AI, Data Science & NLP.
Experience Range*
2 to 4 years.
Foundational Skills*
Experience with Data Science, Artificial Intelligence, and Machine Learning tools and technologies (Python, R, H2O, Spark, SparkML).
Strong knowledge of cloud platform technologies; experience with at least one major cloud platform such as AWS, Azure, or GCP is a plus.
Desired Skills*
Experience building and supporting end-to-end Data Science (AI/ML) and Advanced Analytics platforms for model development, building, and deployment.
Extensive hands-on experience supporting platforms that allow modellers and analysts to go through the complete model lifecycle (data munging, model development/training, governance, deployment).
Experience with model deployment, scoring and monitoring for batch and real-time on various technologies and platforms.
Experience in deployment automation using Ansible playbooks and scripting.
Design, build, and deploy streaming and batch data pipelines capable of processing and storing large volumes of data (terabytes) quickly and reliably using Kafka, Spark, and YARN (a sketch of such a pipeline appears after this list).
Experience designing and building full stack solutions utilizing distributed computing or multi-node architecture for large datasets (terabytes to petabyte scale)
Hands-on experience working on a cloud platform (AWS/Azure/GCP) to support Data Science teams.
Effective communication, strong stakeholder engagement skills, and a proven ability to lead and mentor a team of software engineers in a dynamic environment.
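For illustration of the kind of Kafka/Spark pipeline referenced above, the sketch below is a minimal example (not the team's actual pipeline) that reads events from a Kafka topic with Spark Structured Streaming and lands them as Parquet. The broker address, topic name, schema, and paths are hypothetical, and it assumes the spark-sql-kafka connector is available to the Spark job.

# Minimal sketch of a Kafka -> Spark Structured Streaming -> Parquet pipeline.
# Assumptions: the spark-sql-kafka-0-10 connector is on the classpath, and the
# broker, topic, schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (
    SparkSession.builder
    .appName("example-streaming-pipeline")   # hypothetical application name
    .getOrCreate()
)

# Assumed event schema; a real pipeline would derive this from the source system.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", StringType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "transactions")                # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Persist the parsed stream as Parquet for downstream model training.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/landing/transactions")        # hypothetical path
    .option("checkpointLocation", "/data/checkpoints/transactions")
    .start()
)
query.awaitTermination()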
Work Timings*
11:30 AM to 8:30 PM IST
Job Location*
Chennai & GIFT.

Job Description
Responsibilities
Requirements
Education: B.E./B.Tech/M.E./M.Tech/B.Sc./M.Sc./BCA/MCA (IT/CS specialization preferred)
Certifications If Any: NA
Experience Range: 9+ Years
Foundational skills
Desired Skills
Work Timings
12:30 PM to 9:30 PM IST
Hyderabad, Chennai, Mumbai

Job Description*
We are seeking a mid-level Data Science and Engineering professional with 4+ years of relevant experience, particularly in Hadoop and Unix environments. The candidate will work on the CSWT Data Science Platform and SDP, supporting the development of a unified AI/user interface for consumer applications, fraud detection, and AML tracking. Responsibilities include working on the Hadoop ecosystem, ensuring system stability and performance, and supervising daily batch processes. Strong hands-on experience with big data technologies such as Hive, Spark, and Impala, and handling large datasets is essential. The role involves development tasks assigned by CIO leads, generating business reports, providing required data, and supporting ad hoc requests. The ideal candidate should be passionate about delivering high-quality software, solving complex problems, and continuously driving process improvements through innovative ideas.
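As a rough illustration of the batch reporting work described above, the sketch below reads a Hive table with PySpark and produces a per-account daily summary. It is a minimal, hedged example rather than the platform's actual job; the database, table, column, and output path names are hypothetical.

# Minimal sketch of a Hive/Spark batch report; the database, table, columns,
# and output path are hypothetical placeholders, not the platform's real objects.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-transaction-report")   # hypothetical job name
    .enableHiveSupport()                   # read managed Hive tables via the metastore
    .getOrCreate()
)

# Aggregate one day's transactions per account from a hypothetical Hive table.
report = (
    spark.table("ods.transactions")                # hypothetical database.table
    .where(F.col("txn_date") == "2024-01-01")      # partition filter for the batch run
    .groupBy("account_id")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write the summary to HDFS for downstream report consumers.
report.write.mode("overwrite").parquet("/reports/daily/2024-01-01")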
Responsibilities*
Education*
Experience Range*
Foundational Skills*
Desired Skills*
Work Timings*
Job Location*

Process Overview*
The EET Cognitive Linguist will be responsible for building and maintaining machine learning models that identify end-user intent for a multi-channel Virtual Assistant.
Job Description*
The requirement is for a CIE for Cognitive and Automated Solutions, who will be an integral part of an agile setup that is constantly pushing the envelope to enhance, build, and deliver a top-notch platform.
Responsibilities*
Design and development of conversational solutions using various no-code development tools like Amelia.
Architect and implement scalable, efficient, and secure software solutions.
Collaborate with other functions within the bank to validate feasibility, obtain technical requirements, and ensure the process is optimized during the development cycle.
Responsible for the implementation of processes into the Virtual Assistant using the BPMN standard.
Build and maintain a script library that can be used across multiple domains.
Collaborate with cross-functional teams to identify and prioritize project requirements.
Develop and maintain technical documentation and architecture diagrams.
Participate in code reviews and ensure adherence to coding standards and best practices.
Collaborate with various stakeholders and teams to ensure smooth deployment and monitoring of applications.
Work with the wider team to ensure essential security standards and best practices for data protection are in place, focusing on the key requirements from the Public Services Network.
Ensure all relevant software documentation is kept up to date and reflects current and future requirements for the organization.
Demonstrate available capabilities and solutions to business areas.
Demonstrate reports, dashboards, and analytic services.
Collaborate with data scientists, UX researchers, and engineers to build out the “brain” of a virtual assistant.
Responsible for developing new conversational AI experiences and integrations into other tools which automate support.
Education*
Graduation / Post Graduation: BE/B.Tech/MCA
Certifications If Any: NA
Experience Range*
10+ Years
Foundational Skills*
10+ years of experience in software development, with good exposure to developing NLP models and AI/ML concepts.
Good to have - chatbot development experience.
10+ years of overall and 3+ years of hands-on relevant experience in data engineering with a focus on AI/ML-driven solutions
Experience in LangChain, LangGraph, FastAPI, LLM models, and MLOps.
Write code using Python/JavaScript, develop algorithms, identify and build appropriate datasets, train machine learning models, and deploy them to production.
Ability to shift and pivot with changing responsibilities
Exceptional organizational and analytical skills with high attention to detail
Possess a personal sense of urgency and the ability to handle a fast-paced environment
Strong knowledge and working experience on Natural Language Processing technologies and frameworks.
Experience with Python, Groovy etc.
Experience training machine learning algorithms for data classification and/or speech recognition.
Experience improving intent recognition of a data classification model (a minimal example appears after this list).
Understanding of various web services like REST and/or SOAP.
Familiarity with version control tools such as Git or SVN, and with issue-tracking tools such as JIRA.
Experience in DevOps and Agile methodology.
Strong analytical and troubleshooting skills.
Ability to build and maintain solid validation sets to test against.
Develop tools and telemetry that can measure/monitor accuracy and performance.
Stay up to date with industry trends and emerging technologies, applying this knowledge to improve our software solutions.
Strong understanding of software architecture principles and patterns
Strong problem-solving skills and ability to work independently.
Excellent communication and leadership skills
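As referenced in the intent-recognition item above, the sketch below shows one minimal way such a classifier could be prototyped: TF-IDF features with logistic regression in scikit-learn. It is an illustration only, not the team's actual model; the utterances, intent labels, and the choice of scikit-learn are assumptions.

# Minimal intent-classification sketch; the utterances, intent labels, and use
# of scikit-learn are illustrative assumptions, not the production model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data: (utterance, intent) pairs.
utterances = [
    "what is my account balance",
    "show me my balance",
    "i lost my debit card",
    "report a stolen card",
]
intents = ["check_balance", "check_balance", "report_lost_card", "report_lost_card"]

# The pipeline keeps the vectorizer and classifier together so the same
# preprocessing is applied at training time and at prediction time.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(utterances, intents)

print(model.predict(["my card was stolen"]))   # expected: ['report_lost_card']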
Desired Skills*
Hands-on expertise designing and implementing AI conversational experiences via chatbots or virtual assistants, at enterprise scale
Handling voice chats
Strong experience with conversational agents like Amelia, Rasa, Rasa Pro
A unique skill set combining computational linguistics and technical experience.
Experience with integration frameworks like Apache CAMEL or similar technologies such as MuleSoft
Exposure to various processes for taking the model through MRM lifecycle and approval.
Work Timings*
11:30 AM to 8:30 PM IST
