What you’ll be doing:
Architect, implement, and optimize GPU-accelerated, scalable Retrieval-Augmented Generation (RAG) workflows. Build a scalable, microservice-based architecture deployable in multi-node, multi-cloud environments
Design, implement, and test domain-specific agents and workflows, and a framework that supports multi-turn, multi-modal, multi-user conversations with LLM-driven agents
Develop knowledge discovery and reasoning capabilities for dialogue systems, including but not limited to disambiguation, clarification, and anticipation
Analyze the end-to-end accuracy and limitations of RAG and conversational AI agents, and recommend next steps and improvements
Characterize performance and quality metrics across platforms for various AI and system components
Collaborate with various teams on new product features and improvements to existing products. Customize and integrate the conversational AI framework with other NVIDIA products
Participate in developing and reviewing code, design documents, use-case reviews, and test plan reviews, and help innovate, identify problems, recommend solutions, and perform triage in a collaborative team environment
What we need to see:
Bachelor's degree or Master’s degree (or equivalent experience) in Computer Science, Electrical Engineering, Artificial Intelligence, or Applied Math
5+ years of experience
Excellent programming skills in Python
Hands-on experience working with Retrieval-Augmented Generation (RAG)-based applications
Know-how of large language model (LLM) applications, agentic workflows, and LLM guardrails
Understanding of scalable deployment of LLM-driven RAG and agent applications in production environments
Familiarity with microservices, Docker, Helm, Kubernetes, etc.
Experience working across the end-to-end software lifecycle, release packaging, and CI/CD pipelines
Hands-on experience with conversational AI technologies such as large language models (LLMs), LLM function calling, information retrieval, vector databases, embedding and rerank models, autonomous agents, etc.
General background with version control and code review tools such as Git, Gerrit, and GitLab
Strong collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment
Ways to stand out from the crowd:
Strong fundamentals in programming, optimization, and software design
Experience working with open-source frameworks such as LangChain and LlamaIndex for building LLM-driven applications
Strong knowledge of ML/DL techniques, algorithms, and tools, with exposure to language models
Familiarity with GPU-based technologies such as CUDA, cuDNN, and TensorRT
Background in deploying machine learning models on data center, cloud, and embedded systems