Expoint – all jobs in one place
Finding the best job has never been easier

Jobs at Nvidia in India, Mumbai

Join leading companies like Nvidia in Mumbai, India, with Expoint. Explore job opportunities in the high-tech industry and take your career to the next level. Sign up now and experience the power of Expoint.
4 jobs found
07.09.2025

Nvidia Senior Solutions Architect, Generative AI (India, Maharashtra, Mumbai)

Description:
Location: India, Mumbai
Time type: Full time
Posted: 6 days ago

What you will be doing:

  • Architect end-to-end generative AI solutions with a focus on LLMs and RAG workflows.

  • Collaborate closely with customers to understand their language-related business challenges and design tailored solutions.

  • Collaborate with sales and business development teams to support pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.

  • Work closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI technologies.

  • Engage directly with customers to understand their language-related requirements and challenges.

  • Lead workshops and design sessions to define and refine generative AI solutions focused on LLMs and RAG workflows, and lead the training and optimization of Large Language Models using NVIDIA’s hardware and software platforms.

  • Implement strategies for efficient and effective training of LLMs to achieve optimal performance.

  • Design and implement RAG-based workflows to enhance content generation and information retrieval (a minimal sketch of this retrieve-then-generate pattern follows this list).

  • Work closely with customers to integrate RAG workflows into their applications and systems, and stay abreast of the latest developments in language models and generative AI technologies.

  • Provide technical leadership and guidance on best practices for training LLMs and implementing RAG-based solutions.

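For orientation, the sketch below illustrates the retrieve-then-generate pattern behind the RAG workflows mentioned above. It is a minimal, self-contained toy: the document set, the bag-of-words "embedding", and the stubbed generate() call are placeholders invented for this example, not part of the role or of any NVIDIA product; a production system would use a real vector store, embedding model, and LLM endpoint.

```python
# Toy RAG workflow: retrieve the most relevant documents, then condition
# generation on them. Everything below is illustrative; generate() is a stub
# standing in for a real LLM inference endpoint.

from collections import Counter
import math

DOCUMENTS = [
    "NVIDIA GPUs accelerate training and inference of large language models.",
    "Retrieval-augmented generation grounds model output in retrieved documents.",
    "Kubernetes can orchestrate containerized inference services at scale.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a hosted inference endpoint)."""
    return f"[LLM response conditioned on a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    """Build a context-grounded prompt from retrieved documents and generate."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does retrieval-augmented generation improve answers?"))
```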

What we need to see:

  • Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.

  • 5+ years of hands-on experience in a technical role, specifically focusing on generative AI, with a strong emphasis on training Large Language Models (LLMs).

  • Proven track record of successfully deploying and optimizing LLM models for inference in production environments.

  • In-depth understanding of state-of-the-art language models, including but not limited to GPT-3, BERT, or similar architectures.

  • Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers (see the fine-tuning sketch after this list).

  • Proficiency in model deployment and optimization techniques for efficient inference on various hardware platforms, with a focus on GPUs.

  • Strong knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.

  • Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.

  • Experience leading workshops, training sessions, and presenting technical solutions to diverse audiences.

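To make the fine-tuning requirement concrete, here is a hedged sketch of supervised fine-tuning of a small causal language model with the Hugging Face Trainer API. The gpt2 model, the wikitext dataset, and the single-GPU hyperparameters are stand-ins chosen only for the example; a real engagement would target the customer's model, data, and a multi-GPU or multi-node configuration.

```python
# Sketch: fine-tune a small causal LM with the Hugging Face Trainer API.
# Model, dataset, and hyperparameters are illustrative placeholders.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small stand-in for a larger LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny slice of a public dataset, filtered to non-empty lines.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda row: row["text"].strip() != "")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="finetune-demo",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-5,
    logging_steps=50,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```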

Ways to stand out from the crowd:

  • Experience in deploying LLM models in cloud environments (e.g., AWS, Azure, GCP) and on-premises infrastructure.

  • Proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.

  • Familiarity with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for scalable and efficient model deployment (see the serving sketch after this list).

  • Deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.

  • Hands-on experience with NVIDIA GPU technologies and GPU cluster management, and the ability to design and implement scalable, efficient workflows for LLM training and inference on GPU clusters.

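As a companion to the deployment bullets above, the sketch below shows a minimal text-generation service of the kind that would typically be packaged into a container image and scaled out with an orchestrator such as Kubernetes. FastAPI, the gpt2 model, and the /generate endpoint shape are assumptions made for the example, not requirements from the posting.

```python
# Minimal inference service suitable for containerized deployment.
# Save as service.py (an assumed filename) and run with:
#   uvicorn service:app --host 0.0.0.0 --port 8000

from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # small stand-in model

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # The pipeline returns a list of dicts with a "generated_text" field.
    out = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}
```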
27.07.2025

Nvidia Senior Solutions Architect, Generative AI (India, Maharashtra, Mumbai)

Description:
Location: India, Mumbai
Time type: Full time
Posted: 3 days ago

What you will be doing:

  • Architect end-to-end generative AI solutions with a focus on LLMs and RAG workflows.

  • Collaborate closely with customers to understand their language-related business challenges and design tailored solutions.

  • Collaborate with sales and business development teams to support pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.

  • Work closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI technologies.

  • Engage directly with customers to understand their language-related requirements and challenges.

  • Lead workshops and design sessions to define and refine generative AI solutions focused on LLMs and RAG workflows, and lead the training and optimization of Large Language Models using NVIDIA’s hardware and software platforms.

  • Implement strategies for efficient and effective training of LLMs to achieve optimal performance.

  • Design and implement RAG-based workflows to enhance content generation and information retrieval.

  • Work closely with customers to integrate RAG workflows into their applications and systems, and stay abreast of the latest developments in language models and generative AI technologies.

  • Provide technical leadership and guidance on best practices for training LLMs and implementing RAG-based solutions.


What we need to see:

  • Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.

  • 5+ years of hands-on experience in a technical role, specifically focusing on generative AI, with a strong emphasis on training Large Language Models (LLMs).

  • Proven track record of successfully deploying and optimizing LLM models for inference in production environments.

  • In-depth understanding of state-of-the-art language models, including but not limited to GPT-3, BERT, or similar architectures.

  • Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

  • Proficiency in model deployment and optimization techniques for efficient inference on various hardware platforms, with a focus on GPUs.

  • Strong knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.

  • Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.

  • Experience leading workshops, training sessions, and presenting technical solutions to diverse audiences.


Ways to stand out from the crowd:

  • Experience in deploying LLM models in cloud environments (e.g., AWS, Azure, GCP) and on-premises infrastructure.

  • Proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.

  • Familiarity with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for scalable and efficient model deployment.

  • Deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.

  • Hands-on experience with NVIDIA GPU technologies and GPU cluster management, and the ability to design and implement scalable, efficient workflows for LLM training and inference on GPU clusters.


14.07.2025

Nvidia Senior Instructor Training Certification Specialist (India, Maharashtra, Mumbai)

Description:
Locations: India (Mumbai, Bengaluru, Hyderabad, Pune)
Time type: Full time
Posted: 6 days ago

What you'll be doing:

  • Evaluating and certifying instructors to deliver DLI training content.

  • Collaborating with local governments and universities in India to support the adoption of NVIDIA technologies into their AI curricula.

  • Supporting training advisors by educating customers on GPU-accelerated AI solutions and providing recommendations for content aligned with their needs and goals.

  • Contributing to course content via activities such as adding instructor notes, creating assessment guides, sharing instructor feedback with the content development team, and training others on new content.

  • Conducting Train-the-Trainer workshops and promoting instructor achievements to build our community of Certified Instructors.

  • Traveling up to 30%.

What we need to see:

  • 5+ years’ experience delivering technical training both online and in-person

  • Professional experience working with government entities and universities

  • Experience in at least one of the following areas: Predictive AI, Generative AI, LLMs, or Omniverse

  • Experience building or using AI applications

  • BS degree in CSE, CS, or EE

  • Customer-facing skills and background

  • Python or C / C++ programming experience

  • Excellent oral / written English skills

Ways to stand out from the crowd:

  • Demonstrate to us effective presentation skills while training developers!

  • Tell us about AI projects you have worked on and / or AI tools you have used.

  • Highlight your work with government entities and universities.

  • Share with us your GPU-based parallel programming expertise!


13.04.2025

Nvidia Senior Solutions Architect, Generative AI (India, Maharashtra, Mumbai)

Description:
Location: India, Mumbai
Time type: Full time
Posted: 14 days ago

What you will be doing:

  • Architect end-to-end generative AI solutions with a focus on LLMs and RAG workflows.

  • Collaborate closely with customers to understand their language-related business challenges and design tailored solutions.

  • Collaborate with sales and business development teams to support pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.

  • Work closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI technologies.

  • Engage directly with customers to understand their language-related requirements and challenges.

  • Lead workshops and design sessions to define and refine generative AI solutions focused on LLMs and RAG workflows, and lead the training and optimization of Large Language Models using NVIDIA’s hardware and software platforms.

  • Implement strategies for efficient and effective training of LLMs to achieve optimal performance.

  • Design and implement RAG-based workflows to enhance content generation and information retrieval.

  • Work closely with customers to integrate RAG workflows into their applications and systems, and stay abreast of the latest developments in language models and generative AI technologies.

  • Provide technical leadership and guidance on best practices for training LLMs and implementing RAG-based solutions.


What we need to see:

  • Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.

  • 5+ years of hands-on experience in a technical role, specifically focusing on generative AI, with a strong emphasis on training Large Language Models (LLMs).

  • Proven track record of successfully deploying and optimizing LLM models for inference in production environments.

  • In-depth understanding of state-of-the-art language models, including but not limited to GPT-3, BERT, or similar architectures.

  • Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

  • Proficiency in model deployment and optimization techniques for efficient inference on various hardware platforms, with a focus on GPUs.

  • Strong knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.

  • Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.

  • Experience leading workshops, training sessions, and presenting technical solutions to diverse audiences.


Ways to stand out from the crowd:

  • Experience in deploying LLM models in cloud environments (e.g., AWS, Azure, GCP) and on-premises infrastructure.

  • Proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.

  • Familiarity with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for scalable and efficient model deployment.

  • Deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.

  • Hands-on experience with NVIDIA GPU technologies and GPU cluster management, and the ability to design and implement scalable, efficient workflows for LLM training and inference on GPU clusters.

Unlock new career opportunities in the high-tech industry with Expoint. Our platform offers a comprehensive search for jobs at Nvidia in Mumbai, India. Find the best opportunities in your desired area, connect with leading organizations, and start your high-tech journey. Sign up today and discover your dream career with Expoint.