
Dell Machine Learning Engineer Senior Advisor - AI & ML
India, Karnataka, Bengaluru 
464085805

15.08.2024


What you’ll achieve

As an AI/ML Engineer specializing in LLMOps, you will be integral to deploying sophisticated General/Generative AI solutions, ensuring their operational efficiency, scalability, and responsible use. You will engage in innovative projects that leverage massive datasets and state-of-the-art language models to drive decision-making and operational efficiency across global platforms.


You will:

  • Architect and scale machine learning and large language models for efficient deployment across various platforms, implementing LLMOps best practices.
  • Build and optimize data pipelines to operationalize ML and LLM models at scale, including advanced prompt engineering techniques and LLM guardrails.
  • Develop and deploy LLM-based multi-agent systems, ensuring scalability and efficient communication between agents.
  • Work collaboratively with data scientists to refine algorithms and models based on performance metrics and implement human feedback loops and RLHF processes.
  • Develop APIs, SDKs, and LLM chains/pipelines to enable seamless interaction with deployed models, incorporating ethical AI principles.
  • Implement Docker containers, orchestrate load balancing, and manage LLM-specific infrastructure to optimize resource allocation; utilize vector databases for efficient data handling, retrieval, and context management in LLM/RAG applications (a minimal retrieval sketch follows this list).
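
To make the retrieval responsibility above concrete, here is a minimal, illustrative Python sketch of context retrieval for a RAG application. It uses a toy in-memory index in place of a production vector database; every name in it (TinyVectorStore, embed_fn, build_rag_prompt) is hypothetical and not part of any specific stack used in this role.

    import numpy as np

    class TinyVectorStore:
        """Toy in-memory stand-in for a vector database (illustrative only)."""

        def __init__(self, embed_fn):
            self.embed_fn = embed_fn      # any callable mapping str -> 1-D np.ndarray
            self.vectors = []             # stored chunk embeddings
            self.texts = []               # original text chunks

        def add(self, chunks):
            for chunk in chunks:
                self.vectors.append(np.asarray(self.embed_fn(chunk), dtype=float))
                self.texts.append(chunk)

        def search(self, query, k=3):
            """Return the k chunks most similar to the query (cosine similarity)."""
            q = np.asarray(self.embed_fn(query), dtype=float)
            sims = [
                float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
                for v in self.vectors
            ]
            top = np.argsort(sims)[::-1][:k]
            return [self.texts[i] for i in top]

    def build_rag_prompt(question, store, k=3):
        """Retrieve context and assemble a grounded prompt for the LLM call."""
        context = "\n---\n".join(store.search(question, k=k))
        return (
            "Answer using only the context below. If the answer is not there, "
            "say you do not know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )

In a production deployment the in-memory lists would be replaced by a managed vector database and a real embedding model, but the retrieve-then-prompt flow stays the same.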

Essential Requirements

  • Master's or Bachelor's degree with 8+ years of relevant experience and mastery of data science platforms (such as Microsoft Azure, AWS, or Google Cloud) for building and deploying ML and LLM models, with expertise in LLMOps best practices.
  • Proficient in object-oriented programming languages and LLM-specific frameworks such as LangChain, LangGraph, and LlamaIndex, with experience in prompt engineering.
  • Significant software engineering experience with a focus on ML/LLM model production, scalability in low-latency environments, and responsible AI implementation.
  • Advanced understanding of Docker, Kubernetes, cloud-native computing, DevOps, data/LLM response streaming, and parallelized workloads for ML and LLM deployments.
  • Knowledge of vector databases, LLM fine-tuning techniques, and implementation of LLM guardrails, along with experience in LLM operations, including Retrieval-Augmented Generation, chatbot, and multi-agent system deployments (CrewAI, AutoGen, LangGraph); a minimal guardrail sketch follows this list.
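
As an illustration of the guardrail requirement above, the following Python sketch wraps an arbitrary LLM call with simple input and output checks. The patterns and function names (apply_input_guardrails, guarded_call) are hypothetical examples and do not refer to any particular guardrail framework.

    import re

    # Illustrative patterns only; real guardrail policies are far richer.
    BLOCKED_INPUT_PATTERNS = [
        re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),  # naive prompt-injection check
        re.compile(r"\b\d{16}\b"),                                          # crude card-number-like check
    ]

    def apply_input_guardrails(user_message: str) -> str:
        """Reject risky inputs before they ever reach the model."""
        for pattern in BLOCKED_INPUT_PATTERNS:
            if pattern.search(user_message):
                raise ValueError("Input rejected by guardrail policy")
        return user_message.strip()

    def apply_output_guardrails(model_response: str, max_chars: int = 4000) -> str:
        """Post-process the response: mask obvious number sequences and cap length."""
        masked = re.sub(r"\b\d{16}\b", "[REDACTED]", model_response)
        return masked[:max_chars]

    def guarded_call(llm_fn, user_message: str) -> str:
        """Wrap any llm_fn(prompt) -> str callable with input and output checks."""
        safe_input = apply_input_guardrails(user_message)
        raw_response = llm_fn(safe_input)
        return apply_output_guardrails(raw_response)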

Desirable Requirements

  • Experience in data engineering (Spark), message queues (RabbitMQ, Kafka), and languages such as Python, SQL, C++, and R (a brief producer/consumer sketch follows this list).
  • Proficiency in databases (Postgres, MongoDB, SQL Server, Redis) and their optimization for ML/LLM workloads, including vector databases for efficient context retrieval.
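
For the message-queue item above, here is a brief, hedged sketch of how model outputs might be published and consumed with Kafka, assuming the kafka-python client; the topic name and payload shape are illustrative only.

    import json

    from kafka import KafkaConsumer, KafkaProducer  # assumes the kafka-python package

    # Publish a model prediction to a topic (topic name and payload are illustrative).
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("llm-predictions", {"request_id": "123", "answer": "example output"})
    producer.flush()

    # A downstream service consumes predictions for logging, evaluation, or feedback loops.
    consumer = KafkaConsumer(
        "llm-predictions",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating if no messages arrive
    )
    for message in consumer:
        print(message.value)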