As a Software Engineer with AI/ML Platform Experience, you will play a pivotal role on the team responsible for designing and developing the next generation of scalable Kubernetes infrastructure and machine learning platforms that support both traditional ML and Large Language Models (LLMs). This is a position for experienced engineers in which you will lead the technical direction, ensuring the performance, reliability, and scalability of AI systems while collaborating closely with data scientists, researchers, and other engineering teams.
Core Responsibilities
As a Software Engineer with AI/ML Platform Experience, you will:
- Architect and design scalable Kubernetes platforms supporting both traditional ML and Large Language Models (LLMs).
- Provide client support for hosting AI/ML workloads on the Kubernetes platform, along with Re-ranking and Embedding as a Service (RAGaaS).
- Lead the development of end-to-end ML pipelines, including data ingestion, model training, evaluation, and deployment (see the illustrative pipeline sketch after this list).
- Drive AIOps initiatives across the Middleware platform by collaborating with cross-functional teams across the organization to operationalize and optimize ML models effectively.
- Define and implement standard MLOps practices such as monitoring, logging, and automated maintenance of models in production.
- Develop infrastructure automation tools and frameworks to improve efficiency across teams.
- Ensure platform reliability, scalability, and performance through meticulous engineering practices.
- Conduct code reviews, establish standard processes, and mentor junior engineers.
- Stay updated on the latest trends in AI/ML to influence platform enhancements.
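For illustration, here is a minimal sketch of the kind of end-to-end pipeline described above, expressed with Kubeflow Pipelines v2 (one of the MLOps tools listed under Preferred Qualifications). This is not the team's actual pipeline; the component names, base image, and URIs are placeholders.

```python
# Illustrative only: a toy ingest -> train -> evaluate pipeline with kfp v2.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def ingest(source_uri: str) -> str:
    # Placeholder: pull raw data from source_uri and return a dataset path.
    return f"{source_uri}/snapshot"


@dsl.component(base_image="python:3.11")
def train(dataset_path: str, epochs: int) -> str:
    # Placeholder: train a model on dataset_path and return a model URI.
    return f"{dataset_path}/model-epochs-{epochs}"


@dsl.component(base_image="python:3.11")
def evaluate(model_uri: str) -> float:
    # Placeholder: compute and return an evaluation metric for model_uri.
    return 0.0


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(source_uri: str = "s3://example-bucket/raw", epochs: int = 3):
    data = ingest(source_uri=source_uri)
    model = train(dataset_path=data.output, epochs=epochs)
    evaluate(model_uri=model.output)


if __name__ == "__main__":
    # Compile to an IR YAML that a Kubeflow Pipelines backend can run.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```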
Minimum Qualifications / Requirements -
- Experience: 10+ years of software engineering experience, including 2+ years in AI and machine learning-related roles.
- Expertise in Golang or Python, with hands-on experience with Kubernetes and ML frameworks (TensorFlow, PyTorch).
- Ability to drive architecture discussions around software development and the building of AI/ML supporting platforms and services, such as the Kubernetes platform and Re-ranking and Embedding as a Service (RAGaaS).
- Subject-matter expertise in microservice architecture, SSO, and containerized cloud applications.
- Proven track record of designing and deploying scalable machine learning systems in production.
- Deep understanding of ML algorithms, data pipelines, and optimization techniques.
- Experience building CI/CD pipelines for ML workflows, including model monitoring and retraining (see the monitoring sketch after this list).
- Proficiency in cloud platforms and orchestration tools for distributed systems.
- Strong problem-solving and debugging skills for complex, large-scale systems.
- Experience in mentoring engineers and driving technical decision-making.
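As a small illustration of the model monitoring and retraining bullet above, here is a sketch of the kind of health check that could gate an automated retraining job. The metric, threshold, and names are hypothetical, not a prescribed policy.

```python
# Illustrative only: flag a deployed model for retraining when its live metric
# drifts too far below the offline baseline.
from dataclasses import dataclass


@dataclass
class ModelHealth:
    baseline_auc: float  # metric measured at deployment time
    live_auc: float      # metric computed from recent production traffic


def needs_retraining(health: ModelHealth, max_drop: float = 0.03) -> bool:
    # Trigger retraining when live performance drops more than max_drop below baseline.
    return (health.baseline_auc - health.live_auc) > max_drop


if __name__ == "__main__":
    health = ModelHealth(baseline_auc=0.91, live_auc=0.86)
    if needs_retraining(health):
        print("Performance degraded; kick off the retraining pipeline.")
```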
Preferred Qualifications / Requirements -
- Kubernetes and Container Orchestration:
- Expert understanding of Kubernetes for managing production-grade systems and ensuring scalability.
- Expertise with Docker and the orchestration of complex services.
- Software Development:
- Expertise in Golang or Python.
- Ability to develop and enforce a secure software development lifecycle.
- MLOps Tools and Frameworks:
- Strong hands-on experience architecting and optimizing workflows using Kubeflow Pipelines, KServe, Airflow, and MLflow.
- Ability to design and implement efficient CI/CD pipelines for ML systems.
- Large Language Models (LLMs):
- Experience with LangChain and with designing RAG systems (see the retrieval sketch after this list).
- Experience integrating and scaling vector databases (e.g., Pinecone, FAISS) for real-world applications.
- Distributed Systems and Microservices:
- Proven track record of designing and leading the development of distributed systems.
- Experience with implementing robust inter-service communication patterns and solving scalability issues.
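As an illustration of the RAG and vector-database items above, here is a minimal retrieval sketch using FAISS. `embed()` is a stand-in for a real embedding model, and the chunks, query, and prompt are placeholders; a production system would add chunking, persistence, and an approximate index (e.g., IVF or HNSW).

```python
# Illustrative only: embed document chunks, index them in FAISS, retrieve the
# nearest chunks for a query, and assemble a grounded prompt for an LLM.
import faiss
import numpy as np


def embed(texts: list[str]) -> np.ndarray:
    # Placeholder: swap in a real embedding model (e.g., a sentence encoder).
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), 384)).astype("float32")


chunks = ["Doc chunk A ...", "Doc chunk B ...", "Doc chunk C ..."]
vectors = embed(chunks)
faiss.normalize_L2(vectors)                  # cosine similarity via inner product
index = faiss.IndexFlatIP(vectors.shape[1])  # exact search; use IVF/HNSW at scale
index.add(vectors)

question = "What does chunk B say?"
query = embed([question])
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)         # top-2 most similar chunks
context = "\n".join(chunks[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```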