What you'll be doing:
Develop and maintain the infrastructure for LLM-based applications specifically adapted to the chip design and hardware domain.
Develop and maintain LLM-based applications that serve hardware engineers, such as QA bots and code generators.
Collaborate with hardware chip designers and LLM research teams to understand the specific needs and challenges of GPU design, and ensure the LLM infrastructure is well suited to those needs.
Collaborate with LLM research teams to collect and organize training and fine-tuning data for hardware-specific language models.
Optimize the infrastructure for performance, scalability, and reliability, and ensure the secure and efficient management of data.
Stay updated with the latest industry trends in AI and machine learning, and continuously look for opportunities to apply these advancements to improve the LLM infrastructure.
What we need to see:
BS in Computer Science, a related field, or equivalent experience.
5+ years of experience.
Experience in developing and maintaining AI or machine learning infrastructure, preferably in the context of large language models.
Strong proficiency in Python and web development, and familiarity with LLM-related techniques such as LangChain, vector databases, and prompt engineering.
Understanding of chip design and related computational and data challenges.
Experience with data management, including document cleaning, transformation, and secure storage.
Excellent problem-solving skills and the ability to work effectively in a team.
In depth understanding of Machine Learning / Deep Learning / NLP concepts.
Ways to stand out from the crowd:
You have crafted and developed production-quality microservices.
Strong technical background in cloud/distributed infrastructure
Familiarity with front-end development using React or Vue.js is an excellent plus.
Strong understanding of SQL & NoSQL Data platforms.
You will also be eligible for equity and benefits.