AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators.
The team works side by side with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on Neuron devices across a range of models such as Llama 3.3 70B, Llama 3.1 405B, DBRX, and Mixtral.
Key job responsibilities
Responsibilities of this role include adapting the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models. Working across teams and organizations is key.
- 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
- 5+ years of programming using a modern programming language such as Java, C++, or C#, including object-oriented design experience
- Fundamentals of machine learning models, their architectures, and training and inference lifecycles, along with hands-on experience optimizing model performance
- Master's degree in computer science or equivalent
- Hands-on experience with PyTorch or JAX, preferably including developing and deploying LLMs in production on GPUs, Neuron, TPUs, or other AI acceleration hardware