AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators.
This role is responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models like Llama2, GPT2, GPT3 and beyond, as well as Stable Diffusion, Vision Transformers, and many more.

The ML Apps team works side by side with compiler engineers and runtime engineers to create, build, and tune distributed inference solutions on Trn1. Experience optimizing inference performance for both latency and throughput on these large models using Python and JAX is a must. DeepSpeed and other distributed inference libraries are central to this work, and extending them for the Neuron-based system is key.
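To give a rough flavor of the distributed inference work described above, here is a minimal, hedged data-parallel sketch in plain JAX. It is illustrative only: the function model_apply, the parameter shapes, and the batch sizes are hypothetical stand-ins, and real Trainium deployments would typically layer tensor and pipeline parallelism (e.g. via DeepSpeed or the Neuron SDK) on top of this basic pattern.

```python
# Hypothetical, minimal data-parallel inference sketch in JAX (illustrative only;
# model_apply, params, and all shapes are placeholder assumptions, not part of
# the Neuron SDK or this job description).
import jax
import jax.numpy as jnp

def model_apply(params, x):
    # Stand-in for a real model's forward pass (e.g. one layer of an LLM).
    return jnp.dot(x, params["w"]) + params["b"]

params = {"w": jnp.ones((16, 4)), "b": jnp.zeros((4,))}

# Replicate the parameters onto every visible device and shard the batch across
# them; pmap then runs the forward pass on each shard in parallel.
devices = jax.local_devices()
replicated_params = jax.device_put_replicated(params, devices)
batch = jnp.ones((len(devices), 8, 16))   # leading axis = one shard per device

parallel_apply = jax.pmap(model_apply)
out = parallel_apply(replicated_params, batch)
print(out.shape)                          # (num_devices, 8, 4)
```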
Key job responsibilities
This role will help lead the effort to build distributed inference support into PyTorch and TensorFlow using XLA and the Neuron compiler and runtime stacks. It will also help tune these models to ensure the highest performance and maximize their efficiency when running on customers' AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers. Strong software development skills in C++/Python and ML knowledge are both critical to this role.
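As a hedged illustration of the PyTorch-on-XLA path mentioned above (not an official Neuron example), the snippet below runs a toy model through the torch_xla lazy-tensor flow; on Trainium/Inferentia hosts the Neuron SDK supplies the XLA backend that compiles these graphs for the accelerator. The model and tensor sizes are placeholders.

```python
# Hypothetical sketch of running a PyTorch model through XLA; requires the
# torch-xla package. Model architecture and sizes are arbitrary placeholders.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

# Acquire the XLA device; with the Neuron SDK installed on a Trn1/Inf-style
# host this resolves to a NeuronCore, elsewhere to whatever backend is present.
device = xm.xla_device()

model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU()).to(device)
x = torch.randn(8, 1024).to(device)

with torch.no_grad():
    y = model(x)

# Operations are recorded lazily; mark_step() cuts the graph and hands it to
# the XLA compiler (the Neuron compiler on Trainium/Inferentia backends).
xm.mark_step()
print(y.shape)
```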
A day in the life
As you design and code solutions to help our team drive efficiencies in software architecture, you’ll create metrics, implement automation and other improvements, and resolve the root cause of software defects. You’ll also:
- Participate in design discussions, code review, and communicate with internal and external stakeholders.
- Work in a startup-like development environment, where you’re always working on the most important stuff.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
About AWS
Work/Life Balance
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Hybrid Work
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language
- 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience
- Master's degree in computer science or equivalent
- Prior software engineering expertise with PyTorch/JAX/TensorFlow, distributed libraries and frameworks, and end-to-end model training and inference deployments. The group offers many opportunities to optimize and scale large deep learning models on the Trainium architecture.