AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them.

As the Software Development Manager for the Tools Team, you will be responsible for leading a talented team of engineers to develop and maintain high-performance monitoring and profiling tools for machine learning applications and AI accelerators. You will oversee the design, development, and deployment of the Neuron Profiler and other Neuron Tools. The profiler plays a crucial role for internal and external customers in optimizing AI workloads on hardware platforms such as Trainium and Inferentia devices by providing deep insights into performance bottlenecks and system behavior.

In this role, you will manage the full development life cycle of the Neuron Profiler/Tools toolchain, ensuring scalability, reliability, and usability. You will collaborate with cross-functional teams to ensure that our C++ compiler and runtime generate the key information customers need to understand and optimize the performance of our custom hardware. Additionally, you will drive innovations that allow the profiler to support multiple frameworks, such as PyTorch, TensorFlow, and XLA.
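As a rough illustration of the kind of performance insight the team delivers, the sketch below uses the generic torch.profiler API on a toy PyTorch model to surface operator-level bottlenecks. It is not the Neuron Profiler itself (whose interfaces are not described in this posting); the model, shapes, and device choice are illustrative only.

```python
# Minimal sketch: workload-level profiling with the standard torch.profiler,
# standing in for the kind of bottleneck analysis the Neuron Profiler provides.
import torch
import torch.nn as nn
from torch.profiler import profile, record_function, ProfilerActivity

# Toy model and input batch (illustrative sizes only).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
inputs = torch.randn(64, 512)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("forward_pass"):
        model(inputs)

# Print the hottest operators by total CPU time -- the sort of data customers
# use to find and fix performance bottlenecks in training/inference workloads.
print(prof.key_averages().sort_by("cpu_time_total").table(row_limit=10))
```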
A day in the life
You will work with executive leadership and other senior management and technical leaders to define product direction and deliver it to customers. We build massive-scale distributed training and inference solutions. This organization builds the full stack of software, servers, and chips to accelerate machine learning at the highest scale.
Work/Life Balance
Mentorship & Career Growth
- 3+ years of engineering team management experience
- 7+ years of experience working directly within engineering teams
- 3+ years of experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
- Experience partnering with product or program management teams
- Experience in C++, Go, and Python
- 2+ years of experience leading teams working in machine learning development, including building and training large models with PyTorch and/or TensorFlow on large distributed fleets of GPUs or other accelerated systems
- Experience with Linux distributions such as Ubuntu or CentOS, kernel development, and tooling such as perf and gdb
- Experience with performance profiling, tracing, and analysis of AI training/inference applications
- Experience with large-scale, distributed AI training/inference applications, including libfabric, MPI, Slurm, and EKS
- Experience with fleet monitoring, debugging, and reliability
- Knowledge of AI-powered optimization suggestions for profiling would be an advantage for this position