AWS AI Research & Engineering (AIRE) is looking for scientists and engineers in Tuebingen, Germany, to optimize foundation models for inference. At AIRE, we apply compiler, high-performance computing, and computer architecture techniques, among others, to optimize the performance of foundation model execution, including training and inference. Join us as an integral part of a team with diverse experience in this space. You will invent, implement, and deploy state-of-the-art machine learning algorithms and systems to improve foundation model inference. To this end, you will interact closely with our customers (product orgs) and with the academic and research communities.

About AWS

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let that stop you from applying.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
Qualifications

- PhD, or a Master's degree and experience in CS, CE, ML, or a related field
- Patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python, or a related language
- Experience with machine learning
- Experience with mathematical optimization, parallel and distributed computing, and high-performance computing
- Solid technical understanding of modern deep learning architectures like Transformers
- Experience with deep learning frameworks like PyTorch
- Experience with programming hardware accelerators (e.g., GPU / TPU / Neuron)
- Experience with inference optimization of foundation models (e.g., model compression techniques like distillation, pruning, sparsification, and quantization; architectural optimizations like mixture of experts; decoding optimizations like speculative decoding and adaptive inference; system-level optimizations like distributed inference, persistent KV caching, and dynamic batching)
- Experience with inference engines (e.g., vLLM, TGI)
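To give a flavor of the simplest technique named above, the sketch below shows symmetric absmax weight quantization to 8-bit integers in plain Python. It is an illustrative toy, not AWS's or any framework's implementation; real systems quantize per-channel or per-group tensors and handle activations as well.

```python
def quantize_absmax(weights, bits=8):
    """Symmetric absmax quantization: map floats onto signed integers.

    The largest-magnitude weight defines the scale, so the integer grid
    covers [-|w|_max, +|w|_max] without clipping any value.
    """
    qmax = 2 ** (bits - 1) - 1                       # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax      # float units per step
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integer codes."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9931]                # toy weight vector
q, scale = quantize_absmax(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Rounding error is bounded by half a quantization step (scale / 2).
```

The appeal for inference is that int8 codes take a quarter of the memory of float32 weights, which shrinks the memory-bandwidth cost that usually dominates foundation-model decoding; the price is the bounded rounding error computed above.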