You will work directly with the most important customers (across segments) in the GenAI model training and inference space, helping them adopt and scale large-scale workloads (e.g., foundation models) on AWS. Your work will span model performance evaluations, developing demos and proofs of concept, building GTM plans, and external and internal evangelism.

Key job responsibilities
You will help develop the industry's best cloud-based solutions to grow the GenAI business. Working closely with our engineering teams, you will help enable new capabilities for our customers to develop and deploy GenAI workloads on AWS. You will enable the AWS technical community, solutions architects, and sales with customer-centric value propositions and demos covering end-to-end GenAI on the AWS cloud.

You will possess a technical and business background that enables you to drive an engagement and interact at the highest levels with startups, enterprises, and AWS partners. You will have the technical depth and business experience to articulate both the potential and the challenges of GenAI models and applications to engineering teams and C-level executives. This requires deep familiarity across the stack: compute and storage infrastructure (Amazon EC2, Lustre), ML frameworks (PyTorch, JAX), orchestration layers (Kubernetes, Slurm), parallel computing (NCCL, MPI), MLOps, as well as target use cases in the cloud.

You will drive the development of the GTM plan for building and scaling GenAI on AWS, interact with customers directly to understand their business problems, and help them define and implement scalable GenAI solutions (often via proofs of concept). You will also work closely with account teams, research scientists, and product teams to drive model implementations and new solutions.

This is an opportunity to be at the forefront of technological transformation as a key technical leader. Additionally, you will work with the AWS ML and EC2 product teams to shape product vision and prioritize features for AI/ML frameworks and applications. A keen sense of ownership, drive, and scrappiness is a must.
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.
Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
- Bachelor's degree in computer science, engineering, mathematics, or equivalent
- 8+ years of experience in specific technology domains (e.g., software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics)
- 3+ years of experience in the design, implementation, or consulting of applications and infrastructure
- 5+ years building or optimizing computational applications for large-scale HPC systems (e.g., physics-based simulations) to take advantage of high-performance networking (e.g., Amazon EFA, InfiniBand, RoCE), distributed parallel filesystems (e.g., Lustre, BeeGFS, GPFS), and accelerators (e.g., GPUs, custom silicon)
- Understanding of deep learning training and inference workloads and their requirements for high-performance compute, network, and storage
- 5+ years of experience in infrastructure architecture, database architecture, and networking
- Experience working with end user or developer communities