What You’ll Be Doing:
Build robust AI/HPC infrastructure for new and existing customers.
Support operational and reliability aspects of large-scale AI clusters, focusing on performance at scale, training stability, real-time monitoring, logging, and alerting.
Engage in and improve the whole lifecycle of services from inception and design through deployment, operation, and refinement.
Your primary focus will be on understanding the AI workload and how it interacts with other parts of the system, such as networking, storage, deep learning frameworks, and data-cleaning tools.
Help maintain services once they are live by measuring and monitoring the progress of AI jobs and by helping engineering teams design solutions for more robust training at scale.
Provide feedback to internal teams by filing bugs, documenting workarounds, and suggesting improvements.
Regional travel is required for on-site visits with customers.
What We Need to See:
BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or another engineering field.
At least 8 years of work or research experience in software development with Python, C++, or other languages.
Track record of medium- to large-scale AI training and an understanding of key libraries used for NLP/LLM/VLA training (NeMo Framework, DeepSpeed, etc.).
Excitement to work across multiple levels and teams throughout the organization (Engineering, Product, Sales, and Marketing), with the ability to stay focused in a constantly evolving environment and to multitask in a fast-paced setting.
Drive, with strong analytical and problem-solving skills, plus strong time-management and organizational skills for coordinating multiple initiatives, priorities, and rollouts of new technology and products into highly sophisticated projects.
A self-starter with a growth mindset, a passion for continuous learning, and a habit of sharing findings across the team.
Excellent verbal and written communication skills and technical presentation skills in English.
Ways to Stand Out from The Crowd:
Experience working with large transformer-based architectures for NLP, CV, ASR, or other domains, and experience running large-scale distributed DL training.
Understanding of HPC systems: data center design, high-speed interconnects such as InfiniBand, and cluster storage and scheduling, with related design and/or management experience.
Proven experience with one or more Tier-1 clouds (AWS, Azure, GCP, or OCI) and with cloud-native architectures and software.
Technical leadership, a strong understanding of NVIDIA technologies, and a track record of success working with customers.
Expertise with parallel filesystems (e.g., Lustre, GPFS, BeeGFS, WekaIO) and high-speed interconnects (InfiniBand, Omni-Path, and Gig-E).
Strong coding and debugging skills, and demonstrated expertise in one or more of the following areas: Machine Learning, Deep Learning, Slurm, Docker, Kubernetes, Singularity, MPI, MLOps, LLMOps, Ansible, Terraform, and other high-performance AI cluster solutions.
Proficiency in deploying GPU applications on Slurm and Kubernetes, and experience with high-performance or large-scale computing environments.
Hands-on experience with DGX Cloud, NVIDIA AI Enterprise software, Base Command Manager, NeMo, and NVIDIA Inference Microservices.
Experience with integration and deployment of software products in production enterprise environments, and microservices software architecture.