

What you'll be doing:
Develop new Deep Learning models for automatic speech recognition, speech synthesis, neural machine translation, and natural language processing
Design new large-scale training algorithms
Open-source models using the NeMo conversational AI framework (a minimal usage sketch follows this list)
Mentor interns
Publish research papers at top speech and NLP conferences
Collaborate with universities and research teams.
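For context on the NeMo item above: NeMo ships pretrained conversational AI models that can be loaded and run in a few lines of Python. The sketch below is illustrative only; the checkpoint name and audio path are placeholders, and the exact return type of transcribe() varies across NeMo versions.

import nemo.collections.asr as nemo_asr

# Load a pretrained English ASR checkpoint (model name is an illustrative example).
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

# Transcribe a local 16 kHz mono WAV file (path is a placeholder).
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])  # a string or hypothesis object, depending on NeMo version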
What we need to see:
PhD in Computer Science or Electrical Engineering (or equivalent experience)
Proven understanding of Deep Learning for Natural Language Processing or Speech Recognition
At least 5 years of research experience in speech recognition or NLP
Excellent Python programming skills
Experience with PyTorch
Strong publications record
Ways to stand out from the crowd:
Contribution to open-source projects
Serving as a reviewer for one of the top speech conferences
You will also be eligible for equity and benefits.

You will build observability systems for data centers enabling EDA workflows. In this role, you will develop, deploy, and maintain observability solutions for multiple CPU and GPU clusters.
What You'll Be Doing:
Collaborate with HW, and SW engineering teams to deliver observability solutions that meet their needs in EDA clusters.
Develop, test, and deploy data collectors, pipelines, visualization, and retrieval services (a sketch of a simple collector follows this list).
Define data collection and retention policies to balance network bandwidth, system load, and storage capacity costs with data analysis requirements.
Work in a diverse team to provide operational and strategic data to empower our engineers and researchers to improve performance, productivity, and efficiency.
Continuously improve quality, workloads, and processes through better observability.
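As a loose illustration of the data-collector bullet above, a minimal custom exporter built on the prometheus_client library might look like the sketch below; the metric names and the fake sampling routine are hypothetical and would be replaced by real scheduler or license-server queries.

import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauges for an EDA cluster: queued jobs and license utilization.
pending_jobs = Gauge("eda_pending_jobs", "Number of jobs waiting in the EDA queue")
license_util = Gauge("eda_license_utilization", "Fraction of EDA tool licenses in use")

def sample_cluster():
    # Placeholder for a real scheduler/license-server query.
    pending_jobs.set(random.randint(0, 500))
    license_util.set(random.random())

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics
    while True:
        sample_cluster()
        time.sleep(30)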
What We Need to See:
Experience developing large scale, distributed observability systems.
Ability to collaborate with data scientists, researchers, and engineering teams to identify high value data for collection and analysis.
Experience with turning raw data into actionable reports
Experience with observability platforms such as Apache Spark, Elasticsearch/OpenSearch, Grafana, Prometheus, and similar open-source tools
Python programming experience and use of API calls
Passion for improving the productivity of others
Excellent planning and interpersonal skills
Flexibility/adaptability working in a dynamic environment with changing requirements
MS (preferred) or BS in Computer Science, Electrical Engineering, or related field or equivalent experience.
8+ years of proven experience.
Ways To Stand Out from The Crowd:
Background in computer science, EDA software, open-source software, infrastructure technologies, and GPU technology.
Prior experience in infrastructure software, production application development, release and support methodology, and DevOps
Experience in the management of datacenters and large-scale distributed computing
Experience working with EDA developers
Consistent track record of driving process improvements and measuring efficiency, a passion for sharing knowledge, and experience driving complex projects end-to-end.
You will also be eligible for equity and benefits.

What you'll be doing:
Use AI to solve product challenges in gaming and other interactive experiences.
Build upon the latest research to create world-class conversational pipelines for AI assistants and agents.
Improve and fine-tune language models and retrieval-augmented generation solutions for accuracy and performance (a toy retrieval sketch follows this list).
Build prototypes to demonstrate real-life applications of your ideas and to accelerate productization.
Collaborate with NVIDIA's internal and external teams, including AI/DL researchers, hardware architects, and software engineers.
Participate in technology transfers to and from teams across NVIDIA.
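To make the retrieval-augmented generation bullet above concrete, the sketch below shows only the retrieval half of such a pipeline; the hash-based embedding is a deliberate stand-in for a real sentence encoder, and the documents and query are invented examples.

import hashlib

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic placeholder embedding; a real pipeline would use a trained encoder.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "little")
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

documents = [
    "Press M to open the world map.",
    "Crafting a sword requires wood and two iron ingots.",
    "The boss in the ice cave is weak to fire damage.",
]
doc_matrix = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = doc_matrix @ embed(query)  # cosine similarity, since vectors are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would then be inserted into the LLM prompt as grounding context.
print(retrieve("How do I defeat the ice cave boss?"))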
What we need to see:
PhD or Master’s degree in Computer Science/Engineering, Machine Learning, AI, or related fields; or equivalent experience.
12+ years of work experience, with the last 5+ years focused on language models, AI assistants, and agents.
Proficiency in C, C++, and Python, with the ability to write high-performance production code.
Experience with GPU programming, CUDA, and system optimizations is a significant plus.
A track record of proven research excellence, demonstrated through presentations, demos, or publications at leading venues such as GDC, ICCV/ECCV, or SIGGRAPH, or through other research artifacts such as software projects or significant product development.
AI-powered machines can learn, reason, and interact with people, thanks to GPU deep learning. We offer competitive salaries and great benefits as a top tech employer with leading talent.
You will also be eligible for equity and benefits.

What you'll be doing:
Lead documentation planning and prioritization sessions with cross-functional partners, embedding documentation requirements into Product Sprint Goal PRDs from day one
Manage documentation workflow using Kanban, maintaining clear ownership, dependencies, and status visibility while tracking delivery through sprint cycles and product releases
Champion Context Kits (structured prompts and guidelines) that help distributed teams build quality documentation with AI assistance, streamlining reporting and revealing operational insights
Report on critical metrics including coverage, cycle time, sprint predictability, and developer satisfaction, identifying blockers and resolving delivery impediments
Work closely with Technical Program Managers to integrate documentation checkpoints into release trains, facilitating backlog refinement, stand-ups, and retrospectives
What we need to see:
Bachelor’s degree (or equivalent experience) with 8+ years in program management, technical operations, or agile delivery, with strong proficiency in Jira, Confluence, and agile tracking tools
Proven track record coordinating work across matrixed organizations with clear communication style—leading effective meetings, writing streamlined updates, and aligning collaborators
Active AI tool user (ChatGPT, Claude, Copilot, or similar) who demonstrates data-driven decision-making and can influence without authority across Product, Engineering, Marketing, and Customer Success
Ways to stand out from the crowd:
Experience coordinating developer documentation in platform or SaaS companies, working alongside Technical Program Managers in complex product organizations
Hands-on experience with Agile/Scrum/Kanban, continuous delivery, and docs-as-code workflows (Git, Markdown, static site generators)
Demonstrated process improvements that measurably boosted team efficiency, with knowledge of developer platforms, SDKs, APIs, or simulation technologies
Background in gaming, graphics, AI, or high-performance computing with proven AI workflow optimization—status reports, meeting summaries, workflow analysis, documentation reviews
Developed tailored GPTs, prompt libraries, Context Kits, or reusable templates that optimized team efficiency and content quality
You will also be eligible for equity and benefits.

Are you a rare mix of technical depth, ecosystem savvy, and passion for helping developers succeed?
What You’ll Be Doing:
Engage and support ISVs, system integrators, and manufacturers using AI to transform industrial operations.
Partner with developers to help them integrate NVIDIA’s latest vision AI technologies into scalable industrial solutions.
Collaborate with product, engineering, and marketing teams to amplify developer enablement and ecosystem growth.
Drive early adoption of NVIDIA Metropolis and related SDKs, ensuring partner success through hands-on guidance and technical onboarding.
Identify and elevate lighthouse partners demonstrating best-in-class industrial AI use cases.
What We Need To See:
8 years of proven ability in a technical or developer-facing role, ideally within AI, industrial automation, or OT systems integration.
Bachelor’s or advanced degree in computer science, engineering, or related field, or equivalent experience.
Proven success building and supporting developer ecosystems or partner networks.
Strong technical understanding of AI, machine learning, video analytics, or related technologies.
Excellent communication and relationship-building skills, with the ability to convey complex technical concepts clearly across technical and business audiences.
Ability to collaborate multi-functionally to accelerate adoption and scale developer success.
Ways To Stand Out From The Crowd:
Experience applying AI in manufacturing or industrial automation, including computer vision, robotics, or digital-twin workflows.
Experience in manufacturing operations or adjacent industrial fields, with a deep understanding of the unique challenges, risk tolerance, and change-management dynamics that shape technology adoption in these traditionally conservative industries.
Hands-on familiarity with AI for computer vision, robotics, or NVIDIA platforms (Metropolis, Omniverse, CUDA-X)
Proven success enabling ISVs, manufacturers, or system integrators in sectors such as manufacturing, logistics, or energy, guiding them from pilot to scaled deployment.
Passion for helping developers and operators bridge the gap between innovation and production reality through intelligent, AI-powered systems.
You will also be eligible for equity and benefits.

What you'll be doing:
Lead, mentor, and scale a high-performing engineering team focused on deep learning inference and GPU-accelerated software.
Drive the strategy, roadmap, and execution of NVIDIA’s inference frameworks engineering, focusing on SGLang (a minimal serving sketch follows this list).
Partner with internal compiler, libraries, and research teams to deliver end-to-end optimized inference pipelines across NVIDIA accelerators.
Oversee performance tuning, profiling, and optimization of large-scale models for LLM, multimodal, and generative AI applications.
Guide engineers in adopting best practices for CUDA, Triton, CUTLASS, and multi-GPU communications (NIXL, NCCL, NVSHMEM).
Represent the team in roadmap and planning discussions, ensuring alignment with NVIDIA’s broader AI and software strategies.
Foster a culture of technical excellence, open collaboration, and continuous innovation.
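For orientation on the SGLang focus above, the sketch below queries a separately launched SGLang server through its OpenAI-compatible endpoint; the port, model identifier, and prompt are placeholders, and the request shape assumes the standard chat-completions schema.

# Assumes a server was started separately, for example:
#   python -m sglang.launch_server --model-path <hf-model-id> --port 30000
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "default",  # match the served model name if the server requires it
        "messages": [{"role": "user", "content": "Summarize CUDA graphs in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])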
What we need to see:
MS, PhD, or equivalent experience in Computer Science, Electrical/Computer Engineering, or a related field.
6+ years of software development experience, including 3+ years in technical leadership or engineering management.
Strong background in C/C++ software design and development; proficiency in Python is a plus.
Hands-on experience with GPU programming (CUDA, Triton, CUTLASS) and performance optimization.
Proven record of deploying or optimizing deep learning models in production environments.
Experience leading teams using Agile or collaborative software development practices.
Ways to stand out from the crowd:
Significant open-source contributions to deep learning or inference frameworks such as PyTorch, vLLM, SGLang, Triton, or TensorRT-LLM.
Deep understanding of multi-GPU communications (NIXL, NCCL, NVSHMEM) and distributed inference architectures.
Expertise in performance modeling, profiling, and system-level optimization across CPU and GPU platforms.
Proven ability to mentor engineers, guide architectural decisions, and deliver complex projects with measurable impact.
Publications, patents, or talks on LLM serving, model optimization, or GPU performance engineering.
You will also be eligible for equity and benefits.

What You’ll Be Doing:
Take charge of the technical integration of quantum hardware (neutral atom, trapped ion, superconducting) with HPC systems via APIs, middleware, and orchestration layers like CUDA-Q (a minimal kernel sketch follows this list).
Formulate and refine hybrid workflows to enable seamless task distribution between GPU clusters and quantum devices.
Partner closely with quantum hardware suppliers to set up connectivity, control interfaces, and co-design specifications to improve performance, decrease latency, and enable data exchange.
Partner with internal scientists and engineers to install & optimize applications, deploy hybrid workloads, and evaluate system performance.
Work with control systems engineers to ensure environmental, timing, and data interfaces meet quantum hardware requirements.
Prototype and benchmark hybrid applications in materials science, chemistry, optimization, and machine learning to showcase platform capabilities.
Contribute to roadmap planning for adding new quantum modalities (superconducting, photonic) and integrating emerging SDKs.
Represent NVIDIA at technical conferences, workshops, and industry forums, showcasing our advancements and groundbreaking efforts.
Develop comprehensive user documentation and integration guides for internal use and cross-team collaboration.
Drive continuous improvement across software stacks, orchestration layers, and data pipelines connecting quantum and HPC domains.
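To illustrate the CUDA-Q orchestration mentioned in the first bullet, the sketch below samples a two-qubit Bell kernel; it is a minimal example only, and the choice of target (GPU simulator versus an attached QPU) would be configured elsewhere via cudaq.set_target().

import cudaq

@cudaq.kernel
def bell():
    q = cudaq.qvector(2)
    h(q[0])             # put the first qubit in superposition
    x.ctrl(q[0], q[1])  # entangle the pair with a controlled-X
    mz(q)               # measure both qubits

# The same kernel runs on a GPU-accelerated simulator or quantum hardware,
# depending on the configured target.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)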
What We Need to See:
12+ years of experience in HPC system administration, Linux, Slurm, application support, and data management.
Experience with quantum programming frameworks such as CUDA-Q, Qiskit, PennyLane, Cirq, or Braket.
Proficiency in Python, C++, or Rust for API integration and workflow automation.
Strong understanding of HPC systems, Slurm orchestration, and GPU-accelerated computing environments.
Understanding of quantum hardware systems encompassing neutral-atom, trapped-ion, superconducting, or photonic technologies.
Bachelor’s or Master’s degree or equivalent experience in Physics, Electrical/Computer Engineering, or Computer Science (PhD preferred).
Outstanding communication and collaborator management skills, with the ability to engage both experimental scientists and systems engineers.
Ways to Stand Out from the Crowd:
Demonstrated track record collaborating with quantum hardware providers.
Deep understanding of quantum-classical orchestration frameworks and low-latency data transfer architectures.
Familiarity with cloud-based quantum services and HPC integration standards.
Contributions to open-source quantum frameworks or involvement in academic collaborations.
Success in bridging experimental physics and HPC engineering teams.
Experience representing an organization in technical standards bodies or research consortia.
You will also be eligible for equity and benefits.
