

What You’ll Be Doing:
Lead and grow a high-impact engineering team focused on AI-enabled signal processing for the Radio Access Network (RAN).
Guide the development of deep learning models for tasks such as channel estimation, beamforming, link adaptation, and CSI compression (a minimal CSI-compression sketch follows this list).
Collaborate with global teams across architecture, research, and systems to drive proofs of concept and production-quality AI-RAN components.
Oversee integration of AI models into full-stack simulations and/or testbeds using frameworks such as PyTorch, TensorFlow, and NVIDIA Sionna.
Align project priorities with hardware-software co-design constraints and deployment scenarios on NVIDIA platforms.
Mentor team members, ensure technical excellence, and contribute to strategic direction.
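CSI compression, named above, lends itself to a compact illustration: below is a minimal PyTorch sketch of an autoencoder that compresses a flattened CSI matrix into a short codeword and reconstructs it. The antenna/subcarrier dimensions, layer widths, and synthetic data are illustrative assumptions, not requirements of the role.

```python
import torch
import torch.nn as nn

class CSICompressor(nn.Module):
    """Toy autoencoder that compresses a flattened CSI matrix into a short codeword."""
    def __init__(self, n_antennas=32, n_subcarriers=64, code_dim=64):
        super().__init__()
        in_dim = 2 * n_antennas * n_subcarriers  # real + imaginary parts
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, csi):
        # csi: (batch, 2, n_antennas, n_subcarriers) -> flatten, encode, decode
        flat = csi.flatten(start_dim=1)
        code = self.encoder(flat)
        recon = self.decoder(code).view_as(csi)
        return recon, code

if __name__ == "__main__":
    model = CSICompressor()
    csi = torch.randn(8, 2, 32, 64)            # synthetic CSI batch (assumption)
    recon, code = model(csi)
    loss = nn.functional.mse_loss(recon, csi)  # reconstruction objective
    loss.backward()
    print(code.shape, loss.item())
```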
What We Need to See:
MS or PhD in Electrical Engineering, Computer Engineering, or related field.
10+ years of overall experience in wireless communications, signal processing, or AI/ML, with 3+ years of technical leadership experience.
Deep understanding of wireless PHY/MAC systems, including MIMO, OFDM, and adaptive filtering.
Proven experience developing or deploying neural network architectures (e.g., CNNs, Transformers) in real-world AI or signal processing applications.
Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
Strong collaboration and communication skills across multi-disciplinary teams and geographies.
Ways to Stand Out from the Crowd:
Experience with AI for 5G/6G systems, AI-for-RAN architecture, or telecom-grade deployments.
Knowledge of AI-based channel estimation, model compression, real-time inference, or GPU optimization.
Familiarity with RIS, massive MIMO, or THz communication challenges.
Track record of research, publications, or open-source contributions in AI-for-wireless.

What You’ll Be Doing:
Design and prototype deep learning models for wireless signal processing tasks such as channel estimation, beam alignment, link adaptation, and scheduling (a channel-estimation sketch follows this list).
Work with simulation tools and real-world datasets to build models that generalize across diverse wireless scenarios.
Implement, train, and validate neural networks (e.g., CNNs, Transformers, GNNs) using PyTorch or TensorFlow.
Collaborate with researchers and system engineers to integrate models into full-stack RAN systems.
Optimize model performance for real-time inference and hardware acceleration.
Contribute to model evaluation, benchmarking, and deployment readiness on GPU platforms.
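As a concrete illustration of the channel-estimation task mentioned above, here is a minimal PyTorch sketch of a CNN that refines a noisy least-squares channel estimate over an OFDM resource grid. The grid size, layer widths, and synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelEstimatorCNN(nn.Module):
    """Toy CNN that refines a noisy channel estimate over an OFDM grid.

    Input/output shape: (batch, 2, symbols, subcarriers), channels = [real, imag].
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),
        )

    def forward(self, h_ls):
        # Residual refinement of the least-squares estimate.
        return h_ls + self.net(h_ls)

if __name__ == "__main__":
    model = ChannelEstimatorCNN()
    h_true = torch.randn(16, 2, 14, 72)              # synthetic "true" channel (assumption)
    h_ls = h_true + 0.1 * torch.randn_like(h_true)   # noisy least-squares estimate
    h_hat = model(h_ls)
    loss = nn.functional.mse_loss(h_hat, h_true)
    loss.backward()
    print(h_hat.shape, loss.item())
```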
What We Need to See:
MS or PhD in Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
12+ years of experience in wireless communications, signal processing, or AI/ML.
Deep understanding of communication systems (e.g., MIMO, OFDM, fading channels) and DSP fundamentals.
Strong experience in training and deploying deep learning models for time-series or signal-based tasks.
Proficiency in Python and experience with DL frameworks like PyTorch or TensorFlow.
Familiarity with tools such as MATLAB, GNU Radio, or NVIDIA Sionna for wireless simulation.
Ways to Stand Out from the Crowd:
Experience with AI for 5G/6G systems, AI-for-RAN architecture, or telecom-grade deployments.
Knowledge of AI-based channel estimation, model compression, real-time inference, or GPU optimization.
Exposure to CUDA, Triton, or real-time inference pipelines.
Contributions to research publications or open-source wireless/AI projects.

What you'll be doing:
Develop and fine-tune multi-modal AI models using NVIDIA’s TAO Toolkit and deep learning frameworks.
Contribute to the design and implementation of vision-language models (VLMs) and universal segmentation systems (a minimal image-text matching sketch follows this list).
Conduct experiments and benchmarking to evaluate model accuracy, robustness, and scalability.
Collaborate with cross-functional teams to integrate your research into production-level pipelines and NVIDIA SDKs.
Participate in research discussions, code reviews, and technical documentation to share insights and improve methodologies.
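As a toy illustration of the image-text matching behind VLM work, here is a minimal PyTorch sketch of contrastive similarity between projected image and text embeddings. The feature dimensions and random inputs stand in for real vision/text backbones and are not tied to the TAO Toolkit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVLM(nn.Module):
    """Projects image and text features into a shared space and scores similarity."""
    def __init__(self, img_dim=512, txt_dim=384, embed_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return self.logit_scale.exp() * img @ txt.t()  # (batch, batch) similarity logits

if __name__ == "__main__":
    model = ToyVLM()
    img_feats = torch.randn(8, 512)   # stand-in for a vision backbone's output
    txt_feats = torch.randn(8, 384)   # stand-in for a text encoder's output
    logits = model(img_feats, txt_feats)
    targets = torch.arange(8)         # matched image-text pairs lie on the diagonal
    loss = F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
    loss.backward()
    print(logits.shape, loss.item())
```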
What we need to see:
Currently pursuing a degree in Computer Science, Computer Engineering, or a related field.
Proven experience with machine learning, deep learning, or computer vision model development.
Strong Python programming skills and proficiency with PyTorch or similar frameworks.
Solid understanding of neural network architectures, transformers, and multi-modal learning techniques.
Excellent problem-solving abilities, attention to detail, and a collaborative mindset.
Familiarity with vision-language models, image segmentation, or large-scale pretraining is a strong plus.

What you'll be doing:
You will develop generative models for protein backbone/structure generation, prediction, and molecular docking using NVIDIA technology (a minimal generative-model sketch follows this list).
Collaborate with AI experts to implement and refine algorithms.
Conduct research and experimentation to enhance model performance.
Work closely with cross-functional teams to integrate your models into existing platforms.
Participate in regular team meetings to discuss progress and determine next research steps.
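As a toy illustration of generative modelling over protein backbones, here is a minimal PyTorch sketch of a denoiser trained to predict the noise added to C-alpha coordinates, in the spirit of diffusion-style generation. The residue count, network, and noise schedule are illustrative assumptions, not NVIDIA's production models.

```python
import torch
import torch.nn as nn

class BackboneDenoiser(nn.Module):
    """Toy MLP that predicts the noise added to C-alpha coordinates of shape (n_res, 3)."""
    def __init__(self, n_res=64, hidden=256):
        super().__init__()
        self.n_res = n_res
        self.net = nn.Sequential(
            nn.Linear(n_res * 3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_res * 3),
        )

    def forward(self, noisy_coords, t):
        # noisy_coords: (batch, n_res, 3); t: (batch, 1) noise level in [0, 1]
        x = torch.cat([noisy_coords.flatten(1), t], dim=-1)
        return self.net(x).view(-1, self.n_res, 3)

if __name__ == "__main__":
    model = BackboneDenoiser()
    coords = torch.randn(4, 64, 3)              # synthetic backbone coordinates (assumption)
    t = torch.rand(4, 1)
    noise = torch.randn_like(coords)
    noisy = torch.sqrt(1 - t)[..., None] * coords + torch.sqrt(t)[..., None] * noise
    pred = model(noisy, t)
    loss = nn.functional.mse_loss(pred, noise)  # standard denoising objective
    loss.backward()
    print(pred.shape, loss.item())
```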
What we need to see:
Currently pursuing a degree in Computer Science, Computer Engineering, or a related field.
Proven experience with machine learning and generative AI model development.
Strong Python programming skills, including common libraries and deep learning frameworks such as PyTorch.
Outstanding problem-solving abilities and strong attention to detail.
Ability to work collaboratively in a diverse and inclusive environment.
Familiarity with protein engineering, biotechnology, and molecular biology is a strong plus.

What You’ll Be Doing:
Build, lead and scale world-class engineering teams in Vietnam, collaborating with global counterparts across system software, data science, and AI platforms.
Drive the design, architecture, and delivery of high-performance system software platforms that power NVIDIA’s AI products and services.
Partner with global teams across Machine Learning, Inference Services, and Hardware/Software integration to ensure performance, reliability, and scalability.
Oversee the development and optimization of AI delivery platforms in Vietnam, including NIMs, Blueprints, and other flagship NVIDIA services.
Engage with open-source and enterprise data and workflow ecosystems (e.g., Temporal, GitLab DevOps Platform, RAPIDS, NeMo Curator, Morpheus) to advance accelerated AI factory, data science, and data engineering workloads.
Champion continuous integration, continuous delivery, and engineering best practices across multi-site R&D Centers.
Collaborate with product management and cross-functional stakeholders to ensure enterprise readiness and customer impact.
Develop and deploy standard processes for large-scale, distributed system testing, encompassing stress, scale, failover, and resiliency testing.
Ensure security and compliance testing aligns with industry standards for cloud and data center products.
Mentor and develop talent within the organization, fostering a culture of quality and continuous improvement.
What We Need to See:
Bachelor’s, Master’s, or PhD in Computer Science, Computer Engineering, or related field.
15+ years of overall software engineering experience, with 6+ years in senior leadership roles.
Proven record of managing large, high-performing software teams and delivering complex AI/ML or data-driven products.
Expertise in cloud, data, and accelerated computing technologies (e.g., Spark, Kubernetes, Dask, Python ecosystem, CUDA).
Experience collaborating with open-source communities and enterprise partners.
Strong leadership, communication, and cross-functional coordination skills.
Strategic mindset with hands-on technical depth in AI, system software, or large-scale data platforms.
Ways to Stand Out from the Crowd:
Experience building and scaling AI/ML Inferencing platforms from concept to production.
Background in GPU programming, CUDA optimization, or system performance engineering.
Deep understanding of microservices, distributed systems, and high-performance data architectures.
Contributions to open-source projects or developer ecosystems.
Knowledge of deep learning, RAG, embeddings, or modern text search frameworks (a minimal embedding-retrieval sketch follows).
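For the embeddings/RAG item above, here is a minimal Python sketch of embedding-based retrieval: documents and a query are embedded and ranked by cosine similarity. The hash-seeded "embedder" is a stand-in that keeps the example self-contained; a real system would use a trained embedding model and a vector index.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in embedder (hash-seeded random vector), not a real model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "little")
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[tuple[float, str]]:
    """Return the top-k documents ranked by cosine similarity to the query."""
    q = embed(query)
    scored = sorted(((float(embed(d) @ q), d) for d in docs), reverse=True)
    return scored[:k]

if __name__ == "__main__":
    docs = [
        "RAPIDS accelerates data science on GPUs.",
        "NeMo Curator prepares large text corpora.",
        "Morpheus targets cybersecurity pipelines.",
    ]
    for score, doc in retrieve("GPU data science", docs):
        print(f"{score:.3f}  {doc}")
```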

The NVIDIA Vietnam R&D Center is an integral part of NVIDIA's global network of world-class engineers and researchers. To help push the boundaries of accelerated computing, we're seeking a hands-on technical leader to architect, build, and operate a platform for AI inference and agentic applications. You'll focus on heterogeneous compute (with a strong GPU emphasis), reliability, security, and developer experience across cloud and hybrid environments.
What you will do:
Build and operate the platform for AI: multi-tenant services, identity/policy, configuration, quotas, cost controls, and paved paths for teams.
Lead inference platforms at scale, including model-serving routing, autoscaling, rollout safety (canary/A-B), reliability, and end-to-end observability.
Operate GPUs in Kubernetes: manage the NVIDIA device plugin, GPU Feature Discovery, time-slicing, MPS, and MIG partitioning; implement topology-aware scheduling and bin-packing (a bin-packing sketch follows this list).
Lead GPU lifecycle management: drivers, firmware, and runtime.
Enable virtualization strategies: vGPU (e.g., on vSphere/KVM), PCIe passthrough, mediated devices, and pool-based GPU sharing; define placement, isolation, and preemption policies.
Build secure traffic and networking: API gateways, service mesh, rate limiting, authN/authZ, multi-region routing, and DR/failover.
Improve observability and operations through metrics, tracing, and logging for DCGM/GPUs, runbooks, incident response, performance, and cost optimization.
Establish platform blueprints: reusable templates, SDKs/CLIs, golden CI/CD pipelines, and infrastructure-as-code standards.
Lead through influence: write design docs, conduct reviews, mentor engineers, and shape platform roadmaps aligned to AI product needs.
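To make the bin-packing point above concrete, here is a small Python sketch of first-fit-decreasing placement of GPU requests onto nodes, preferring the tightest fit so whole nodes stay free for large jobs. The node inventory and request sizes are invented for illustration; a real implementation would live behind the Kubernetes scheduler or a custom scheduler plugin.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    total_gpus: int
    used_gpus: int = 0
    pods: list = field(default_factory=list)

    def free(self) -> int:
        return self.total_gpus - self.used_gpus

def place(requests, nodes):
    """First-fit-decreasing bin-packing: largest GPU requests first, each placed on
    the node with the least remaining headroom that still fits, which keeps whole
    nodes free for future large (e.g., 8-GPU) jobs."""
    placements = {}
    for pod, gpus in sorted(requests.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in nodes if n.free() >= gpus]
        if not candidates:
            placements[pod] = None                      # would trigger scale-up or queueing
            continue
        best = min(candidates, key=lambda n: n.free())  # tightest fit
        best.used_gpus += gpus
        best.pods.append(pod)
        placements[pod] = best.name
    return placements

if __name__ == "__main__":
    nodes = [Node("node-a", 8), Node("node-b", 8), Node("node-c", 4)]
    requests = {"train-job": 8, "inference-a": 2, "inference-b": 2, "notebook": 1}
    for pod, node in place(requests, nodes).items():
        print(f"{pod} -> {node}")
```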
What we need to see:
15+ years building/operating large-scale distributed systems or platform infrastructure; strong record of shipping production services.
Proficiency in one or more of Python/Go/Java/C++; deep understanding of concurrency, networking, and systems design.
Expertise in containers, orchestration, and Kubernetes; cloud networking, storage, and IAM; and infrastructure-as-code.
Practical GPU platform experience: Kubernetes GPU operations (device plugin, GPU Operator, feature discovery), scheduling/bin-packing, isolation, preemption, and utilization tuning.
Virtualization background: deploying and operating vGPU, PCIe pass-through, and/or mediated devices in production.
SRE or equivalent experience: SLOs/error budgets, incident management, performance tuning, resource management, and financial oversight.
Security-first mentality: TLS/mTLS, RBAC, secrets, policy-as-code, and secure multi-tenant architectures.
Ways to stand out from a crowd:
Deep GPU ops: MIG partitioning, MPS sharing, NUMA/topology awareness, DCGM telemetry, GPUDirect RDMA/Storage.
Inference platform exposure: serving runtimes, caching/batching, autoscaling patterns, continuous delivery (agnostic to specific stacks); a micro-batching sketch follows this list.
Agentic platform exposure: workflow engines, tool orchestration, policy/guardrails for tool access and data boundaries.
Traffic/data plane: gRPC/HTTP/Protobuf performance, service mesh, API gateways, CDN/caching, global traffic management.
Tooling: Terraform/Helm/GitOps, Prometheus/Grafana/OpenTelemetry, policy engines; bare-metal provisioning experience is a plus.
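As a sketch of the batching pattern mentioned above, here is a small asyncio micro-batcher in Python: requests arriving within a short window are grouped into one batched inference call. The batch size, wait window, and dummy model function are assumptions for illustration.

```python
import asyncio

MAX_BATCH = 8
MAX_WAIT_S = 0.01  # flush a partial batch after 10 ms

async def run_model(batch):
    """Stand-in for one batched inference call (e.g., a single GPU forward pass)."""
    await asyncio.sleep(0.005)
    return [f"result-for-{item}" for item in batch]

async def batcher(queue: asyncio.Queue):
    """Group queued (payload, future) pairs into batches and dispatch them."""
    loop = asyncio.get_running_loop()
    while True:
        payload, fut = await queue.get()
        batch, futures = [payload], [fut]
        deadline = loop.time() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                payload, fut = await asyncio.wait_for(queue.get(), timeout)
            except asyncio.TimeoutError:
                break
            batch.append(payload)
            futures.append(fut)
        for result, fut in zip(await run_model(batch), futures):
            fut.set_result(result)

async def infer(queue: asyncio.Queue, payload):
    """Client side: enqueue a request and await its result."""
    fut = asyncio.get_running_loop().create_future()
    await queue.put((payload, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(batcher(queue))
    print(await asyncio.gather(*(infer(queue, i) for i in range(20))))
    worker.cancel()

if __name__ == "__main__":
    asyncio.run(main())
```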

What you'll be doing:
Be responsible for the design and delivery of the most reliable, performant, and efficient system software platform for AI products and services.
Define and develop engineering processes; manage teams of junior and experienced system software engineers
Drive continuous integration and continuous delivery of system software
Be responsible for the NVIDIA System Software Platform; work closely with the testing team, support team, and stakeholders across time zones
Innovate! Make NVIDIA's AI software and services shine in customers' eyes
What we need to see:
B.Sc. in Software Engineering, Computer Science, or a related field, or equivalent experience
8+ years of overall experience
5+ years of experience in managing a team
Knowledge of AI products and services and of the system platforms that deliver them, such as MLOps and/or continuous delivery of AI/ML products and services
Experience in team management
Creative, motivated, and value-driven person
Ways to stand out from the crowd:
Experience setting up a delivery platform for complex AI/ML products and services, from conception to final delivery
Background in low-level GPU programming, performance analysis, and optimization.
Experience in CUDA programming
Ability to be hands-on and to guide others in developing embedded software