Expoint – all jobs in one place

Cisco – High-Performance AI Compute Engineer
United States, California, San Jose 
4434649

28.07.2025

As a High-Performance AI Compute Engineer, you will be instrumental in defining and delivering the next generation of enterprise-grade AI infrastructure. As a principal engineer within our GPU and CUDA Runtime team, you will play a critical role in shaping the future of high-performance compute infrastructure. Your contributions will directly influence the performance, reliability, and scalability of large-scale GPU-accelerated workloads, powering mission-critical applications across AI/ML, scientific computing, and real-time simulation.

You will be responsible for developing low-level components that bridge user space and kernel space, optimizing memory and data transfer paths, and enabling cutting-edge interconnect technologies like NVLink and RDMA. Your work will ensure that systems efficiently utilize GPU hardware to its full potential, minimizing latency, maximizing throughput, and improving developer experience at scale.
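Minimizing copies on the data path is the core idea behind pinned memory, peer-to-peer transfers, and GPUDirect. As an illustration of the principle only (sketched in Python for brevity rather than the C/CUDA this role actually involves), a `memoryview` gives zero-copy access to a buffer, while slicing into `bytes` materializes an independent copy:

```python
# Zero-copy vs. copying access to a buffer (illustrative sketch only;
# the same principle underlies pinned-memory and peer-to-peer paths,
# where avoiding intermediate copies cuts latency on the transfer path).
buf = bytearray(b"gpu-telemetry-frame")

view = memoryview(buf)[0:3]   # zero-copy window into buf
copy = bytes(buf[0:3])        # materializes an independent copy

buf[0:3] = b"GPU"             # mutate the underlying buffer

print(bytes(view))  # b'GPU'  -- the view observes the change
print(copy)         # b'gpu'  -- the copy does not
```

The view stays coherent with the underlying storage because no bytes were duplicated; the copy reflects only the state at the moment it was taken.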

KEY RESPONSIBILITIES

  • Design, develop, and maintain device drivers and runtime components for the systems' GPU and network subsystems.
  • Work with kernel and platform components to build efficient memory-management paths using pinned memory, peer-to-peer transfers, and unified memory.
  • Optimize data movement using high-speed interconnects such as RDMA, InfiniBand, NVLink, and PCIe, with a focus on reducing latency and increasing bandwidth.
  • Implement and fine-tune GPU memory copy paths with awareness of NUMA topologies and hardware coherency.
  • Develop instrumentation and telemetry collection mechanisms to monitor GPU and memory performance without impacting runtime workloads.
  • Contribute to internal tools and libraries for GPU system introspection, profiling, and debugging.
  • Provide technical mentorship and peer reviews, and guide junior engineers on best practices for low-level GPU development.
  • Stay current with evolving GPU architectures, memory technologies, and industry standards.
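The telemetry bullet above hinges on bounded, low-overhead collection. A minimal Python sketch (class and names hypothetical; a real implementation would live in C/C++ on the driver side) uses a fixed-capacity ring buffer so that recording a sample never grows memory unboundedly or blocks the workload being measured:

```python
from collections import deque

class TelemetryRing:
    """Fixed-capacity sample buffer: recording overwrites the oldest
    entry instead of growing, so steady-state cost stays constant."""
    def __init__(self, capacity):
        self._samples = deque(maxlen=capacity)

    def record(self, value):
        self._samples.append(value)   # O(1); drops the oldest when full

    def snapshot(self):
        return list(self._samples)    # copy out for the reader side

ring = TelemetryRing(capacity=4)
for util in [10, 35, 60, 80, 95, 97]:   # e.g. GPU utilization samples
    ring.record(util)

print(ring.snapshot())  # [60, 80, 95, 97] -- only the newest 4 kept
```

Keeping the hot `record` path constant-time and allocation-free is what lets monitoring run alongside latency-sensitive GPU workloads.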

Minimum Qualifications:

  • 10+ years of experience in systems programming, ideally with 5+ years focused on CUDA/GPU driver and runtime internals.
  • 5+ years of experience with kernel-space development, ideally in Linux kernel modules, device drivers, or GPU runtime libraries (e.g., CUDA, ROCm, or OpenCL runtimes).
  • Experience working with NVIDIA GPU architecture, CUDA toolchains, and performance tools (Nsight, CUPTI, etc.).
  • Experience optimizing for NVLink, PCIe, Unified Memory (UM), and NUMA architectures.
  • Strong grasp of RDMA, InfiniBand, and GPUDirect technologies and their use in frameworks like UCX.
  • 8+ years of experience programming in C/C++ with low-level systems proficiency (memory management, synchronization, cache coherence).
  • Bachelor's degree in a STEM-related field.
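The synchronization and cache-coherence items above come down to ordered publication of data between threads. A Python analogue (illustrative only; the kernel-level equivalent is memory barriers and acquire/release ordering) writes the payload first and signals second, so the consumer can never observe the flag without also observing the data:

```python
import threading

payload = {}
ready = threading.Event()

def producer():
    payload["result"] = 42   # write the data first...
    ready.set()              # ...then publish (release-like signal)

def consumer(out):
    ready.wait()             # acquire-like: block until published
    out.append(payload["result"])

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t2.start(); t1.start()
t1.join(); t2.join()
print(out[0])  # 42
```

The same write-then-publish discipline, expressed with explicit fences or release/acquire atomics, is what keeps GPU copy paths correct on weakly ordered, NUMA-aware hardware.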

Preferred Qualifications

  • Deep understanding of HPC workloads, performance bottlenecks, and compute/memory tradeoffs.
  • Expertise in zero-copy memory access, pinned memory, peer-to-peer memory copy, and device memory lifetimes.
  • Strong understanding of multi-threaded and asynchronous programming models.
  • Familiarity with Python and AI frameworks such as PyTorch.
  • Familiarity with assembly or PTX/SASS for debugging or optimizing CUDA kernels.
  • Familiarity with NVMe storage offloads, IOAT/DPDK, or other DMA-based acceleration methods.
  • Familiarity with Valgrind, cuda-memcheck, gdb, and profiling with Nsight Compute/Systems.
  • Proficiency with perf, ftrace, eBPF, and other Linux tracing tools.
  • PhD is a plus, especially with research in GPU systems, compilers, or HPC.
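Several of the items above (asynchronous programming models, profiling with Nsight Systems) revolve around overlapping data movement with computation, as CUDA streams do. A toy asyncio sketch (function names hypothetical, Python for brevity) overlaps a simulated host-to-device copy with a simulated kernel launch:

```python
import asyncio

async def copy_h2d(chunk):
    await asyncio.sleep(0.01)   # stand-in for a DMA transfer
    return f"copied:{chunk}"

async def run_kernel(name):
    await asyncio.sleep(0.01)   # stand-in for kernel execution
    return f"done:{name}"

async def main():
    # Launch both concurrently, as two CUDA streams would overlap;
    # gather() returns results in argument order regardless of
    # which coroutine finishes first.
    return await asyncio.gather(copy_h2d("batch0"), run_kernel("gemm"))

results = asyncio.run(main())
print(results)  # ['copied:batch0', 'done:gemm']
```

Hiding transfer latency behind compute in this way is exactly what a profiler timeline makes visible when tuning real GPU pipelines.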