
Intel AI Compiler Engineer 
United States, Texas 
Job ID: 302622704


Compiler Development and Optimization

  • Design and implement MLIR-based compiler passes for lowering, optimization, and code generation (a minimal pattern sketch follows this list)
  • Build domain-specific dialects to represent compute kernels at multiple abstraction levels
  • Develop performance-tuned transformation pipelines targeting vectorization, parallelization, and memory locality
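
For illustration only, a single rewrite pattern of the kind such passes are built from might look like the sketch below. It is written against the upstream MLIR C++ API as an assumption: header paths and accessor names differ across LLVM versions, and the transformation shown (folding x + 0 into x on arith.addi) is just an example, not code from this role.

    // Minimal MLIR rewrite-pattern sketch (illustrative): fold "x + 0" -> "x".
    // Assumes a recent upstream MLIR; exact headers and accessors vary by version.
    #include "mlir/Dialect/Arith/IR/Arith.h"
    #include "mlir/IR/Matchers.h"
    #include "mlir/IR/PatternMatch.h"

    namespace {
    struct FoldAddZero : mlir::OpRewritePattern<mlir::arith::AddIOp> {
      using mlir::OpRewritePattern<mlir::arith::AddIOp>::OpRewritePattern;

      mlir::LogicalResult
      matchAndRewrite(mlir::arith::AddIOp op,
                      mlir::PatternRewriter &rewriter) const override {
        // If the right-hand operand is a constant zero, the add is a no-op:
        // replace the result with the left-hand operand.
        if (mlir::matchPattern(op.getRhs(), mlir::m_Zero())) {
          rewriter.replaceOp(op, op.getLhs());
          return mlir::success();
        }
        return mlir::failure();  // pattern does not apply to this op
      }
    };
    } // namespace

In practice a pattern like this would be registered with a pass and applied through the greedy pattern rewrite driver; that wiring is omitted here for brevity.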

High-Performance Kernel Generation

  • Generate and optimize kernels for linear algebra, convolution, and other math-intensive primitives (a simplified tiling sketch follows this list)
  • Ensure cross-target portability while achieving performance close to hand-tuned code
  • Collaborate with hardware teams to integrate backend-specific optimizations
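
As a concrete, simplified illustration of the loop tiling such kernels rely on, a cache-blocked matmul micro-kernel might look like the C++ sketch below. The tile size of 64 is an assumed tuning parameter, and real generated kernels would add packing, vector intrinsics, and target-specific scheduling on top of this structure.

    // Cache-blocked matrix multiply, C += A * B, for square N x N row-major
    // matrices. Tiling keeps each working set resident in cache, and the
    // contiguous inner j-loop is written so a vectorizing compiler can emit SIMD.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    void matmul_tiled(const std::vector<float>& A, const std::vector<float>& B,
                      std::vector<float>& C, std::size_t N) {
      constexpr std::size_t T = 64;  // assumed tile size; production kernels auto-tune this
      for (std::size_t ii = 0; ii < N; ii += T)
        for (std::size_t kk = 0; kk < N; kk += T)
          for (std::size_t jj = 0; jj < N; jj += T)
            // Micro-kernel over one (ii, kk, jj) tile.
            for (std::size_t i = ii; i < std::min(ii + T, N); ++i)
              for (std::size_t k = kk; k < std::min(kk + T, N); ++k) {
                const float a = A[i * N + k];
                for (std::size_t j = jj; j < std::min(jj + T, N); ++j)
                  C[i * N + j] += a * B[k * N + j];
              }
    }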

Performance Engineering

  • Profile generated code and identify performance bottlenecks across architectures
  • Implement optimizations for cache utilization, prefetching, and scheduling (a prefetching sketch follows this list)
  • Contribute to auto-tuning strategies for workload-specific performance
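
As one small example of the memory-hierarchy work described above, explicit software prefetching in a streaming loop might be sketched as follows. __builtin_prefetch is the GCC/Clang builtin; the prefetch distance of 16 elements is an assumed tuning parameter that would normally be chosen per target.

    // Streaming reduction with software prefetch: request data a fixed distance
    // ahead of the current index so loads overlap with computation.
    #include <cstddef>

    float sum_with_prefetch(const float* data, std::size_t n) {
      constexpr std::size_t kDistance = 16;  // assumed prefetch distance, in elements
      float acc = 0.0f;
      for (std::size_t i = 0; i < n; ++i) {
        if (i + kDistance < n)
          __builtin_prefetch(&data[i + kDistance], /*rw=*/0, /*locality=*/1);
        acc += data[i];
      }
      return acc;
    }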

Collaboration and Research

  • Work closely with ML researchers, system architects, and runtime engineers to co-design kernel generation strategies
  • Stay up to date with developments in MLIR, LLVM, and compiler technologies
  • Publish work or contribute to open-source MLIR/LLVM communities where appropriate

Qualifications:

Minimum qualifications are required to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.

Minimum Qualifications:

Bachelor's degree and 7+ years of experience, OR Master's degree and 4+ years of experience, OR PhD and 2+ years of experience. The degree should be in Computer Science, Computer Engineering, Software Engineering, or a related field.

The experience must include:

  • Compiler design and optimization (MLIR, LLVM, or equivalent)
  • Code generation and transformation passes
  • High-performance computing techniques: vectorization, loop optimizations, polyhedral transformations, and memory hierarchy optimization
  • Familiarity with machine learning workloads (e.g., matrix multiplications, convolutions)

Preferred Qualifications:

  • Hands-on experience extending MLIR dialects or contributing to the MLIR ecosystem
  • Background in GPU programming models (CUDA, ROCm, SYCL) or AI accelerators
  • Knowledge of numerical linear algebra libraries (BLAS, cuDNN, MKL) and their performance characteristics
  • Experience with auto-tuning frameworks (e.g., TVM, Halide, Triton)
  • Track record of publications, patents, or contributions to open-source compiler projects

Experienced Hire
Shift 1 (United States of America)
US, Oregon, Hillsboro
US, California, San Jose
Position of Trust

We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock, and bonuses, as well as benefit programs that include health, retirement, and vacation.

Annual Salary Range for jobs which could be performed in the US:

This role will require an on-site presence.

* Job posting details (such as work model, location, or time type) are subject to change.