What You Will Do
Implement new features and solutions for Red Hat AI and Edge products.
Explore deep code integration across various Red Hat products, ensuring optimal interoperability between the Red Hat portfolio, hardware accelerators, and partner solutions.
Integrate software that leverages hardware accelerators (e.g., DPUs, GPUs, AIUs) and perform performance analysis and optimization of AI workloads with accelerators.
Work with major AI and hardware partners such as NVIDIA, AMD, Dell, and others on building joint integrations and products.
Collaborate closely with UX, UI, QE, and cross-functional teams to deliver a great experience to Red Hat partners and customers.
Coordinate with team leads, architects, and other engineers on the design and architecture of our offerings.
Take responsibility for the quality of our offerings, participate in peer code reviews and continuous integration (CI), and respond to security threats.
What You Will Bring
4+ years of relevant technical experience in software development.
Advanced experience working in a Linux environment with at least one language such as Golang, Rust, Java, C, or C++.
Experience with the container orchestration ecosystem, such as Kubernetes or Red Hat OpenShift.
Experience with microservices architectures and concepts, including APIs, versioning, monitoring, etc.
Experience with AI/ML technologies, including foundational frameworks, large language models (LLMs), Retrieval Augmented Generation (RAG) paradigms, vector databases, and LLM orchestration tools.
Ability to quickly learn and guide others on using new tools and technologies.
Proven ability to innovate and a passion for staying at the forefront of technology.
Excellent system understanding and troubleshooting capabilities.
Autonomous work ethic, thriving in a dynamic, fast-paced environment.
Technical leadership acumen in a global team environment.
Proficient written and verbal communication skills in English.
The Following is Considered a Plus
Experience with cloud development for public cloud services (AWS, GCE, Azure).
Familiarity with virtualization, networking, or storage.
Background in DevOps or site reliability engineering (SRE).
Experience with hardware accelerators (e.g., GPUs, FPGAs) for AI workloads.
Recent hands-on experience with distributed computation, either at the end-user or infrastructure provider level.
Experience with performance analysis tools.
Experience with Linux kernel development.