

Job Summary
You’ll help accelerate the development of AI prototypes by ensuring seamless platform integration, CI/CD pipelines, and other critical infrastructure to enable high-speed experimentation and iteration.
What you’ll do
Platform Support and Optimization: Design and maintain scalable, secure, and efficient platforms to support AI Catalyst team initiatives, ensuring smooth integration of AI models and workflows.
Infrastructure Management: Provide expertise in Kubernetes and cloud platforms (GCP, AWS, Azure) for container orchestration, scalable deployments, and real-time operations.
Partner with the AI Catalyst team to identify bottlenecks, remove blockers, and optimize workflows for faster delivery of AI prototypes.
Technical Leadership: Lead the implementation of critical systems (APIs, orchestration, observability, deployment) to ensure speed, reliability, and maintainability.
Cross-Functional Collaboration: Work closely with engineering, product, and design teams to align technical priorities and drive impactful AI initiatives.
Mentorship: Guide and mentor engineers, fostering a culture of technical excellence, collaboration, and rapid execution.
Demonstrate proficiency in Kubernetes for container orchestration and scalable deployments.
Mentor senior engineers and contribute to a culture of technical excellence, velocity, and pragmatic decision-making
Proactively utilize AI-assisted development tools (e.g., GitHub Copilot, Cursor, Claude Code) for code generation, auto-completion, and intelligent suggestions to accelerate development cycles and enhance code quality.
Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.
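The Kubernetes responsibilities above can be pictured with a minimal Deployment manifest. Everything here is an illustrative placeholder (service name, image, replica count, resource figures), not a detail from this role:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-prototype                 # hypothetical service name
spec:
  replicas: 3                        # scale horizontally for experimentation load
  selector:
    matchLabels:
      app: ai-prototype
  template:
    metadata:
      labels:
        app: ai-prototype
    spec:
      containers:
      - name: model-server
        image: registry.example.com/ai-prototype:latest   # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests:                  # baseline scheduling guarantees
            cpu: "500m"
            memory: 1Gi
          limits:                    # cap per-pod consumption
            cpu: "1"
            memory: 2Gi
```

A manifest like this is what "container orchestration and scalable deployments" reduces to in practice: declare the desired replica count and resource envelope, and let the cluster reconcile toward it.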
What you’ll bring
10+ years of software engineering experience
Strong background in Python, plus experience with C, C++, Go, or Rust.
Proficiency in RHEL or other Linux distributions.
Communication Skills: Strong ability to communicate technical tradeoffs and bring clarity to ambiguous situations
Passion for AI Innovation: Enthusiasm for enabling AI initiatives that drive real-world impact and accelerate prototyping efforts.
Ability to move fast without compromising quality, thriving in environments where rapid iteration and high ownership are the norm
PoC Experience: Proven ability to work on and deliver successful Proof of Concepts or initiatives, showcasing the ability to rapidly prototype and validate ideas.
Nice to have
Experience with cloud platforms such as GCP, AWS, or Azure.
Experience with building and packaging Python projects, package managers (dnf, pip), and build systems (cmake, meson)
Experience working with upstream projects and Open Source communities.
Experience in early-stage product incubation or 0→1 product delivery
Contributions to internal AI platforms, model evaluation frameworks, or observability for AI systems
Previous experience with hardware acceleration, either general GPU programming or specific stacks such as CUDA and ROCm
Knowledge of AI frameworks, such as PyTorch and/or TensorFlow
Familiarity with containerization and orchestration
Understanding of Open Source development models
Experience with test-based development and agile/scrum methodologies
This is a rare opportunity to help shape how our company brings AI innovation to life - bridging research and real-world usage at a moment when speed, safety, and product quality matter most. If you're energized by rapid iteration, high autonomy, and making AI tangible for millions, we'd love to talk.

What you will do:
Examine new project opportunities, identify the right approach to meeting or exceeding the requirements for these projects and develop solutions with an eye toward quality, security, maintainability, supportability, performance and resilience
Work closely with Engineering, Product Management and Support stakeholders to prioritize features and bugs during all phases of development
Participate in the interaction with relevant hardware partners with a focus on getting key functionality included in their roadmap
Communicate architectural concepts and decisions to various audiences
Be a leader and mentor for more junior members of the team and help expand their skill sets
Participate in upstream AI/ML communities with a focus on learning more about the various technologies and how they might be used within our offerings
What you will bring:
Strong experience with RHEL or other Linux distributions
Strong experience with software development with programming languages such as Python, Go or similar
Problem solving and troubleshooting skills with a focus on root cause analysis
Experience with container technologies, such as Kubernetes/OpenShift and Podman
Hands-on learning and demonstrable experience with implementing and owning complex features individually and in collaboration with others
Nice to have:
Previous experience with hardware acceleration, either general GPU programming or specific stacks such as CUDA and ROCm
Knowledge of AI frameworks, such as PyTorch and/or TensorFlow
Familiarity with containerization and orchestration
Understanding of Open Source development models
Experience with test-based development and agile/scrum methodologies

What you will do:
Serve as the direct support for IBM customer inquiries about the Red Hat OpenShift Container Platform handed over from IBM product support teams.
Use IBM and Red Hat ticketing systems and support tools to assist customers directly.
Analyze issues to identify problems and communicate corrective actions and resolutions to customers.
Collaborate with support engineers, technical account managers, internal teams, and external parties during problem resolution.
Deliver exceptional customer experience by troubleshooting various issues and recommending solutions professionally and courteously.
Document diagnostic steps and create reusable solutions for future incidents
Perform weekend and holiday shift duties on a rotational schedule when needed.
What you will bring:
3+ years of experience working as a support or development engineer for a Platform-as-a-Service (PaaS) provider or hosting service
3+ years of experience working with Linux or Unix operating systems, including system installation, configuration, and maintenance; Red Hat Certified Engineer (RHCE) qualification is a big plus
Familiarity with technologies like Red Hat OpenShift Container Platform, Kubernetes, containers, IT automation, and cloud management
Experience working with hosted applications or large-scale application deployments
Good understanding of Linux tools, with an emphasis on curl, Git, strace, and Wireshark
Troubleshooting skills and a passion for problem-solving and investigation
Outstanding communication skills in English with the ability to communicate courteously and effectively with customers, colleagues, and third-party vendors
Ability to handle multiple priorities and work under pressure
Commitment to providing the best experience possible for Red Hat’s customers
The following are considered a plus:
Bachelor's degree in a technical field, preferably engineering or computer science
Experience with technologies like Open vSwitch, JBoss, Apache Tomcat, Go, Angular.js, Node.js, Ruby, Python web frameworks, and .NET framework
Experience with source code management tools
Knowledge of technical support systems and tools
Familiarity with Red Hat’s solutions portfolio and open-source software development
Participation in open-source projects, including patches submitted for upstream inclusion

About the Job
You will be responsible for evolving and delivering our product roadmap by collaborating with customers, engineering, marketing, support, and field teams. As the accountable party for driving solution delivery, you will oversee timelines, roadmap execution, and ongoing stakeholder engagement. You will also engage with the open source communities that support our container initiatives, including Kubernetes, Knative, KEDA, Shipwright, Cloud Native Buildpacks, and other Cloud Native Computing Foundation (CNCF) projects.
As a Principal Product Manager, you will have strong communication, teamwork and persuasion skills. This is a great opportunity to work on a fast-growing offering alongside some of the brightest minds in open source.
What You'll Do
Collect and document input from Red Hat OpenShift users, customers, community members and partners to understand customers’ needs; develop strategy and roadmap for OpenShift Serverless
Research competitive solutions, both commercial and do-it-yourself alternatives, documenting their relative strengths and weaknesses to develop competitive positioning and collect input for new releases
Prioritize and document requirements, epics and user stories for new releases of our offerings
Guide major enhancements of our offerings by working cross-functionally with core teams across our Engineering team and the upstream open source community
Work with the OpenShift Engineering team and the overall Product team to manage releases and updates of our offerings and bring new Red Hat OpenShift solutions to market
Work with our Sales teams to respond to customer inquiries; deliver customer presentations and demos and support the overall sales process
Support sales and marketing activities including creating presentations, blogs, demos and other technical collateral for our offerings
Review and provide feedback on the documentation for our offerings
Participate in technical aspects of go-to-market engagements like live streams, roadshows, workshops, webinars, demos of offerings and solutions, industry and partner events, sales enablement and training
What You'll Bring
5+ years of enterprise software industry experience working in product management, technical marketing or a similar technical product or customer-facing role
Strong understanding of serverless landscape and cloud-native application architectures, deployment and operations
Strong understanding of developer experience and design principles
Familiarity with Kubernetes and the Cloud Native Computing Foundation (CNCF)
Understanding of and experience with open source projects and communities
Proven track record of leading cross-functional teams to deliver impactful products
Ability to think strategically, influence cross-functional teams and manage without direct authority
Capacity to handle multiple competing priorities in a fast-paced environment
Excellent written and verbal communication skills
Ability to work in cross-functional environment with a distributed remote and global workforce
Understanding of technical challenges to drive well-informed decisions with the development team
The salary range for this position is $151,170.00 - $249,390.00. Actual offer will be based on your qualifications.
Pay Transparency
● Comprehensive medical, dental, and vision coverage
● Flexible Spending Account - healthcare and dependent care
● Health Savings Account - high deductible medical plan
● Retirement 401(k) with employer match
● Paid time off and holidays
● Paid parental leave plans for all new parents
● Leave benefits including disability, paid family medical leave, and paid military leave

We are looking for a Senior Machine Learning Research Engineer with a strong research background and hands-on experience in building and optimizing deep learning models. In this role, you will explore and develop cutting-edge techniques in model compression, including pruning, quantization, knowledge distillation, and speculative decoding. You will help design and evaluate novel algorithms that bridge theory and real-world deployment.
Your Role and Responsibilities
As a core member of our ML research team, you will:
Design and conduct experiments to evaluate model compression strategies for large-scale deep learning models.
Develop scalable and modular research code in Python.
Work closely with software engineers and product teams to translate research into deployable systems.
Explore emerging techniques in efficient inference and help define future directions for model optimization.
Collaborate on publications in top-tier ML/AI conferences and contribute to open-source initiatives.
Benchmark models across hardware configurations, contributing to the broader understanding of how model optimizations affect performance in real-world deployment scenarios.
Participate in reading groups, internal workshops, and mentoring activities.
Required Qualifications
PhD in Machine Learning, Computer Science, Electrical Engineering, Applied Mathematics, or a related field.
Strong foundation in machine learning algorithms and numerical optimization.
Proficiency in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
Strong analytical and problem-solving skills.
Experience with experimental design and empirical research, including model evaluation and benchmarking.
Excellent written and verbal communication skills, including the ability to explain complex ideas to a technical audience.
Preferred Qualifications
Familiarity with model compression techniques such as quantization, pruning, knowledge distillation, or speculative decoding.
Experience contributing to open-source machine learning projects.
Experience optimizing model performance for inference efficiency, particularly on GPUs or specialized accelerators.
Publication record in top-tier conferences (e.g., NeurIPS, ICML, ICLR, CVPR).
Comfortable navigating large codebases and collaborating in a research-oriented engineering team.
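As a rough illustration of one compression technique named above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization in plain Python. It is illustrative only, under simplifying assumptions (per-tensor scale, no calibration data), and is not a description of this team's actual pipeline:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [scale * v for v in q]

weights = [0.41, -1.27, 0.03, 0.88]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-element reconstruction error is bounded by scale / 2.
```

The storage win is the point: each weight drops from 32 bits to 8, at the cost of a rounding error bounded by half the scale, which is exactly the accuracy/efficiency trade-off that pruning, distillation, and speculative decoding attack from other directions.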
What We Offer
A dynamic and intellectually stimulating environment with opportunities to shape the future of efficient ML systems.
A collaborative team that values curiosity, creativity, and impact.
Support for academic engagement (publishing, conference travel, workshops).
Access to high-performance computing resources and state-of-the-art ML infrastructure.
Comprehensive benefits, flexible work arrangements, and opportunities for career growth.
The salary range for this position is $170,770.00 - $281,770.00. Actual offer will be based on your qualifications.

Job Responsibilities
Work with Red Hat engineers and research project teams to develop, test, deploy and operate software for distributed research environments built with OpenShift, OpenStack, OpenShift AI, InstructLab and other open source software.
Work with Red Hat product development teams to explore and help transition selected new functionality into supported products
Develop, deploy, upgrade, monitor and troubleshoot software in research environments such as the Mass Open Cloud Alliance, as well as other university research computing environments in North America
Identify, track and resolve issues as part of a worldwide development team analyzing distributed systems and data using GitOps techniques and tools
Contribute software to open source projects to help advance research computing
As part of the CTO office, write, speak and promote software development research projects, as well as student-oriented development and education activities such as hackathons, tutorials and independent student projects.
Requirements
Software development experience with multiple programming languages (C++, Python, Go)
Experience with software development for distributed systems and AI systems, particularly accelerators, virtual machines and containers
Deep expertise in at least one broad technical area (e.g. operating systems), with a demonstrated understanding of subsystems and their interactions in real-world use
Ability to decompose large complex systems and development tasks and work as a technical leader in a distributed team to release new functionality and resolve issues with deployed systems
Experience maintaining and contributing to Linux software (Red Hat Enterprise Linux (RHEL), CentOS, or Fedora preferred)
Detailed understanding of Agile software development processes
Detailed knowledge of development tools, repository management, and automation and CI/CD platforms such as Ansible
Experience working with users and design engineers in a research or production computing environment
Demonstrated ability to work with independence on software design and implementation, while providing technical leadership and some mentoring to a larger team of developers and system engineers
Good oral and written communication skills
PhD, Master’s or Bachelor’s degree, with work or academic project experience
The salary range for this position is $111,260.00 - $183,580.00. Actual offer will be based on your qualifications.

What you will do
As a software engineer in this role, you will
Develop and implement best practices for AI/ML model lifecycle management, including pre-processing, model training, serving, and monitoring
Deploy AI/ML models on OpenShift AI (RHOAI), ensuring scalability, reliability, and performance.
Work with upstream AI/ML communities to evaluate new AI/ML-related technologies from partners and create examples of integrations between their technology and RHOAI
Build multi-product demos and AI/ML workflows using predictive and generative AI, leveraging the Red Hat product and AI stack
Collaborate with multiple stakeholders, including cross-product teams and AI/ML partners, to adjust their AI strategies, address their specific use cases, and drive value through the adoption of RHOAI
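The lifecycle stages named above (pre-processing, training, serving, monitoring) can be sketched end to end. This is a toy pure-Python stand-in, not a real RHOAI pipeline; every function name and the linear model itself are illustrative:

```python
# Toy model lifecycle: preprocess -> train -> serve -> monitor.

def preprocess(xs):
    """Pre-processing: center the feature so the fit is well-conditioned."""
    mean = sum(xs) / len(xs)
    return [x - mean for x in xs], mean

def train(xs, ys):
    """Training: closed-form least squares for y = a*x + b on centered x."""
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    b = sum(ys) / len(ys)
    return a, b

def serve(model, x, x_mean):
    """Serving: apply the same pre-processing to the request, then predict."""
    a, b = model
    return a * (x - x_mean) + b

def monitor(model, xs, ys, x_mean):
    """Monitoring: mean absolute error on fresh observations."""
    preds = [serve(model, x, x_mean) for x in xs]
    return sum(abs(p - y) for p, y in zip(preds, ys)) / len(ys)

xs_raw, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
xs, x_mean = preprocess(xs_raw)
model = train(xs, ys)
mae = monitor(model, xs_raw, ys, x_mean)
```

The structural point carries over to real systems: serving must reuse the exact pre-processing fitted at training time (here, `x_mean`), and monitoring closes the loop by scoring the deployed model against live data.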
What you will bring
Experience in development in Python and Go
Understanding of fundamental AI/ML concepts, algorithms, techniques and implementation of workflows
Familiarity with DevOps/MLOps practices and tools for managing the AI/ML lifecycle in production environments.
Interest in learning new technologies and tools around the AI/ML landscape; problem-solving skills.
Good written and verbal communication skills
The following are considered a plus
Knowledge of containers and OpenShift or Kubernetes and cloud platforms (AWS, Azure, Google Cloud)
Previous code contributions to or participation in open source projects or code samples on GitHub.
Understanding of fundamental AI/ML concepts, algorithms, and techniques.
Basic knowledge of data preprocessing, feature engineering, and model evaluation.
Knowledge of AI frameworks and libraries (e.g., OpenDataHub, TensorFlow, PyTorch, Kueue, KubeRay, Kubeflow, CodeFlare, Feast, etc.)
You’re willing to wear a lot of red -OR- You look good in a red t-shirt
The salary range for this position is $163,420.00 - $269,640.00. Actual offer will be based on your qualifications.