

You will contribute to building robust, scalable systems that support a wide range of applied AI initiatives. These may include integrating LLMs via APIs, supporting internal ML models, working with AI agent frameworks, building data pipelines, and developing tools to improve AI reliability, observability, and deployment workflows. This is not a research or model training role. It is a practical engineering position focused on enabling and scaling real-world AI applications through solid infrastructure and backend development.
What you’ll be doing:
Build AI-powered tools that enhance operational excellence by leveraging diverse operational data to support incident, change, and problem management workflows
Collaborate closely with Incident Commanders, incident response, and SRE teams to integrate AI-driven automation and analytics into operational workflows
Design, develop, and maintain backend systems and infrastructure using Go and Python to support internal AI tools and intelligent agents
Build and maintain data pipelines using PySpark and related tools to support AI and analytics workflows
Operate and scale infrastructure using Kubernetes, managing containerized AI services and automating pipeline deployments
Work with vector databases to enable semantic search and retrieval-augmented generation use cases
Integrate large language models, agent systems, and classical ML models into internal services and automation workflows
Improve observability, deployment automation, and system reliability for AI-driven services
What we need to see:
8+ years of software engineering experience, with deep expertise in backend systems and infrastructure
Bachelor's degree or equivalent experience
Strong proficiency in Python and Go, with a track record of delivering reliable, scalable software solutions
Experience designing scalable, maintainable backend systems and writing clear design documentation
Deep experience with Kubernetes and cloud-native infrastructure
Experience building, deploying, and maintaining ML models in production systems
Familiarity with AI agent frameworks or orchestration tools
Solid understanding of system observability, monitoring, and performance optimization
Strong collaboration and communication skills to work across remote teams
Ways to stand out from the crowd:
Proven experience delivering AI-driven features end-to-end, from infrastructure design to production deployment
Experience with Temporal, Argo, or other workflow engines for multi-step or async job orchestration
Familiarity with vector databases and their applications in AI and LLM-powered systems
Contributions to open-source projects related to AI infrastructure, backend development, or developer tooling
You will also be eligible for equity and benefits.