About the Role
You’ll join the broader Search and AI Platform organization and collaborate with ML researchers and engineers from our Voyage.ai acquisition. Together, we’re building infrastructure for real-time, low-latency, and high-scale inference — fully integrated with Atlas and designed for developer-first experiences.
As a Senior Engineer, you'll focus on building core systems and services that power model inference at scale. You'll own key components of the infrastructure, work across teams to ensure tight integration with Atlas, and contribute to a platform designed for reliability, performance, and ease of use.
What You’ll Do
- Design and build components of a multi-tenant inference platform integrated directly with MongoDB Atlas, supporting semantic search and hybrid retrieval
- Collaborate with AI engineers and researchers to productionize inference for embedding models and rerankers — enabling both batch and real-time use cases
- Contribute to platform capabilities such as latency-aware routing, model versioning, health monitoring, and observability
- Improve performance, autoscaling, GPU utilization, and resource efficiency in a cloud-native environment
- Work across product, infrastructure, and ML teams to ensure the inference platform meets the scale, reliability, and latency demands of Atlas users
- Gain hands-on experience with serving tools such as vLLM and with container orchestration on Kubernetes
Who You Are
- 5+ years of experience building backend or infrastructure systems at scale
- Strong software engineering skills in languages such as Go, Rust, Python, or C++, with an emphasis on performance and reliability
- Experienced in cloud-native architectures, distributed systems, and multi-tenant service design
- Familiar with concepts in ML model serving and inference runtimes, even if not directly deploying models
- Knowledge of vector search systems (e.g., Faiss, HNSW, ScaNN) is a plus
- Comfortable working cross-functionally with ML researchers, backend engineers, and platform teams
- Motivated to work on systems integrated into MongoDB Atlas and used by thousands of developers
Nice to Have
- Experience integrating infrastructure with production ML workloads
- Understanding of hybrid retrieval, prompt-driven systems, or retrieval-augmented generation (RAG)
- Contributions to open-source infrastructure for ML serving or search
Why Join Us
- Be part of building the AI foundation of the world’s most popular developer data platform
- Collaborate with ML researchers from Voyage.ai to bring novel ideas into scalable systems
- Tackle challenging problems in inference, observability, and distributed infrastructure
- Work in a culture that emphasizes mentorship, ownership, and technical excellence