We're looking for candidates who combine strong software engineering fundamentals with practical system development experience. You'll need to demonstrate expertise in building scalable, fault-tolerant distributed systems, with a track record of shipping production services that handle large-scale workloads. We prioritize candidates who understand professional software engineering practices across the full development life cycle: from system design and coding standards to testing, deployment, and operational excellence.
Key job responsibilities
- Develop efficient data processing pipelines to handle large-scale training and inference data
- Support experimentation and A/B testing infrastructure to evaluate model improvements
- Participate in code reviews, technical design discussions, and sprint planning to ensure high quality software delivery
- Develop and optimize LLM-assisted tools that revolutionize knowledge graph creation, from automated ontology generation to real-time fact extraction and verification
- Release and maintain ML model infrastructure to enable high-throughput, low-latency inference in production environments
Basic qualifications
- 3+ years of non-internship professional software development experience
- Bachelor's degree in computer science or equivalent
- 2+ years of non-internship experience designing or architecting new and existing systems (design patterns, reliability, and scaling)
- Knowledge of professional software engineering and best practices for the full software development life cycle, including coding standards, software architecture, code reviews, source control management, continuous deployment, testing, and operational excellence
- Experience with data processing and ETL pipelines at scale (e.g., Spark, AWS Glue, Kafka)