What you'll be doing:
Work in an agile, fast-paced global environment to gather requirements and architect, design, implement, test, deploy, release, and support large-scale distributed systems infrastructure with monitoring, logging, visualization, and alerting capabilities that meet uptime commitments
Build internal profiling tools for real-world ML/DL applications running on HPC GPU clusters, enabling failure and efficiency analysis that helps improve current and future generations of GPU clusters and associated hardware
Stay current with state-of-the-art advances in the ML/DL domain, and work with application owners and research teams to add or improve profiling support for current and potential future features
What we need to see:
BS or higher in Computer Science or a related field (or equivalent experience) and 5+ years of software development experience (in Python)
Experience with GitLab (or another source code management system), including branch/release workflows, CI/CD pipelines, etc.
Solid understanding of algorithms, data structures, and runtime/space complexity
Experience working with distributed system software architecture
Basic understanding of HPC GPU clusters and Slurm
Basic understanding of machine learning concepts and terminology
Background with SQL and NoSQL databases (Prometheus, Elasticsearch, OpenSearch, Redis, etc.)
Experience with distributed data pipelines, telemetry, visualization (Kibana, Grafana, etc.), and alerting (PagerDuty, etc.)
Ways to stand out from the crowd:
Experience debugging functional and performance issues in HPC GPU clusters
Background in running and instrumenting distributed LLM training on a multi-GPU HPC cluster
Knowledge of LLM training features and libraries, such as checkpointing, parallelism, PyTorch, Megatron-LM, and NCCL
Experience with HPC schedulers such as Slurm
Background with OpenTelemetry
You will also be eligible for equity and benefits.