What you'll be doing:
Explore high-level, undefined ideas and solve real-life problems using structured and unstructured data.
Craft proofs of concept rooted in first principles that apply modern data science techniques to operational use cases.
Collaborate in a multi-disciplinary environment with domain experts in fields such as networking, high-performance computing for AI, and telemetry.
Develop a strategic vision for NVIDIA networking together with adjacent architects and research groups.
Define the data pipelines and ML architecture for SaaS products that handle hyperscale data problems.
Support software developers in migrating prototypes to end-to-end pipelines suitable for deployment in production environments.
What we need to see:
M.Sc. or Ph.D. in Science or Engineering
12+ years of relevant experience
Proven, excellent industry experience in data science or machine learning, with a variety of ML/DL algorithms and their applications
Consistent record of staying ahead of the technology envelope: understanding pioneering research, dabbling in new technologies to develop practical applications, and generating innovative ideas
Great motivation, strong interpersonal skills, and the ability to communicate highly technical concepts to non-technical audiences
"Can do attitude" - ability to succeed in ambiguous settings where part of the challenge is to define it.
Strong programming skills in Python (including unit-tests, CI&CD etc), as well as comfort using Linux and typical development tools (e.g., GitHub, Docker)
Experience with large-scale data systems (on-prem and/or cloud).
Proficiency in deep learning frameworks.
Ways to stand out from the crowd:
Past senior technical roles such as principal data scientist, team leader, tech lead, or head of ML in a startup.
Publications in peer-reviewed journals or conferences.
Previous real-world experience developing models for anomaly detection, predictive forecasting, and root-cause-analysis use cases.
Experience developing and deploying ML pipelines at large scale (TB+).
Beyond supervised learning: optimization using reinforcement learning and adaptive experimentation.
Experience with the ML deployment lifecycle, including model monitoring and retraining.
Experience with networking, cloud, data-center, and edge computing technologies.