Key job responsibilities
- Design, develop, and implement advanced algorithms and machine learning models that improve the intelligence and effectiveness of our web crawler and content processing pipelines.
- Analyze and optimize crawling strategies to maximize the coverage, freshness, and quality of acquired data while minimizing operational costs, and dive deep into the data to select the highest-quality data for LLM model training and grounding.
- Conduct in-depth research to stay at the forefront of web acquisition and processing.
- Monitor and analyze performance metrics, identifying opportunities for improvement and implementing data-driven optimizations.
Basic qualifications
- 3+ years of experience building machine learning models for business applications
- PhD, or Master's degree and 6+ years of applied research experience
- Experience programming in Java, C++, Python, or a related language
- Experience with deep learning methods and machine learning
Preferred qualifications
- Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
- Experience with large-scale distributed systems such as Hadoop, Spark, etc.