Key job responsibilities
- Design, develop, and implement advanced algorithms and machine learning models to improve the intelligence and effectiveness of our web crawler and content processing pipelines.
- Analyze and optimize crawling strategies to maximize the coverage, freshness, and quality of acquired data while minimizing operational costs, and dive deep into the data to select the highest-quality data for LLM training and grounding.
- Conduct in-depth research to stay at the forefront of web acquisition and processing.
- Monitor and analyze performance metrics, identifying opportunities for improvement and implementing data-driven optimizations.
Qualifications
- 3+ years of experience building models for business applications
- PhD, or Master's degree and 4+ years of experience, in CS, CE, ML, or a related field
- Experience programming in Java, C++, Python, or a related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
- Experience using Unix/Linux
- Experience in professional software development
- 2+ years of experience building web crawler, search, and/or natural language processing systems