You will have a track record of success in delivering new products, solving problems, and learning new technologies quickly. A commitment to teamwork, a proactive approach to problem solving, and strong verbal and written communication skills are essential. Creating reliable, scalable, and high-performance products requires technical expertise, an understanding of computer science fundamentals, and practical experience building efficient large-scale systems. You should be comfortable delivering quality solutions in a fast-growing environment where priorities may change rapidly.
Key job responsibilities
- Design and implement high-throughput, cost-effective data pipelines to extract, transform, and load (ETL) data and facts from structured and semi-structured knowledge sources.
- Write high-quality, well-tested production code in languages such as Java and Python. Knowledge of Spark and Scala is a plus.
- Work closely with scientists and other engineers to develop state-of-the-art streaming algorithms that process large datasets in real time, including tasks such as deduplication, topic clustering, and entity resolution.
- Build, extend, and maintain an existing codebase while also designing and developing new software components.
- Participate in prioritization, estimation, and sprint planning. Work in an Agile/Scrum environment to deliver high-quality software against aggressive schedules.
Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience designing or architecting new and existing systems (design patterns, reliability, scaling)
- Experience programming in at least one programming language
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent