Key job responsibilities
Design and develop scalable web crawling and data extraction systems to acquire structured data from websites (see the extraction sketch below this list)
Build automated data pipelines and insights using big data frameworks (e.g., Spark) to acquire petabytes of data and visualize important KPIs that inform technical direction (see the pipeline sketch below this list)
Optimize web crawling architecture for performance, scale, resilience and cost efficiency
Implement robust systems to process high volumes of web content and extract meaning
Develop data pipelines and infrastructure to support petabyte-scale datasets
Work closely with scientists and other engineers to rapidly prototype and deploy new algorithms
Write high-quality, well-tested production code in languages such as Python, Java, and Scala, using frameworks like Spark
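
For context on the crawling and extraction work above, here is a minimal Python sketch of fetching a page and pulling out structured fields. The URL, User-Agent string, and extracted fields are illustrative assumptions, not this team's actual system.

    import requests
    from bs4 import BeautifulSoup

    def extract_page(url: str) -> dict:
        # Fetch the page; the timeout and User-Agent are illustrative choices.
        resp = requests.get(url, timeout=10, headers={"User-Agent": "example-crawler/0.1"})
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Extract a few structured fields: page title and outbound links.
        return {
            "url": url,
            "title": soup.title.get_text(strip=True) if soup.title else None,
            "links": [a["href"] for a in soup.find_all("a", href=True)],
        }

    if __name__ == "__main__":
        print(extract_page("https://example.com"))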
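And a minimal PySpark sketch of the kind of batch KPI aggregation the pipeline bullet describes. The input and output paths and the column names (fetched_at, domain, bytes_fetched) are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("crawl-kpis").getOrCreate()

    # Hypothetical crawl-log dataset; a production pipeline would read far larger inputs.
    logs = spark.read.parquet("s3://example-bucket/crawl-logs/")

    # Aggregate per-day, per-domain KPIs: pages fetched and bytes downloaded.
    kpis = (
        logs.groupBy(F.to_date("fetched_at").alias("day"), "domain")
            .agg(
                F.count("*").alias("pages_fetched"),
                F.sum("bytes_fetched").alias("total_bytes"),
            )
    )

    kpis.write.mode("overwrite").parquet("s3://example-bucket/crawl-kpis/")
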
Basic qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship experience designing or architecting new and existing systems (design patterns, reliability, and scaling)
Experience programming with at least one programming language
3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent

Pursuant to the Los Angeles Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.