What you’ll do:
• Build data-driven platforms and capabilities to power Personalization experiences across site, app, stores, and voice commerce.
• Build systems and workflows to process and manage petabyte-scale feature data.
• Collaborate with members of the technical staff to deliver end-to-end scalable systems for cross-functional projects.
• Work closely with business and product stakeholders to deliver on the strategy, vision, and roadmap for top initiatives in Personalization and Recommendations.
• Actively keep pace with emerging technologies in the data space and present technical solutions covering architecture, design, implementation details, and customer- and business-impacting KPIs.
• Actively contribute to the research community through participation in conferences, seminars, and workshops.
What you’ll bring:
• You have experience building large-scale distributed systems that process large volumes of data, with a focus on scalability, latency, and fault tolerance.
• You have knowledge of complex software design, distributed system design, design patterns, data structures, and algorithms.
• You have experience building systems that orchestrate and execute complex big-data workflows leveraging Apache Spark, Apache Kafka, and the Hadoop stack, preferably on Google Cloud Platform.
• You have experience evaluating and fine-tuning systems for speed, robustness, and cost efficiency.
• You have experience designing features and models from structured and unstructured data.
• You have experience building datasets, tools, and services that support big data and analytics operations.
• You have experience with relational SQL and NoSQL databases such as Cassandra, Azure SQL, and Cosmos DB.
• You are proficient in Java or Scala, Python, shell scripting, HQL, and SQL.
• You have experience with distributed version control systems such as Git.
• You are familiar with continuous integration/deployment processes and tools such as Jenkins and Maven.
• Bachelor's degree in Computer Science or a related field and 6+ years of industry experience, or Master's degree in Computer Science or a related field and 2+ years of industry experience.
• 2+ years of experience building large-scale data pipelines using big data technologies such as Apache Spark, Apache Kafka, Cascading, Apache Hive, and the Hadoop stack.
• 2+ years of hands-on experience with Java or Scala, Python, Bash, and SQL.
Preferred Qualifications:
• Experience building large-scale distributed systems with scalability and fault tolerance.
• Experience building systems leveraging tools available in Google Cloud Platform.
• Experience deploying and maintaining production environments across multiple clouds.
• Experience in debugging, performance tuning, automation, and optimization for scalability and high availability.
Benefits: Beyond our great compensation package, you can receive incentive awards for your performance. Other perks include a 401(k) match, a stock purchase plan, paid maternity and parental leave, PTO, multiple health plans, and much more.