Minimum qualifications:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
4 years of experience in developing and troubleshooting data processing algorithms.
Experience coding in one or more programming languages (e.g., Java, Python) and with Big Data technologies such as Scala, Spark, and Hadoop frameworks.
Experience with a public cloud provider, such as Google Cloud Platform (GCP).
Preferred qualifications:
Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments.
Experience in Big Data, information retrieval, data mining, or machine learning.
Experience with data warehouses, including technical architectures, infrastructure components, ETL/ELT (Extract, Transform, Load / Extract, Load, Transform) pipelines, reporting/analytic tools, environments, and data structures.
Experience building multi-tier applications with modern technologies such as NoSQL databases (e.g., MongoDB), Spark ML, and TensorFlow.
Experience with Infrastructure as Code (IaC) and Continuous Integration/Continuous Deployment (CI/CD) tools such as Terraform, Ansible, and Jenkins.
Understanding of at least one database type, with the ability to write complex SQL queries.