Minimum qualifications:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
3 years of experience in developing and troubleshooting data processing algorithms.
Experience coding in one or more programming languages (e.g., Java, Python) and with big data technologies such as Scala, Spark, and the Hadoop ecosystem.
Preferred qualifications:
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytics tools, environments, and data structures.
Experience with Infrastructure as Code (IaC) and CI/CD tools such as Terraform, Ansible, and Jenkins.
Experience building multi-tier, high-availability applications with modern technologies such as NoSQL databases (e.g., MongoDB), Spark ML, and TensorFlow.
Experience in big data, information retrieval, data mining, or machine learning.
Experience architecting and developing internet-scale, production-grade big data solutions in virtualized environments.
Experience with database systems, including the ability to write complex SQL queries, and with a public cloud provider (e.g., Google Cloud Platform (GCP)).