Minimum qualifications:
Bachelor's degree in Science, Technology, Engineering, Mathematics, or equivalent practical experience.
Experience in solution engineering, as well as in stakeholder management, professional services, or technical consulting.
Experience in developing and troubleshooting data processing algorithms and software in Python, Java, or Scala using Spark and Hadoop frameworks.
Experience in migrating databases using established migration methodologies and distributed data processing frameworks.
Experience with databases and SQL.
Preferred qualifications:
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, reporting/analytic tools and environments, and data structures.
Experience in big data, information retrieval, data mining, or machine learning.
Experience architecting and developing software or internet-scale, production-grade big data solutions in virtualized environments.
Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL databases (e.g., MongoDB), SparkML, and TensorFlow.
Experience with Infrastructure as Code (IaC) and CI/CD tools such as Terraform, Ansible, and Jenkins.