Minimum qualifications:
Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
6 years of experience developing and troubleshooting data processing algorithms and software using Python, Java, Scala, Spark, and Hadoop frameworks (a minimal PySpark sketch follows this list).
Experience with SQL coding across standard commercial databases (e.g., Teradata, MySQL), including subqueries, multi-table joins, and multiple join types (see the SQL sketch after this list).
Experience with distributed data processing frameworks and modern investigative and transactional data stores.
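As a concrete illustration of the distributed-processing items above, here is a minimal PySpark word count. This is a sketch only: the dataset and app name are invented for the example, and it assumes the pyspark package is installed.

    # Minimal PySpark word count (assumes `pyspark` is installed).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

    # Parallelize a tiny in-memory dataset across the cluster (or local cores).
    lines = spark.sparkContext.parallelize(["the quick brown fox", "the lazy dog"])

    counts = (
        lines.flatMap(lambda line: line.split())   # split lines into words
             .map(lambda word: (word, 1))          # pair each word with a count
             .reduceByKey(lambda a, b: a + b)      # sum counts per word
    )
    print(sorted(counts.collect()))  # [('brown', 1), ..., ('the', 2)]
    spark.stop()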
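For the SQL item, a small sketch combining a multi-table LEFT JOIN with a scalar subquery, run through Python's built-in sqlite3 module; the tables and figures are invented for the example.

    # Join two tables and filter with a scalar subquery (in-memory SQLite).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
        INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 40.0);
    """)

    # Customers whose total spend exceeds the average order value.
    rows = conn.execute("""
        SELECT c.name, SUM(o.total) AS spend
        FROM customers c
        LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id
        HAVING SUM(o.total) > (SELECT AVG(total) FROM orders)
    """).fetchall()
    print(rows)  # [('Ada', 200.0)]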
Preferred qualifications:
Experience with data warehouses, including technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures (a minimal ETL sketch follows this list).
Experience in Big Data, information retrieval, data mining, or machine learning, including building applications with NoSQL databases (e.g., MongoDB), Spark ML, and TensorFlow (see the model-fitting sketch after this list).
Experience architecting and developing internet-scale, production-grade Big Data solutions.
Experience with cryptographic techniques such as symmetric and asymmetric encryption, HSMs, and envelope encryption, and with implementing secure key storage using a Key Management System (see the envelope-encryption sketch after this list).
Experience with Infrastructure as Code (IaC) and CI/CD tools such as Terraform, Ansible, and Jenkins.
Knowledge of distributed data processing frameworks and modern investigative and transactional data stores, with the ability to write complex SQL queries.
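For the ETL/ELT item, a minimal extract-transform-load sketch in plain Python; the schema and figures are illustrative assumptions, not a specific warehouse design.

    # Extract rows from CSV text, transform into an aggregate, load into SQLite.
    import csv, io, sqlite3

    raw = "user_id,amount\n1,19.25\n2,5.00\n1,3.50\n"

    # Extract: parse the source records.
    records = list(csv.DictReader(io.StringIO(raw)))

    # Transform: aggregate spend per user.
    totals = {}
    for r in records:
        uid = int(r["user_id"])
        totals[uid] = totals.get(uid, 0.0) + float(r["amount"])

    # Load: write the derived table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE user_spend (user_id INTEGER PRIMARY KEY, total REAL)")
    conn.executemany("INSERT INTO user_spend VALUES (?, ?)", totals.items())
    print(conn.execute("SELECT * FROM user_spend ORDER BY user_id").fetchall())
    # [(1, 22.75), (2, 5.0)]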
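For the machine-learning item, a small model-fitting sketch with TensorFlow/Keras that recovers a linear relationship from synthetic data; the data, architecture, and hyperparameters are illustrative assumptions.

    # Fit y = 3x + 1 with a single Dense unit (assumes `tensorflow` is installed).
    import numpy as np
    import tensorflow as tf

    # Synthetic regression data: y = 3x + 1 plus a little noise.
    x = np.random.rand(256, 1).astype("float32")
    y = 3.0 * x + 1.0 + 0.05 * np.random.randn(256, 1).astype("float32")

    model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5), loss="mse")
    model.fit(x, y, epochs=200, batch_size=32, verbose=0)

    print(model.predict(np.array([[0.5]], dtype="float32"), verbose=0))  # ~[[2.5]]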
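For the cryptography item, a minimal envelope-encryption sketch using the cryptography package's Fernet primitive: a per-object data-encryption key (DEK) protects the payload and is itself wrapped by a key-encryption key (KEK). In practice the KEK would be held in an HSM or a managed Key Management Service, not in process memory as here.

    # Envelope encryption: encrypt data with a DEK, wrap the DEK with a KEK.
    from cryptography.fernet import Fernet

    # Key-encryption key (KEK): stands in for a KMS/HSM-held master key.
    kek = Fernet(Fernet.generate_key())

    # Data-encryption key (DEK): generated per object.
    dek_bytes = Fernet.generate_key()
    ciphertext = Fernet(dek_bytes).encrypt(b"sensitive record")

    # Store the wrapped DEK alongside the ciphertext.
    wrapped_dek = kek.encrypt(dek_bytes)

    # Decrypt: unwrap the DEK with the KEK, then decrypt the payload.
    plaintext = Fernet(kek.decrypt(wrapped_dek)).decrypt(ciphertext)
    assert plaintext == b"sensitive record"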