Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
3 years of experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and algorithms (e.g., MapReduce, Flume).
Experience managing client-facing projects, troubleshooting technical issues, and working with Engineering and Sales/Services teams.
Experience with database administration techniques or data engineering, as well as writing software in Java, C++, Python, Go, or JavaScript.
Preferred qualifications:
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
Experience working with Big Data, information retrieval, data mining, or machine learning.
Experience building multi-tier, high-availability applications with modern web technologies (e.g., NoSQL, MongoDB, SparkML, TensorFlow).
Experience in technical consulting.
Experience architecting and developing software for internet-scale, production-grade Big Data solutions in virtualized environments.