Basic Qualifications
- BS or MS in Computer Science or a related technical discipline, or equivalent experience.
- Experience coding in C++, Java, Python, or Go.
- Systematic problem-solving approach and knowledge of algorithms, data structures, and complexity analysis.
- At least five years of software engineering experience.
Preferred Qualifications
- Under-the-hood experience with open-source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, or Presto is a plus.
- Under-the-hood experience with fault-tolerant, large-scale data processing or storage systems, or with cloud and container-based cluster orchestration, such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon Redshift, Kubernetes, or Mesos, is also a plus.
- Experience incorporating machine learning algorithms into products.
- Extra credit for deep expertise in, or an aptitude for, Spark internals, including but not limited to core resource management, data sources, SQL optimization, multi-language support, and machine learning or deep learning integration.
- Experience with security technologies and/or privacy-enhancing technologies (PETs).
- Experience with encryption algorithms, anonymization, and data minimization techniques.
* Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .