Minimum qualifications to apply for this role:
- 5+ years of experience in data and backend engineering.
- Strong knowledge of data architecture, data modeling, and ETL processes.
- Experience with cloud-based data infrastructure, such as AWS, GCP, or Azure.
- Experience with SQL and NoSQL databases.
- Experience with Data Warehouse technologies such as Snowflake and BigQuery.
- Experience working with Python.
- Excellent communication and collaboration skills.
Preferred qualifications (these are an advantage, but not a must):
- Experience working with Typescript is a plus.
- Experience building infrastructure for Machine Learning is a big plus.
- Experience administering and managing Apache Spark clusters is a plus.
- Experience with streaming technologies such as Apache Kafka and Amazon Kinesis is a big plus.
- Experience working with serverless products such as AWS Lambda is a plus.
What your day will look like and what you will be doing:
The ideal candidate will have a strong background in data and backend engineering and a track record of designing and implementing scalable data solutions. The Data Infrastructure Engineer will collaborate closely with software engineers, data scientists, data engineers, and other stakeholders to understand requirements, design solutions, and implement data-related features and functionality.
- Design and build data storage, processing, and access systems that are scalable, reliable, and secure.
- Design and implement data pipelines, data warehousing and data architecture to support our business needs.
- Design and implement CI/CD processes for training, releasing, serving and monitoring machine learning models.
- Collaborate with other teams to integrate data and machine learning models into our products and services.
- Ensure that our data systems are compliant with regulatory requirements and industry best practices.
About the hiring department:
Read more about our Engineering department