Technologies we use: Java, Kotlin, Kubernetes, Apache Kafka, GCP, BigQuery, Spark
While we’re looking for professional skills, culture is just as important to us. We understand that everyone's unique, and that diversity of thought, experience and background is what makes a great team, one that truly reflects the communities we serve. This way, there's scope for you to make a real difference to the world.
Job responsibilities:
- Build infrastructure to support financial products at scale.
- Set up the data platform that complements the application platform, providing modern data services (ingestion, querying, governance, etc.) for the applications running on it; see the ingestion sketch after this list.
- Use open source products whenever we can, and roll our own solutions when that makes sense.
- Help teams identify their data needs and leverage the platform in the best possible way.
- Be a point of contact for other teams on the regulatory and control aspects of data, as we tailor our solutions to accommodate those requirements.
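For a flavour of the ingestion side of such a platform, here is a minimal Kotlin sketch using the standard Apache Kafka consumer client. The broker address, topic name and consumer group are placeholders for illustration, not details of our actual setup.

    import java.time.Duration
    import java.util.Properties
    import org.apache.kafka.clients.consumer.ConsumerConfig
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.StringDeserializer

    fun main() {
        // Placeholder connection settings; a real deployment would load these from config.
        val props = Properties().apply {
            put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
            put(ConsumerConfig.GROUP_ID_CONFIG, "ingestion-sketch")
            put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
            put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
            put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
        }

        KafkaConsumer<String, String>(props).use { consumer ->
            consumer.subscribe(listOf("app-events")) // hypothetical topic name
            // Poll one batch; a real ingestion service would loop, handle errors,
            // and forward records to a sink such as BigQuery.
            val records = consumer.poll(Duration.ofSeconds(5))
            records.forEach { record ->
                println("key=${record.key()} value=${record.value()} offset=${record.offset()}")
            }
        }
    }

A production service would of course add offset management, retries and schema handling; this only shows the kind of building block the platform work involves.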
Required qualifications, capabilities and skills:
- Formal training or certification on problem-solving concepts and proficient applied experience.
- Being a problem solver: you can independently analyze a problem and come up with options for solving it.
- Flexibility regarding tools and languages: for example, you should be open to debugging an SSO issue in a Python service one day and digging into a Java/Kotlin out-of-memory issue the next (of course we take your expertise into account, and you will have team members to help you out!).
- Knowledge of data structures.
- Experience with either Kubernetes or Docker.
Preferred qualifications, capabilities and skills:
- Experience with at least one cloud platform.
- Experience with message brokers (Kafka, RabbitMQ, Pulsar, etc.).
- Experience in setting up data platforms and setting standards, not just building pipelines.
- Experience with a distributed data processing environment or framework (e.g. Spark or Flink); a minimal example is sketched below.
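To give a concrete flavour of distributed data processing, here is a minimal Spark sketch in Kotlin (via Spark's Java/Dataset API). The input file and column names are hypothetical, and the local session is for illustration only; on a real platform the session would run on a cluster.

    import org.apache.spark.sql.SparkSession

    fun main() {
        // Local session for illustration; cluster settings would come from the platform.
        val spark = SparkSession.builder()
            .appName("data-platform-sketch")
            .master("local[*]")
            .getOrCreate()

        // Hypothetical input: a CSV of payment events with a "currency" column.
        val payments = spark.read()
            .option("header", "true")
            .csv("payments.csv")

        // A simple distributed aggregation: count payments per currency.
        payments.groupBy("currency").count().show()

        spark.stop()
    }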