Create highly scalable, fault-tolerant technical designs in collaboration with team members.
Develop low-latency, high-availability backend services.
Develop and implement data pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals.
Write high-quality code, conduct and participate in code reviews, and follow strong engineering principles and standards.
Research the technical feasibility of new ideas and actively suggest technology improvements.
Quickly develop a thorough understanding of the product, architect the system, and ship production-ready code.
Write maintainable code that can scale fast.
Support and contribute to our amazing work culture.
About you:
Deep experience with and understanding of object-oriented design, design patterns, microservices architecture, data structures, algorithms and their complexities, and systems architecture.
Skilled in writing and automating tests for your code.
Proven working experience with AWS or GCP.
Working experience with OLTP databases, specifically MySQL, and an understanding of day-to-day challenges of query execution and optimization (e.g. indexing, cascading).
Working experience with key-value caches and stores like Redis or Aerospike is a plus.
Experience with streaming and messaging systems such as Kafka and RabbitMQ.
Experience working in an agile environment.
Experience with Node.js or any front-end framework is a plus.
Excellent verbal and written communication skills in English.
3–5 years of experience writing scalable C++ applications in Linux environments.
Expert in the C++ STL and Boost libraries.
Expert in C++ testing frameworks such as Google Test or Boost.Test.
Experience with different build systems, e.g. Make and CMake.
Good knowledge of the TCP/IP protocol suite and multi-threaded programming.
Big Data related (a plus):
Experience writing code in Scala.
Experience with Scala testing frameworks.
Proven experience writing code for Spark data processing.
Familiarity with various OLAP data stores (e.g. Druid, ClickHouse) and their internals.
Experience with at least one columnar OLAP database.
Familiarity with industry-standard analytics and visualisation tools.