Create highly scalable and fault-tolerant technical designs, working with team members (up to 5 people).
Develop and implement data pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals.
Write high-quality code, conduct and participate in code reviews, and follow strong engineering principles and standards.
Research the technical feasibility of new ideas and actively suggest technology improvements.
Quickly develop a thorough understanding of the product, architect the system, and ship production-ready code.
Write maintainable code that scales quickly.
Support and contribute to our amazing work culture.
About You:
Deep experience with and understanding of object-oriented design, design patterns, microservices architecture, data structures, algorithms and their complexities, and systems architecture
Skilled in writing and automating tests for your code
Proven working experience with cloud platforms.
Demonstrated experience with OLTP databases, specifically MySQL (understanding of day-to-day challenges related to query execution and optimization, e.g. indexing and cascading)
Working experience with big data aggregation frameworks (e.g. Spark)
Experience with streaming platforms such as Kafka and RabbitMQ
Experience working in an agile environment
Experience with C++ or any front-end framework is a plus
Excellent spoken and written communication skills in English
Big Data Skills:
3+ years of experience with writing code in Scala
Experience with Python, Spark, Airflow
Experience with Scala testing frameworks
Understanding of data-warehousing and data-modeling techniques
Proven experience writing code for Spark data processing
Familiarity with various OLAP data stores (e.g. Druid, ClickHouse) and their trade-offs
Experience with at least one columnar OLAP database
Familiarity with industry-standard analytics and visualization tools