Create highly scalable and fault-tolerant technical designs, working with team members (teams of up to 5 people)
Develop and implement data pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals
Write high-quality code, conduct and participate in code reviews, and follow strong engineering principles and standards
Research the technical feasibility of new ideas and actively suggest technology improvements
Quickly develop a thorough understanding of the product, architecting the system and shipping production-ready code
Write maintainable code that can scale fast
Support and contribute to our amazing work culture
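The extract-transform-load responsibility above can be sketched as a minimal pipeline. This is an illustrative toy, not the team's actual stack: the record shapes, the `extract`/`transform`/`load` helpers, and the revenue-by-region aggregation are all hypothetical.

```python
from collections import defaultdict

def extract():
    # Hypothetical raw records; in practice these would come from an
    # API, a database, or a file dump.
    return [
        {"region": "EMEA", "amount": "120.50"},
        {"region": "APAC", "amount": "80.00"},
        {"region": "EMEA", "amount": "19.50"},
    ]

def transform(rows):
    # Parse each raw record into a typed, validated shape.
    for row in rows:
        yield {"region": row["region"], "amount": float(row["amount"])}

def load(rows):
    # Aggregate into an "information product": revenue by region.
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

def run_pipeline():
    return load(transform(extract()))
```

In a production pipeline each stage would be a separately scheduled, retryable task (e.g. an Airflow operator) rather than an in-process function call, but the extract → transform → load structure is the same.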
Qualifications of the Data Engineer role:
General:
Profound experience with and understanding of object-oriented design, design patterns, microservices architecture, data structures, algorithms and their complexities, and systems architecture
Skilled in writing and automating tests for your code
Proven working experience with cloud platforms
Working experience with OLTP databases, specifically MySQL, and an understanding of day-to-day challenges in query execution and optimization (e.g. indexing, cascading)
Working experience with big data aggregation frameworks (e.g. Spark)
Experience with streaming platforms such as Kafka and RabbitMQ
Experience working in an agile environment
Experience with C++ or any front-end framework is a plus
Excellent verbal and written communication skills in English
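The query-optimization challenges mentioned above (indexing in particular) can be demonstrated with a tiny experiment. SQLite stands in for MySQL here so the sketch stays stdlib-only, and the `orders` table and `idx_orders_customer` index are hypothetical; the same principle shows up in MySQL's `EXPLAIN` output, where a filter on an unindexed column forces a full table scan while an index allows a direct lookup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # Return the plan's detail text, e.g. "SCAN orders" or
    # "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)".
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

# Without an index the query scans every row.
before = plan("SELECT * FROM orders WHERE customer_id = 42")

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index the engine seeks directly to the matching rows.
after = plan("SELECT * FROM orders WHERE customer_id = 42")
```

The plan text switching from a scan to an index search is exactly the kind of day-to-day check the role calls for when diagnosing slow queries.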
Big Data related:
3+ years of experience with writing code in Scala
Experience with Python, Spark, Airflow
Experience with Scala testing frameworks
Understanding of data-warehousing and data-modeling techniques
Proven experience with writing code for Spark data processing
Familiarity with various OLAP data stores (Druid, ClickHouse, etc.) and the insights they provide
Experience with at least one columnar OLAP database
Familiarity with industry-standard analytics and visualization tools
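The data-warehousing and data-modeling techniques listed above can be illustrated with a toy star schema: a fact table of measures joined to a dimension table for rollups. All table names and rows here are hypothetical, and plain Python dictionaries stand in for warehouse tables.

```python
# Date dimension: one row per calendar date, keyed by a surrogate key.
dim_date = {
    1: {"date": "2024-01-01", "month": "2024-01"},
    2: {"date": "2024-01-15", "month": "2024-01"},
    3: {"date": "2024-02-03", "month": "2024-02"},
}

# Fact table: one row per sale, referencing the dimension by key.
fact_sales = [
    {"date_key": 1, "units": 3},
    {"date_key": 2, "units": 5},
    {"date_key": 3, "units": 2},
]

def units_by_month(facts, dates):
    # Roll the fact table up to the month grain by joining each
    # fact row to its date-dimension record.
    totals = {}
    for fact in facts:
        month = dates[fact["date_key"]]["month"]
        totals[month] = totals.get(month, 0) + fact["units"]
    return totals
```

In a real warehouse the same rollup would be a `GROUP BY` over a fact-to-dimension join, and a columnar OLAP store (Druid, ClickHouse) makes it fast by reading only the referenced columns.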