Job Responsibilities:
Analyze and model data
Develop, debug and optimize data pipelines and continuously raise their quality
Take technical ownership of, and serve as an SME for, an area of our data assets
Write strong documentation and high-quality code
Collaborate with stakeholders and a global team on high-quality deliverables
Job Requirements:
Typically 2+ years of experience in software or data engineering.
Extensive experience in data modeling, data integration and processing of structured and unstructured data.
Proficient in one or more programming languages (Python preferred).
SQL proficiency (experience with NoSQL is an advantage).
Experienced with Apache Spark
Excellent communication skills; mastery of English and the local language.
Ability to effectively communicate product architectures, designs and change proposals.
Nice to have:
Experience with Databricks / Databricks certification
Experience with pandas
Familiar with best practices of the data and software engineering lifecycle and/or best practices of the above platforms and tools.
Familiar with scrum, JIRA and GitHub
Please be assured that you will not be subject to any adverse treatment if you choose to disclose the information requested. This information is provided voluntarily. The information obtained will be kept in strict confidence.