Responsibilities
Actively contribute to the implementation of critical features and complex technical solutions. Write clean, efficient, and maintainable code that meets the highest standards of quality.
Provide guidance on scalable, robust, and efficient solutions that align with business requirements and industry best practices.
Qualifications
10+ years’ experience implementing data-intensive solutions using agile methodologies.
Proficient in one or more programming languages commonly used in data engineering, such as Scala or PySpark.
Experience with Hadoop for data storage and processing is valuable, as is exposure to modern data platforms such as Snowflake and Databricks.
Experience modelling data for analytical consumers.
Strong proficiency in working with relational databases and using SQL for data querying, transformation, and manipulation.
Clear understanding of Data Structures and Object-Oriented Principles.
Multiple years of experience with software engineering best practices (unit testing, automation, design patterns, peer review, etc.)
Experience with cloud-native technologies and patterns (AWS, Google Cloud).
Multiple years of experience architecting and building horizontally scalable, highly available, highly resilient, and low latency applications
Multiple years of experience with Cloud-native development and Container Orchestration tools (Serverless, Docker, Kubernetes, OpenShift, etc.)
Ability to automate and streamline the build, test, and deployment of data pipelines.
BA/BS degree or equivalent work experience.
Preferred Qualifications
Familiarity with open-source data engineering tools and frameworks (e.g., Spark, Kafka, Beam, Flink, Trino, Airflow, dbt) is a valuable asset.
Exposure to a range of table and file formats, including Iceberg, Hive, Avro, Parquet, and JSON.
Exposure to Infrastructure as Code tools (e.g., Terraform, CloudFormation).