About the Role:
We are seeking an experienced Java data engineer to design and build scalable data pipelines and real-time processing systems using technologies such as Apache Spark, Apache Flink, and Kafka.
Responsibilities:
* Design, develop, and maintain robust and scalable data pipelines using Java and related technologies (e.g., Apache Spark, Apache Flink, Apache Kafka).
* Build and optimize real-time and batch data processing applications to meet low-latency requirements.
* Implement data integration solutions between various data sources and targets, including databases, APIs, and streaming platforms.
* Work with MPP platforms such as Trino (formerly PrestoSQL) and Snowflake to process and analyze large datasets.
* Contribute to the design and development of event-driven architectures.
* Write clean, well-documented, and testable code.
* Collaborate effectively with other engineers, product managers, and stakeholders throughout the software development lifecycle (SDLC), adhering to Agile methodologies.
* Stay up-to-date with the latest trends and technologies in the data engineering space.
Qualifications:
* Bachelor’s degree in Computer Science, Engineering, or a related field.
* Minimum 5 years of experience developing and deploying production-ready Java applications in a data engineering context.
* Strong experience with core Java (version 11 or higher), SQL, and database APIs such as JDBC.
* Proven experience working with distributed stream processing frameworks like Apache Flink, Spark Streaming, or Kafka Streams.
* Experience with event-driven architectures and real-time data processing.
* Solid understanding of OOP concepts, multithreading, and thread pools.
* Familiarity with containerization technologies like Docker and deployment platforms like OpenShift, ECS, or Kubernetes is a plus.
* Excellent communication and collaboration skills.
Preferred Skills and Qualifications:
* Master’s degree in a relevant field.
* Contributions to open-source projects.
* Experience working in a cloud environment (AWS, GCP).
View the " " poster. View the .
View the .
View the
These jobs might be a good fit