
EY Associate - Data Engineer 
Malaysia, Kuala Lumpur 
814664186

19.11.2025

We are seeking a professional looking to build a career in designing, building, and maintaining robust data pipelines and architectures for large-scale analytics and AI workloads. You will work directly with Senior Data Engineers on data infrastructure, ensuring that data from multiple sources is efficiently collected, transformed, and made available for downstream applications. This role is fully hands-on and focused on technical execution.


Your key responsibilities

Work under the supervision of a Senior Data Engineer to deliver the tasks below:

  • Design and develop ETL/ELT pipelines for structured, semi-structured, and unstructured data.
  • Build data storage solutions (data lakes, warehouses, and streaming platforms).
  • Implement data quality, validation, and monitoring processes.
  • Optimize query performance and resource utilization in data processing systems.
  • Collaborate with data scientists, ML engineers, and application developers to ensure seamless data availability.
  • Automate workflows for data ingestion, transformation, and delivery.
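
To illustrate the kind of ETL/ELT work these responsibilities describe, here is a minimal sketch in Python of an extract-transform-load step with a built-in data-quality check. The data source, field names, and in-memory "warehouse" are hypothetical, chosen only for illustration; a real pipeline would read from and write to actual systems.

```python
import csv
import io

# Hypothetical raw feed; the fields and values are illustrative only.
RAW_CSV = """order_id,amount,currency
1001,250.00,MYR
1002,,MYR
1003,99.50,USD
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: apply a data-quality check and normalise types."""
    clean = []
    for rec in records:
        if not rec["amount"]:  # validation: reject rows with missing amounts
            continue
        clean.append({
            "order_id": int(rec["order_id"]),
            "amount": float(rec["amount"]),
            "currency": rec["currency"],
        })
    return clean

def load(records: list[dict], sink: list) -> None:
    """Load: append validated records to a downstream store (a list here)."""
    sink.extend(records)

warehouse: list[dict] = []
load(transform(extract(RAW_CSV)), warehouse)
print(len(warehouse))  # 2 valid rows survive the quality check
```

The same extract/transform/load split scales up directly: in practice each stage would be a task in an orchestrator such as Airflow, with the quality check emitting metrics for monitoring.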

Skills and attributes for success

  • Some experience as a Data Engineer or ETL Developer.
  • Basic proficiency in SQL and at least one programming language (Python, Java, or Scala).

To qualify for the role, you must have

  • Minimum of a Diploma or Bachelor’s degree in Computer Science, IT, Machine Learning, or AI.

Ideally, you’ll also have

  • Experience with big data frameworks such as Apache Spark, Flink, or Hadoop.
  • Proficiency with ETL tools (Informatica, Talend, Microsoft SSIS) or open-source integration tools (Kafka, NiFi, Airflow).
  • Understanding of data modeling, warehousing concepts, and schema design.
  • Familiarity with cloud data platforms (AWS Redshift, GCP BigQuery, Azure Synapse).
  • Experience with streaming data processing and real-time analytics pipelines.
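
As a small sketch of the data modeling and warehousing concepts mentioned above, the following shows a star schema (one fact table joined to a dimension table) using Python's built-in sqlite3. The table and column names are hypothetical; cloud warehouses such as Redshift, BigQuery, or Synapse apply the same modeling ideas at scale.

```python
import sqlite3

# In-memory database standing in for a warehouse; schema is illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES dim_product,
                          amount REAL);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (10, 1, 20.0), (11, 1, 5.0), (12, 2, 30.0);
""")

# A typical warehouse query: aggregate fact rows by a dimension attribute.
rows = cur.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('books', 25.0), ('games', 30.0)]
```

Separating facts (measurable events) from dimensions (descriptive attributes) keeps aggregation queries like this one simple and fast, which is the core idea behind dimensional schema design.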

What we look for

  • Knowledge of containerization (Docker, Kubernetes) for data workloads.
  • Exposure to DevOps/DataOps practices for CI/CD of data pipelines.
  • Experience with version control (Git) and infrastructure-as-code tools (Terraform).