
EY GMS-D-A-Staff
India, Telangana, Hyderabad
Job ID: 684053296
Posted: 25.06.2024

Responsibilities:

  • Partner with business stakeholders and solution architects to understand data pipeline requirements and challenges specific to Oracle ERP, SAP, and Treasury systems within the context of big data processing on Databricks, including ETL development needs.
  • Design and implement data engineering solutions, including data ingestion, cleansing, transformation, orchestration, and validation for data originating from Oracle ERP and SAP and processed within Databricks, using PySpark to build and maintain ETL pipelines (see the sketch after this list).
  • Develop, enforce and maintain data architecture & modeling standards, policies, and procedures specifically tailored to Oracle ERP, SAP, and Databricks data integration, processing, and ETL development.
  • Monitor and assess data quality metrics within Oracle ERP, SAP, and Databricks pipelines, including metrics related to ETL performance and the quality of data produced by ETL processes.
  • Develop and implement data governance and security frameworks to ensure data quality and compliance, focusing on Oracle ERP, SAP, and Databricks data integration, processing, and ETL development.
  • Collaborate with data analysts and data scientists to ensure that data used in analysis from Oracle ERP and SAP and processed through Databricks, including data transformed through ETL pipelines, is high quality and reliable, while maintaining appropriate data access controls and separation.
  • Document the data dictionary, mappings, processes, and solutions for future reference, specifically regarding Oracle ERP, SAP, and Databricks data integration, processing, and ETL development.
  • Stay up to date on the latest trends and innovations around data and platform engineering technologies and best practices, particularly those relevant to Oracle ERP, SAP, Databricks, and PySpark for ETL pipeline development and automation.
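
As an illustration of the kind of pipeline these responsibilities describe, the following is a minimal PySpark sketch of an ETL step on Databricks: it ingests a hypothetical extract of Oracle ERP general-ledger data, applies basic cleansing and transformation, validates the result, and writes it to a Delta table. The paths, column names, and table name are assumptions for illustration only, not part of the role description.

```python
# Minimal, hypothetical PySpark ETL sketch (ingest -> cleanse -> transform -> validate -> load).
# Paths, schema, and table names are illustrative assumptions, not actual EY/Oracle/SAP objects.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Ingest: read a hypothetical raw extract of Oracle ERP general-ledger lines.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/oracle_erp/gl_lines/")  # assumed landing path
)

# Cleanse: drop records missing keys, normalize types, and deduplicate.
clean = (
    raw.dropna(subset=["journal_id", "ledger_id"])
       .withColumn("posted_date", F.to_date("posted_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["journal_id", "line_number"])
)

# Transform: aggregate to a ledger/period grain for downstream analytics.
summary = (
    clean.groupBy("ledger_id", F.date_trunc("month", "posted_date").alias("period"))
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("line_count"))
)

# Validate: a simple row-count check before loading; a real pipeline would log metrics instead.
if summary.count() == 0:
    raise ValueError("ETL validation failed: no rows produced from the raw extract")

# Load: write to a Delta table (Delta Lake is the default table format on Databricks).
summary.write.format("delta").mode("overwrite").saveAsTable("finance.gl_monthly_summary")
```

In practice, a job like this would typically be parameterized and scheduled through an orchestrator such as Databricks Workflows rather than run as a single script.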

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field (Master’s degree preferred).
  • 3+ years of experience as a Data Engineer or similar role.
  • Extensive knowledge of data architecture and modeling principles and methodologies as applied to Oracle ERP, SAP systems, and big data processing in Databricks.
  • Experience designing and implementing cloud-based data engineering solutions using tools and technologies compatible with Oracle ERP, SAP, and Databricks (e.g., data profiling, data cleansing, and data quality monitoring tools within Databricks); a minimal data quality check sketch follows this list.
  • Strong understanding of Oracle ERP and SAP data models, functionalities, and integrations.
  • Experience working with Oracle and SAP data extraction, transformation, and loading (ETL) processes.
  • Experience and expertise in utilizing Databricks for data processing, transformation, and analytics workflows, including building and maintaining ETL pipelines using PySpark.
  • Familiarity with data engineering best practices within the context of enterprise resource planning (ERP) systems and big data processing.
  • Self-driven and able to deliver with minimal supervision.
  • Execute with a mindset for Standard Work, Simplification, Modernization, and Operational Experience.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration skills.
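
As a companion to the data quality monitoring point above, the following is a minimal sketch of the kind of check such tools automate, written directly in PySpark. The table name, columns, and thresholds are hypothetical and assume the output of the ETL sketch shown earlier.

```python
# Hypothetical data quality checks over a Delta table in Databricks, in plain PySpark.
# Table name, columns, and thresholds are assumptions for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.table("finance.gl_monthly_summary")  # assumed output of the ETL sketch above

# Compute a few simple quality metrics in one pass.
metrics = df.agg(
    F.count("*").alias("row_count"),
    F.sum(F.when(F.col("total_amount").isNull(), 1).otherwise(0)).alias("null_amounts"),
    F.countDistinct("ledger_id").alias("distinct_ledgers"),
).first()

# Fail fast if the metrics fall outside expected bounds; a production monitor
# would record these values over time and alert on drift instead of raising.
if metrics["row_count"] == 0 or metrics["null_amounts"] > 0:
    raise ValueError(f"Data quality check failed: {metrics.asDict()}")

print(f"Data quality metrics: {metrics.asDict()}")
```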



EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.