Expoint – all jobs in one place
Finding a high-tech job at the best companies has never been easier

Data Engineer - Business Intelligence jobs at IBM in Mumbai, India

Find your perfect match with Expoint! Search for Data Engineer - Business Intelligence job opportunities in Mumbai, India, and join the network of leading companies in the high-tech industry, such as IBM. Sign up now and find your dream job with Expoint!
6 jobs found
04.09.2025
IBM

IBM Data Engineer-Data Platforms-Azure India, Maharashtra, Mumbai

Description:
Your role and responsibilities

Key Responsibilities

  • Develop ETL/ELT pipelines in Databricks using PySpark, Spark SQL, and Delta Lake (see the ingestion sketch after this list).
  • Use Delta Live Tables for simplified pipeline orchestration.
  • Implement Databricks Auto Loader for real-time/batch data ingestion.
  • Build Databricks SQL dashboards and queries for reporting and analytics.
  • Manage Databricks clusters, jobs, and workflows, ensuring cost efficiency.
  • Work with cloud-native services (ADF, Synapse, ADLS, or AWS Glue, S3, Redshift) for data integration.
  • Apply Unity Catalog for role-based access and lineage tracking.
  • Collaborate with data scientists to support ML workloads using MLflow.
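
A minimal sketch of the kind of pipeline the bullets above describe: Databricks Auto Loader ingesting raw files into a Delta Lake table. All paths, the file format, and the table name are hypothetical, not taken from the posting, and the `cloudFiles` source assumes a Databricks runtime.

```python
# Minimal sketch: Auto Loader (cloudFiles) ingestion into a Delta table.
# Assumes a Databricks runtime; paths and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

raw = (spark.readStream
       .format("cloudFiles")                                   # Auto Loader source
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/chk/orders_schema")
       .load("/mnt/raw/orders"))

# Light transformation before landing in a bronze Delta table.
cleaned = raw.withColumn("ingested_at", F.current_timestamp())

(cleaned.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/chk/orders")
 .trigger(availableNow=True)   # process available files as a batch; drop for continuous streaming
 .toTable("bronze.orders"))
```
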
Required education
Bachelor's Degree
Required technical and professional expertise

Mandatory Skills

  • Strong Databricks expertise: PySpark, Spark SQL, Delta Lake (ACID, schema evolution, time travel); see the sketch after this list.
  • Exposure to Delta Live Tables, Auto Loader, Unity Catalog, MLflow.
  • Hands-on experience with Azure or AWS data services.
  • Strong SQL and Python programming for data pipelines.
  • Knowledge of data modeling (star/snowflake, lakehouse).
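
For illustration, a hedged sketch of the three Delta Lake features named in the first bullet. It assumes an environment where the Delta extensions are preconfigured (as on Databricks); the table and columns are invented for the example.

```python
# Hedged illustration of Delta Lake ACID writes, schema evolution, and time
# travel. Assumes a Spark session with Delta preconfigured (e.g. Databricks).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1, "open")], ["order_id", "status"])
df.write.format("delta").mode("overwrite").saveAsTable("demo.orders")   # version 0

# Schema evolution: appending a frame with an extra column merges it in.
df2 = spark.createDataFrame([(2, "open", "web")],
                            ["order_id", "status", "channel"])
(df2.write.format("delta").mode("append")
     .option("mergeSchema", "true").saveAsTable("demo.orders"))          # version 1

# Time travel: query the table as of the earlier version.
spark.sql("SELECT * FROM demo.orders VERSION AS OF 0").show()
```
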
Preferred technical and professional experience

Good to Have

  • Streaming data experience (Kafka, Event Hub, Kinesis).
  • Familiarity with Databricks REST APIs (see the sketch below).
  • Certifications: Databricks Data Engineer Associate, Azure DP-203, or AWS Analytics Specialty.
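
As a concrete example of the REST APIs mentioned above, a sketch of listing jobs through the Databricks Jobs API 2.1. The workspace URL and token are placeholders.

```python
# Hedged sketch: list jobs via the Databricks Jobs API 2.1 with a personal
# access token. Workspace host and token are placeholders.
import requests

host = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
token = "dapiXXXXXXXX"                             # placeholder PAT

resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```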

Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background.

OTHER RELEVANT JOB DETAILS

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience. For additional information about location requirements, please discuss with the recruiter following submission of your application.

03.09.2025
IBM

IBM Data Engineer-Business Intelligence India, Maharashtra, Mumbai

Description:

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
  • Provide expertise in analysis, requirements gathering, design, coordination, customization, testing, and support of reports in the client’s environment
  • Develop and maintain a strong working relationship with business and technical members of the team
  • Maintain a relentless focus on quality and continuous improvement
  • Perform root-cause analysis of report issues
  • Carry out development and evolutionary maintenance of the environment’s performance, capability, and availability
  • Assist in defining technical requirements and developing solutions
  • Manage content and source code effectively; troubleshoot and debug
Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise
  • Tableau Desktop Specialist; strong understanding of SQL for querying databases. Good to have: Python, Snowflake, statistics, and ETL experience.
  • Extensive knowledge of creating impactful visualizations using Tableau.
  • Thorough understanding of SQL and advanced SQL (joins and relationships).
  • Experience working with different databases, and with blending data and creating relationships in Tableau.
  • Extensive knowledge of writing Custom SQL to pull the desired data from databases (see the sketch after this list), plus troubleshooting capabilities to debug data controls.
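
To make the Custom SQL requirement concrete, a small sketch of the kind of join-and-aggregate query Tableau can consume as a Custom SQL data source, executed here through Python's built-in sqlite3 so it is self-contained. All tables, columns, and values are invented.

```python
# Illustrative "Custom SQL" of the kind Tableau accepts as a data source:
# join two tables and pre-aggregate before visualization. Names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 2, 80.0);
    INSERT INTO customers VALUES (1, 'West'), (2, 'East');
""")

custom_sql = """
    SELECT c.region, SUM(o.amount) AS total_sales
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    GROUP BY c.region
"""
for row in conn.execute(custom_sql):
    print(row)   # e.g. ('East', 80.0)
```
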
Preferred technical and professional experience
  • Troubleshooting capabilities to debug data controls; able to convert business requirements into a workable model.
  • Good communication skills and willingness to learn new technologies; a self-motivated team player with a positive attitude.

Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background.

OTHER RELEVANT JOB DETAILS

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience. For additional information about location requirements, please discuss with the recruiter following submission of your application.

11.05.2025
IBM

IBM Data Analytics - Advanced India, Maharashtra, Mumbai

Description:
Your role and responsibilities

Who you are: A senior Data Scientist specializing in Advanced Analytics, with expertise in machine learning (ML), predictive modeling, and statistical analysis, and solid experience leveraging big-data technologies, AI, and automation to solve complex business problems and enhance decision-making. You have experience working with Cloudera Data Platform, Apache Spark, Kafka, and Iceberg tables, and you understand how to design and deploy scalable AI/ML models. Your role will be instrumental in data modernization efforts, applying AI-driven insights to enhance efficiency, optimize operations, and mitigate risks.

What you’ll do: As a Data Scientist – Advanced Analytics, your responsibilities include:
AI & Machine Learning Model Development
• Developing AI/ML models for predictive analytics, fraud detection, and customer segmentation.
• Implementing time-series forecasting, anomaly detection, and optimization models (see the sketch after this list).
• Working with deep learning (DL) and Natural Language Processing (NLP) for AI-driven automation.
Big Data & Scalable AI Pipelines
• Processing and analyzing large datasets using Apache Spark, PySpark, and Iceberg tables.
• Deploying real-time models and streaming analytics with Kafka.
• Supporting AI model training and deployment on Cloudera Machine Learning (CML).
Advanced Analytics & Business Impact
• Performing exploratory data analysis (EDA) and statistical modelling.
• Delivering AI-driven insights to improve business decision-making.
• Supporting data quality and governance initiatives using Talend DQ.
Data Governance & Security
• Ensuring AI models comply with Bank’s data governance and security policies.
• Implementing AI-driven anomaly detection and metadata management.
• Utilizing Thales CipherTrust for data encryption and compliance.
Collaboration & Thought Leadership
• Working closely with data engineers, analysts, and business teams to integrate AI-driven solutions.
• Presenting AI insights and recommendations to stakeholders and leadership teams.
• Contributing to the development of best practices for AI and analytics.
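
A minimal sketch of the anomaly-detection bullet above, using scikit-learn's IsolationForest. The data is synthetic and the feature layout is invented; in practice the features would come from the platform's datasets.

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest.
# Synthetic data; feature layout and contamination rate are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=100.0, scale=10.0, size=(500, 2))    # typical activity
outliers = rng.uniform(low=300.0, high=400.0, size=(5, 2))   # injected anomalies
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)                 # -1 = anomaly, 1 = normal
print(f"{(labels == -1).sum()} of {len(X)} points flagged as anomalies")
```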

Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise

• 3-7 years of experience in AI, ML, and Advanced Analytics.
• Proficiency in Python, R, SQL, and ML frameworks (Scikit-learn, TensorFlow, PyTorch).
• Hands-on experience with Big-data technologies (Cloudera, Apache Spark, Kafka, Iceberg table format).
• Strong knowledge of statistical modelling, optimization, and feature engineering.
• Understanding of MLOps practices and AI model deployment.
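
As one hedged example of the MLOps practices in the bullet above, a short MLflow tracking run around a scikit-learn model; the experiment name, parameters, and data are invented for illustration.

```python
# Hedged MLflow tracking sketch; assumes mlflow and scikit-learn are installed.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("advanced-analytics-demo")   # hypothetical experiment
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")       # versioned model artifact
```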

Preferred technical and professional experience

• Develop and implement advanced analytics models, including predictive, prescriptive, and diagnostic analytics, to solve business challenges and optimize decision-making processes; utilize tools and technologies to work with large, complex datasets and derive analytical solutions.
• Build and deploy machine learning models (supervised and unsupervised), statistical models, and data-driven algorithms for forecasting, segmentation, classification, and anomaly detection.
• Should have strong hands-on experience in Python, Spark and cloud computing.
• Should be able to work independently and deploy deep learning models using various architectures.
• Should be able to perform exploratory data analysis (EDA) to uncover trends, relationships, and outliers in large, complex datasets. Design and create features that improve model accuracy and business relevance.
• Should create insightful visualizations and dashboards that communicate findings to stakeholders. Effectively translate complex data insights into clear and actionable recommendations.
• Work closely with business leaders, engineers, and analysts to understand business requirements and translate them into analytical solutions that address strategic goals.
• Exposure to Graph AI using DGraph Enterprise.
• Knowledge of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI).

Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background.

OTHER RELEVANT JOB DETAILS

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience. For additional information about location requirements, please discuss with the recruiter following submission of your application.

10.05.2025
IBM

IBM Data Engineer-Data Platforms India, Maharashtra, Mumbai

Description:
Your role and responsibilities

Who you are: A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big-data technologies, distributed computing, data ingestion, and transformation frameworks. You are proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and understand how to design and implement scalable, high-performance data processing solutions.

What you’ll do: As a Data Engineer – Data Platform Services, responsibilities include:
Data Ingestion & Processing
• Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
• Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark); see the sketch after this list.
• Working with IBM CDC and Universal Data Mover to manage data replication and movement.
Big Data & Data Lakehouse Management
• Implementing Apache Iceberg tables for efficient data storage and retrieval.
• Managing distributed data processing with Cloudera Data Platform (CDP).
• Ensuring data lineage, cataloging, and governance for compliance with Bank/regulatory policies.
Optimization & Performance Tuning
• Optimizing Spark and PySpark jobs for performance and scalability.
• Implementing data partitioning, indexing, and caching to enhance query performance.
• Monitoring and troubleshooting pipeline failures and performance bottlenecks.
Security & Compliance
• Ensuring secure data access, encryption, and masking using Thales CipherTrust.
• Implementing role-based access controls (RBAC) and data governance policies.
• Supporting metadata management and data quality initiatives.
Collaboration & Automation
• Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
• Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
• Supporting Denodo-based data virtualization for seamless data access.
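
A hedged sketch of the streaming-ingestion bullet above: PySpark Structured Streaming reading from Kafka and appending to an Iceberg table. Broker, topic, table, and checkpoint names are invented, and the Iceberg Spark runtime and Kafka connector are assumed to be on the classpath (as on a typical CDP cluster).

```python
# Hedged sketch: Kafka -> Iceberg streaming ingestion with PySpark.
# Assumes the Iceberg Spark runtime and Kafka connector are available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_to_iceberg").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "transactions")                # placeholder topic
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  "timestamp"))

query = (events.writeStream
         .format("iceberg")
         .outputMode("append")
         .option("checkpointLocation", "/tmp/chk/transactions")
         .toTable("lake.raw_transactions"))                   # placeholder table
```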

Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise

• 4-7 years of experience in big data engineering, data integration, and distributed computing.
• Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
• Proficiency in Python or Scala for data processing.
• Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM); a minimal Airflow example follows this list.
• Understanding of data security, encryption, and compliance frameworks.
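
To ground the orchestration bullet, a minimal Apache Airflow 2.x DAG with two dependent tasks. The DAG id, schedule, and shell commands are placeholders, not from the posting.

```python
# Hedged sketch of pipeline orchestration with Apache Airflow 2.x:
# a two-task DAG that runs a Spark job after an ingest step.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_lake_load",          # placeholder DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python /opt/jobs/ingest_raw.py",    # placeholder command
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/transform.py",
    )
    ingest >> transform   # transform runs only after ingest succeeds
```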

Preferred technical and professional experience

• Experience in banking or financial services data platforms.
• Exposure to Denodo for data virtualization and DGraph for graph-based insights.
• Familiarity with cloud data platforms (AWS, Azure, GCP).
• Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.

Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background.

OTHER RELEVANT JOB DETAILS

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience. For additional information about location requirements, please discuss with the recruiter following submission of your application.

09.05.2025
IBM

IBM Data Engineer-Data Platforms India, Maharashtra, Mumbai

Description:
Your role and responsibilities

What you’ll do: As a Data Engineer – Data Platform Services, you will be responsible for:
Data Migration & Modernization
• Leading the migration of ETL workflows from IBM DataStage to PySpark, ensuring performance optimization and cost efficiency (see the sketch after this list).
• Designing and implementing data ingestion frameworks using Kafka and PySpark, replacing legacy DataStage-based ETL pipelines.
• Migrating the analytical platform from IBM Integrated Analytics System (IIAS) to Cloudera Data Lake on CDP.
Data Engineering & Pipeline Development
• Developing and maintaining scalable, fault-tolerant, and optimized data pipelines on Cloudera Data Platform.
• Implementing data transformations, enrichment, and quality checks to ensure accuracy and reliability.
• Leveraging Denodo for data virtualization and enabling seamless access to distributed datasets.
Performance Tuning & Optimization
• Optimizing PySpark jobs for efficiency, scalability, and reduced cost on Cloudera.
• Fine-tuning query performance on Iceberg tables and ensuring efficient data storage and retrieval.
• Collaborating with Cloudera ML engineers to integrate machine learning workloads into data pipelines.
Security & Compliance
• Implementing Thales CipherTrust encryption and tokenization mechanisms for secure data processing.
• Ensuring compliance with Bank/regulatory body security guidelines, data governance policies, and best practices.
Collaboration & Leadership
• Working closely with business stakeholders, architects, and data scientists to align solutions with business goals.
• Leading and mentoring junior data engineers, conducting code reviews, and promoting best practices.
• Collaborating with DevOps teams to streamline CI/CD pipelines, using GitLab and Nexus Repository for efficient deployments.
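
A hedged sketch of what a simple DataStage-style job might look like after migration to PySpark, per the first bullet above: read, cleanse, join to a reference table, and write partitioned output. Paths and columns are invented. The broadcast hint on the small reference table is one common tuning choice when replacing lookup stages.

```python
# Illustrative PySpark equivalent of a simple legacy ETL job:
# read, cleanse, join to a reference table, write partitioned output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("datastage_migration_example").getOrCreate()

txns = spark.read.parquet("/data/landing/transactions")   # placeholder paths
ref = spark.read.parquet("/data/reference/branches")

cleaned = (txns
           .dropDuplicates(["txn_id"])
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .filter(F.col("amount") > 0))

# Broadcast the small reference table to avoid a shuffle-heavy join.
enriched = cleaned.join(F.broadcast(ref), on="branch_id", how="left")

(enriched.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("/data/curated/transactions"))
```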

Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise

• 12+ years of experience in Data Engineering, ETL, and Data Platform Modernization.
• Hands-on experience in IBM DataStage and PySpark, with a track record of migrating legacy ETL workloads.
• Expertise in Apache Iceberg, Cloudera Data Platform, and Big-data processing frameworks.
• Strong knowledge of Kafka, Airflow, and cloud-native data processing solutions.
• Experience with Denodo for data virtualization and Talend DQ for data quality.
• Proficiency in SQL, NoSQL, and Graph DBs (DGraph Enterprise).
• Strong understanding of data security, encryption, and compliance standards (Thales CipherTrust).
• Experience with DevOps, CI/CD pipelines, GitLab, and Sonatype Nexus Repository.
• Excellent problem-solving, analytical, and communication skills.

Preferred technical and professional experience

• Experience with Cloudera migration projects in banking or financial domains.
• Experience working with banking data models.
• Knowledge of Cloudera ML, Qlik Sense/Tableau reporting, and integration with data lakes.
• Hands-on experience with QuerySurge for automated data testing.
• Understanding of code quality and security best practices using CheckMarx.
• IBM, Cloudera, or AWS/GCP certifications in Data Engineering, Cloud, or Security.
• Knowledge of the “Meghdoot” cloud platform.
• Architectural design and recommendation of the best possible solutions.

Being an IBMer means you’ll be able to learn and develop yourself and your career, you’ll be encouraged to be courageous and experiment every day, all whilst having continuous trust and support in an environment where everyone can thrive whatever their personal or professional background.

OTHER RELEVANT JOB DETAILS

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience. For additional information about location requirements, please discuss with the recruiter following submission of your application.

Come find your dream high-tech job with Expoint. Our platform makes it easy to search for Data Engineer - Business Intelligence opportunities at IBM in Mumbai, India. Whether you're looking for a new challenge or want to work with a specific organization in a specific role, Expoint makes it easy to find your perfect job match. Connect with leading companies in your area today and advance your high-tech career! Sign up now and take the next step in your career journey with Expoint.