
EY GDS Consulting - AI & Data - Palantir Manager 
India, Kerala, Thiruvananthapuram 
173735398

29.01.2025

Your key responsibilities

  • Architecting big data solutions in a cloud environment using Azure Cloud services
  • Designing, developing, and deploying ETL pipelines to cloud services
  • Interacting with onshore teams, understanding their business goals, and contributing to the delivery of workstreams
  • Developing standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis
  • Defining and developing client-specific best practices around data management within a Hadoop environment on Azure cloud
  • Recommending design alternatives for data ingestion, processing, and provisioning layers

Skills and attributes for success

  • 8-11 years of experience architecting big data solutions, with a proven track record of driving business success
  • Hands-on expertise in cloud services like Microsoft Azure
  • Experience with Databricks, Python, and Azure Data Factory (ADF)
  • Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems such as Hadoop and Cassandra
  • Experience with BI and data analytics databases
  • Strong understanding of and familiarity with all Hadoop ecosystem components and Hadoop administration fundamentals
  • Strong understanding of underlying Hadoop Architectural concepts and distributed computing paradigms
  • Experience developing Hadoop APIs and MapReduce jobs for large-scale data processing
  • Hands-on programming experience in Apache Spark using SparkSQL and Spark Streaming or Apache Storm
  • Hands-on experience with major components such as Hive, Pig, Spark, and MapReduce
  • Experience with at least one NoSQL data store: HBase, Cassandra, or MongoDB
  • Experience with Hadoop clustering and auto-scaling
  • Good knowledge of Apache Kafka and Apache Flume
  • Knowledge of Spark-Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions
  • Knowledge of Apache Oozie-based workflows
  • Experience converting business problems and challenges into technical solutions, considering security, performance, scalability, etc.
  • Experience with enterprise-grade solution implementations
  • Knowledge of big data architecture patterns (Lambda, Kappa)
  • Experience in performance benchmarking of enterprise applications
  • Experience in data security (in transit and at rest) and knowledge of regulatory standards such as APRA and Basel
  • Ability to design and develop data ingestion programs that process large data sets in batch mode using Hive, Pig, and Sqoop
  • Ability to develop data ingestion programs that ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies
  • Strong knowledge of UNIX operating system concepts and shell scripting
  • Knowledge of microservices and API development

To qualify for the role, you must have

  • Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
  • Excellent written and verbal communication skills, both formal and informal.
  • Ability to multi-task under pressure and work independently with minimal supervision.
  • Must be a team player and enjoy working in a cooperative and collaborative team environment.
  • Adaptable to new technologies and standards.
  • Willingness to participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
  • Minimum of 8 years of hands-on experience in one or more of the areas above.
  • Minimum of 8 years of industry experience.



EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.