
EY TechOps-DE-CloudOpsAMS-GWPolicy-Senior 
India, West Bengal, Kolkata 

We’re looking for a Senior Data Architect with 12+ years of progressive experience in data engineering, data warehousing, or big data roles, including at least 5 years focused specifically on data architecture. The ideal candidate is a technical expert in the modern big data ecosystem, with proven mastery of Hadoop, Hive, Spark, and Apache Iceberg, and will be responsible for defining our strategic data architecture, setting technical standards, and leading the implementation of robust, scalable, and efficient data solutions.

Your key responsibilities

  • Work as a project manager, leading the design and evolution of our large-scale data lakehouse architecture to ensure it is scalable, reliable, and cost-effective.
  • Provide technical leadership and mentorship to data engineers and analysts. Collaborate closely with software engineers and business stakeholders to understand requirements and deliver effective data solutions.
  • Apply expertise in data modelling, data mapping, data profiling, and metadata management.
  • Architect and tune high-performance data processing pipelines. Identify and resolve complex performance issues in distributed computing environments involving Spark execution, Hive query optimization, and Iceberg metadata management.
  • Expert-level experience with Hadoop (HDFS, YARN), Hive (including LLAP, Tez), Spark (Structured Streaming, Spark SQL), and Apache Iceberg.
  • Expert-level proficiency in building and optimizing large-scale data processing pipelines using Spark (PySpark/Scala).
  • Deep understanding of Spark internals, execution plans, and tuning (a brief sketch follows this list).
  • Extensive experience in writing, optimizing, and managing HiveQL scripts. Deep knowledge of Hive architecture, file formats (ORC, Parquet), and performance tuning.
  • Strong hands-on experience with the core Hadoop ecosystem (HDFS, YARN, MapReduce) and an understanding of cluster management fundamentals.
  • Hands-on experience designing and implementing data lakes using Apache Iceberg as the table format. Must understand features like schema evolution, hidden partitioning, time travel, and performance benefits over Hive tables (illustrated in the second sketch after this list).
  • Experience in either Python (PySpark) or Scala.
  • Mastery of SQL and experience optimizing complex queries on massive datasets.
  • Experience with at least one major cloud platform: AWS (EMR, S3, Glue), Azure (Databricks, ADLS, Synapse), or GCP (Dataproc, BigQuery, GCS).
  • Interface directly with onsite teams to understand requirements and determine optimal solutions.
  • Create technical solutions that meet business needs by translating requirements and evaluating innovative solution options.
  • Lead and mentor a team through the design, development, and delivery phases, and keep the team cohesive in high-pressure situations.
  • Contribute to business development activities such as creating proofs of concept (POCs) and points of view (POVs), assisting with proposal writing and service-offering development, and producing compelling PowerPoint content for presentations.
  • Create and maintain detailed architecture diagrams, data flow maps, and other technical documentation.
  • Participate in organization-level initiatives and operational activities.
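
To make the Spark internals and tuning expectation concrete, here is a minimal, hypothetical PySpark sketch of the kind of plan inspection and tuning work the role involves. The table paths and the dim_id join column are illustrative assumptions, not part of this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("plan-tuning-sketch").getOrCreate()

    # Illustrative inputs: a large fact table and a small dimension table.
    facts = spark.read.parquet("/data/facts")
    dims = spark.read.parquet("/data/dims")

    # Inspect the physical plan; a SortMergeJoin preceded by Exchange
    # (shuffle) nodes on both sides is a common tuning target.
    facts.join(dims, "dim_id").explain(mode="formatted")

    # Broadcasting the small side replaces the shuffle join with a
    # BroadcastHashJoin; re-check the plan to confirm.
    tuned = facts.join(F.broadcast(dims), "dim_id")
    tuned.explain(mode="formatted")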
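
Similarly, the Iceberg features called out above can be sketched in a few Spark SQL statements. The catalog and table names and the snapshot ID below are hypothetical placeholders; actual configuration depends on your environment.

    # Assumes a Spark session already configured with an Iceberg catalog
    # named "demo" (catalog/table names and snapshot ID are placeholders).

    # Hidden partitioning: partition by day(event_ts) without storing a
    # separate date column or rewriting queries to filter on it.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS demo.db.events (
            id BIGINT, event_ts TIMESTAMP, payload STRING)
        USING iceberg
        PARTITIONED BY (days(event_ts))
    """)

    # Schema evolution: add a column in place, with no table rewrite.
    spark.sql("ALTER TABLE demo.db.events ADD COLUMN source STRING")

    # Time travel: query the table as of an earlier snapshot.
    spark.sql("SELECT * FROM demo.db.events VERSION AS OF 123456789").show()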

Skills and attributes for success

  • Use an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporate clients.
  • Strong communication, presentation, and team-building skills, and experience producing high-quality reports, papers, and presentations.
  • Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

Ideally, you’ll also have

  • 8-10 years of experience in the banking and capital markets sector preferred.
  • Cloud architect certifications
  • Experience using Agile methodologies.
  • Experience with real-time stream processing technologies (Kafka, Flink, Spark Streaming); a short streaming sketch follows this list.
  • Experience with containerization and orchestration tools (Docker, Kubernetes).
  • Experience with DevOps/DataOps principles and CI/CD pipelines for data projects.
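
As a rough illustration of the stream-processing experience listed above, the following hypothetical PySpark Structured Streaming sketch reads from Kafka and writes checkpointed Parquet output. The broker address, topic, and paths are placeholder assumptions, and the job requires the spark-sql-kafka connector package on the classpath.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

    # Read a Kafka topic as a streaming DataFrame (placeholder broker/topic).
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")
        .load()
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    )

    # Write to Parquet with a checkpoint so the query can recover and
    # avoid duplicating output after restarts.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "/tmp/events/out")
        .option("checkpointLocation", "/tmp/events/checkpoint")
        .start()
    )
    query.awaitTermination()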

What we look for

  • A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment
  • An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals in the only integrated global transaction business worldwide
  • Opportunities to work with EY Advisory practices globally with leading businesses across a range of industries

You will work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

  • Support, coaching and feedback from some of the most engaging colleagues around
  • Opportunities to develop new skills and progress your career
  • The freedom and flexibility to handle your role in a way that’s right for you



EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.