
Citi Group Big Data Engineer 
India, Maharashtra, Pune 
980220764

13.09.2024

Responsibilities:

  • Responsible for the design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop
  • Responsible for migrating all legacy workloads to the cloud platform
  • Work with data scientists to build ML pipelines from heterogeneous sources and provide engineering services for data science applications
  • Ensure automation through CI/CD across platforms, both in the cloud and on-premises
  • Define needs around maintainability, testability, performance, security, quality, and usability for the data platform
  • Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes
  • Convert SAS-based pipelines to PySpark or Scala for execution on Hadoop and non-Hadoop ecosystems
  • Tune Big Data applications on Hadoop and non-Hadoop platforms for optimal performance
  • Evaluate new IT developments and evolving business requirements and recommend appropriate systems alternatives and/or enhancements to current systems by analyzing business processes, systems and industry standards.
  • Apply an in-depth understanding of how data analytics collectively integrate within the sub-function, and coordinate and contribute to the objectives of the entire function.
  • Produce detailed analyses of issues where the best course of action is not evident from the information available but actions must be recommended or taken.
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:

  • 4+ years of total IT experience
  • 2+ years of experience with Hadoop (Cloudera)/big data technologies
  • Good knowledge of the Hadoop ecosystem and Big Data technologies
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
  • Experience in designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python
  • Experience with Spark programming (PySpark, Scala, or Java)
  • Good knowledge of building pipelines using Apache Spark
  • Familiarity with core provider services from AWS, Azure, or GCP, preferably having supported deployments on one or more of these platforms
  • Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning is required
  • Exposure to containerization and related technologies (e.g. Docker, Kubernetes)
  • Exposure to aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Proficient in Java or Python, with prior Apache Beam/Spark experience a plus
  • System-level understanding: data structures, algorithms, distributed storage and compute
  • Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills
  • Experience in Snowflake is a plus.


Education:

  • Bachelor’s degree/University degree or equivalent experience


This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family:

Applications Development


Time Type:

Full time
