Expoint - all jobs in one place

Finding a high-tech job at the best companies has never been easier


Citi Group Big Data/Java Application Developer 
Canada, Ontario 
456510360

01.04.2025


Responsibilities:

  • Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised application systems and programs to meet specific business needs or user areas
  • Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
  • Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement
  • Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality
  • Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
  • Ensure essential procedures are followed and help define operating standards and processes
  • Serve as advisor or coach to new or lower level analysts
  • Operate with a limited level of direct supervision
  • Exercise independence of judgement and autonomy
  • Experience managing a data-focused product, ML platform, and/or UI/UX
  • Build UI components using Angular, HTML, CSS, Java, Spring Boot, Oracle, and NoSQL, OR design, develop, and optimize scalable distributed data processing pipelines using Apache Spark and Scala
  • Act as SME to senior stakeholders and/or other team members
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality.
  • Design and Develop Scalable Data Pipelines: Lead the design, development, and maintenance of high-performance, scalable data pipelines using Apache Spark and Scala to handle large-scale datasets in the financial industry.
  • ETL Process Implementation: Implement ETL (Extract, Transform, Load) processes for data integration, transforming complex data from multiple sources into structured, actionable insights.
  • Data Optimization and Performance Tuning: Monitor, troubleshoot, and optimize the performance of data pipelines and applications, ensuring high availability, low latency, and efficient resource usage.
  • Data Workflow Orchestration: Use Apache Airflow to orchestrate and automate complex data workflows, ensuring seamless integration and efficient execution of tasks across systems.
  • Real-Time Data Processing: Integrate real-time data streaming solutions using Apache Kafka for processing and managing large volumes of data in real time.
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
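The pipeline responsibilities above (Spark/Scala ETL, Airflow orchestration, Kafka streaming) follow a common extract-transform-load shape. As a rough illustration only — the table paths and column names below are hypothetical, not part of this role — a minimal batch job might look like:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object TradeEtlSketch {
  def main(args: Array[String]): Unit = {
    // Assumes a Spark runtime is available; paths and columns are invented for the sketch.
    val spark = SparkSession.builder()
      .appName("trade-etl-sketch")
      .getOrCreate()

    // Extract: load raw records in batch. A real-time variant would instead use
    // spark.readStream.format("kafka") against a Kafka topic.
    val raw = spark.read.parquet("/data/raw/trades")

    // Transform: drop invalid rows and aggregate notional per instrument.
    val summary = raw
      .filter(F.col("notional") > 0)
      .groupBy("instrument")
      .agg(F.sum("notional").alias("total_notional"))

    // Load: write structured output for downstream consumers.
    summary.write.mode("overwrite").parquet("/data/curated/trade_summary")

    spark.stop()
  }
}
```

In practice a job of this kind would be packaged and scheduled as a task in an Apache Airflow DAG, which handles retries, dependencies, and execution ordering across systems.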


Qualifications:

  • 5+ years of relevant experience
  • Experience in systems analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Working knowledge of consulting/project management techniques/methods
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
  • Hands-on relevant experience in Angular, HTML, CSS, Java, Spring Boot, Oracle, and NoSQL, OR in designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines.
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark and Hadoop ecosystem tools such as Hive, HDFS, and YARN.
  • Programming and Scripting: Advanced knowledge of Scala and a good understanding of Python for data engineering tasks.
  • Data Modeling and ETL Processes: Solid understanding of data modeling principles and ETL processes in big data environments.
  • Analytical and Problem-Solving Skills: Strong ability to analyze and solve performance issues in Spark jobs and distributed systems.
  • Version Control and CI/CD: Familiarity with Git, Jenkins, and other CI/CD tools for automating the deployment of big data applications.
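The functional-programming proficiency listed above typically means composing pure, immutable transformations rather than mutating state. A small self-contained Scala illustration (the `Trade` record and its fields are invented for the example):

```scala
object FpSketch {
  // A hypothetical record type standing in for a row of financial data.
  final case class Trade(instrument: String, notional: Double)

  // Pure function: no shared mutable state, just filter + group + sum.
  def totalsByInstrument(trades: List[Trade]): Map[String, Double] =
    trades
      .filter(_.notional > 0)      // drop invalid rows
      .groupBy(_.instrument)       // group rows by key
      .map { case (k, ts) => k -> ts.map(_.notional).sum }

  def main(args: Array[String]): Unit = {
    val trades = List(Trade("AAPL", 100.0), Trade("AAPL", 50.0), Trade("MSFT", -5.0))
    println(totalsByInstrument(trades))  // Map(AAPL -> 150.0)
  }
}
```

The same filter/group/aggregate style carries over almost directly to Spark's Dataset API, which is one reason Scala functional fluency is a core requirement here.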

Desirable Experience:

  • Real-Time Data Streaming: Experience with streaming platforms such as Apache Kafka or Spark Streaming.
  • Financial Services Context: Familiarity with financial data processing, ensuring scalability, security, and compliance requirements.
  • Leadership in Data Engineering:


Education:

  • Bachelor’s degree/University degree or equivalent experience


This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Applications Development


Time Type:

Full time
