Expoint - all jobs in one place



JPMorgan - Lead Software Engineer
London, England, United Kingdom
Job ID: 772982694


As a Lead Software Engineer at JPMorgan Chase within the Data Pipeline Technology Engineering team, you will have the opportunity to significantly influence your career trajectory and embark on a journey where you can redefine the boundaries of possibility. You will be instrumental in the development of our data pipeline applications on AWS, working closely with business and product teams to create robust data transformation (ETL) functionalities that cater to intricate business requirements. Your proficiency will steer our shift to the public cloud, with a focus on AWS product usage, authentication and authorization best practices, certificate management, performance enhancement, and database migration tools.

Job Responsibilities:

  • Advise on and make custom configuration changes in one or two tools to generate a product at the business's or customer's request.
  • Update logical or physical data models based on new use cases.
  • Produce architecture and design artifacts for complex applications while ensuring design constraints are met by software code development.
  • Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets for continuous improvement of software applications and systems.
  • Support business engineering leads through solution design and build out technical architectures to enable key business benefits.
  • Build complex distributed systems using Java (11/17) on AWS.
  • Build out real-world architectures that business engineering teams buy into and build their applications around.
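To give a flavour of the data transformation (ETL) work in the responsibilities above, here is a minimal, illustrative Java sketch of a transform step. The record and class names are assumptions for illustration, not part of any actual JPMorgan codebase:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of an extract-transform-load step: raw trade records
// are filtered for validity and aggregated into a per-account total.
public class TradeEtl {

    // Hypothetical input shape for a raw trade record.
    public record RawTrade(String account, String currency, double amount) {}

    // Transform: drop invalid rows, then sum amounts per account.
    public static Map<String, Double> transform(List<RawTrade> raw) {
        return raw.stream()
                .filter(t -> t.account() != null && t.amount() != 0.0)
                .collect(Collectors.groupingBy(
                        RawTrade::account,
                        Collectors.summingDouble(RawTrade::amount)));
    }

    public static void main(String[] args) {
        var out = transform(List.of(
                new RawTrade("ACC1", "USD", 100.0),
                new RawTrade("ACC1", "USD", 50.0),
                new RawTrade(null, "EUR", 10.0)));
        System.out.println(out); // {ACC1=150.0}
    }
}
```

In a real pipeline the transform would typically run inside a Spark job on AWS rather than in-process, but the shape of the logic is the same.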

Required Qualifications, Capabilities, and Skills:

  • Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming.
  • Strong knowledge of multi-threading and high-volume batch processing.
  • Proficiency in performance tuning for Java/Python and Spark.
  • Deep knowledge of AWS products/services and Kubernetes/container technologies and their optimal use for specific workloads.
  • Real-world experience in building applications on AWS across multi-AZ, multi-region, and multi-cloud vendor environments.
  • Excellent understanding of modern engineering practices to leverage key benefits of Public Cloud (e.g., auto-scaling).
  • A mindset geared towards a fantastic end-to-end engineering experience supported by excellent tooling and automation.
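The multi-threading and high-volume batch processing called for above can be sketched in plain Java (an illustrative pattern only; pool size and batch shape are assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Illustrative sketch of high-volume batch processing: records are split
// into fixed-size batches and processed concurrently on a bounded pool.
public class BatchProcessor {

    public static long processAll(List<Integer> records, int batchSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < records.size(); i += batchSize) {
                List<Integer> batch =
                        records.subList(i, Math.min(i + batchSize, records.size()));
                // Each batch is summed on a worker thread.
                futures.add(pool.submit(() ->
                        batch.stream().mapToLong(Integer::longValue).sum()));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> data = IntStream.rangeClosed(1, 1000).boxed().toList();
        System.out.println(processAll(data, 100)); // 500500
    }
}
```

Production code would add per-batch error handling, back-pressure, and metrics; the point here is only the batch-per-worker decomposition.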

Preferred Qualifications, Capabilities, and Skills:

  • Good understanding of the Big Data stack (Spark/Iceberg).
  • Ability to learn new technologies and patterns on the job and apply them effectively.
  • Good understanding of established patterns, such as stability patterns/anti-patterns, event-based architecture, CQRS, and process orchestration.
  • Experience in building out technical architectures that align with business engineering requirements.
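Of the patterns listed above, CQRS (Command Query Responsibility Segregation) can be shown in a few lines: writes append events and update a projection, while reads touch only the projection. All names here are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal CQRS sketch: the command side appends events and projects them
// into a separate read model; the query side reads only the projection.
public class AccountCqrs {

    record Deposited(String account, long amount) {} // event (illustrative)

    private final List<Deposited> eventLog = new ArrayList<>();  // write side
    private final Map<String, Long> balances = new HashMap<>();  // read model

    // Command side: validate, append the event, then update the projection.
    public void handleDeposit(String account, long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        Deposited event = new Deposited(account, amount);
        eventLog.add(event);
        balances.merge(event.account(), event.amount(), Long::sum);
    }

    // Query side: never touches the event log, only the read model.
    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }
}
```

In an event-based architecture the projection would usually be updated asynchronously from a message stream rather than in the same call, which is where process orchestration comes in.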