Expoint - all jobs in one place

JPMorgan Software Engineer III - Big Data 
United States, New Jersey, Jersey City 
403516024

08.04.2025

Job responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code
  • Produces architecture and design artifacts for complex data applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies in the data space
  • Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Strong development experience in Java and Python, with expertise in developing, debugging, and maintaining code in a large corporate environment using modern programming and database querying languages
  • Experience in AWS technologies, including EKS, EMR, ECS, Lambdas, APIs, Redshift, S3, Athena, RDS, and OpenSearch
  • Experience in Continuous Integration and Continuous Deployment processes using tools such as Jenkins and Spinnaker
  • Demonstrated knowledge of software applications and technical processes within Cloud technologies
  • Proficient in designing and implementing scalable Big Data solutions using Hadoop, Spark, and NoSQL databases for efficient data processing and analysis
  • Skilled in developing robust ETL pipelines to ingest, transform, and load data into data warehouses or lakes, ensuring data quality and integrity
  • Experienced in handling batch and real-time data processing with tools like Apache Kafka for streaming and Apache Flink for real-time analytics
  • Proficient in using Databricks for data engineering and analytics, including working with notebooks and managing both external and managed tables
  • Knowledge of GraphQL APIs for efficient data querying and manipulation, enhancing client-server communication
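To illustrate the kind of ETL work the qualifications above describe, here is a minimal sketch of a batch ingest-transform-load step with basic data-quality checks. It uses only the Python standard library; the function names, field names, and in-memory "warehouse" are illustrative assumptions, not part of any JPMorgan system:

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into row dictionaries (the 'extract' stage)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Validate and normalize rows, dropping records that fail quality checks."""
    clean = []
    for row in rows:
        # Data-quality check: require a non-empty id.
        if not row.get("id"):
            continue
        # Data-quality check: require a parseable numeric amount.
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue
        clean.append({"id": row["id"], "amount": round(amount, 2)})
    return clean

def load(rows: list[dict], warehouse: dict) -> int:
    """Upsert rows into an in-memory 'warehouse' keyed by id; returns count loaded."""
    for row in rows:
        warehouse[row["id"]] = row
    return len(rows)

raw = "id,amount\nA1,10.5\n,3.0\nA2,not-a-number\nA3,7.25\n"
warehouse: dict = {}
loaded = load(transform(extract(raw)), warehouse)
print(loaded, sorted(warehouse))  # rows A1 and A3 survive the quality checks
```

In a production pipeline of the kind described here, the extract stage would read from S3 or a Kafka topic, the transform would run on Spark or Flink, and the load target would be Redshift or a Databricks table; the shape of the three stages stays the same.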

Preferred qualifications, capabilities, and skills

  • Knowledge of the financial services industry and its IT systems
  • AWS Certification