
JPMorgan Software Engineer III - AWS Data
India, Karnataka, Bengaluru
Job ID: 783097486
Posted: 31.07.2024

Job responsibilities

  • Executes software solutions, including design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure, high-quality production code on cloud systems following AWS best practices, and is responsible for deploying it efficiently through CI/CD pipelines
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies, adds to team culture of diversity, equity, inclusion, and respect
  • Participates in scrum team stand-ups, code reviews, and other ceremonies, and contributes to task completion and blocker resolution within the team
  • Handles critical, time-sensitive concurrent tasks with supervision and escalates issues appropriately
  • Writes test cases and leverages unit and integration testing while developing functionality and automation (a minimal test sketch follows this list)
  • Maintains technical acumen by pursuing formal and informal learning opportunities about technology, JPMorgan Chase products, and financial services
  • Identifies and implements continuous-improvement opportunities to improve delivery flow across product and technology
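
To illustrate the unit-testing expectation above, here is a minimal sketch of a PySpark transformation checked with pytest. The function `add_total` and its columns are hypothetical, invented for the example; only PySpark and pytest themselves are implied by the posting.

```python
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def add_total(df: DataFrame) -> DataFrame:
    """Hypothetical transformation: derive a total from quantity and unit price."""
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="module")
def spark():
    # Local single-threaded session, enough for fast unit tests.
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()


def test_add_total(spark):
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["quantity", "unit_price"])
    totals = {row["total"] for row in add_total(df).collect()}
    assert totals == {10.0, 4.5}
```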

Required qualifications, capabilities, and skills

  • Formal training or certification in software engineering concepts and 3+ years of applied experience
  • 5+ years of professional work experience designing and implementing data pipelines in a cloud environment is required
  • 2+ years of experience migrating/developing data solutions in the AWS cloud is required
  • 1+ years of experience building/implementing data pipelines using PySpark on Databricks or a similar cloud data platform (a minimal pipeline sketch follows this list)
  • Expert-level knowledge of SQL for writing complex, highly optimized queries across large volumes of data
  • Hands-on object-oriented programming experience using Python is required
  • Professional work experience building real-time data streams using Spark
  • Knowledge of or experience with architectural best practices for building data lakes
  • Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
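
To make the pipeline qualifications above concrete, below is a minimal sketch assuming a Databricks-style PySpark runtime with S3 access already configured. The bucket names, paths, and orders schema are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest: raw JSON events landed in S3 (path is illustrative).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: type the event timestamp and derive a date partition column.
orders = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("order_date", F.to_date("event_ts"))
)

# Aggregate with Spark SQL, per the SQL qualification.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, customer_id, SUM(amount) AS daily_spend
    FROM orders
    GROUP BY order_date, customer_id
""")

# Load: write to the curated zone, partitioned for efficient scans.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_spend/"
)
```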

Preferred qualifications, capabilities, and skills

  • Familiarity with Apache NiFi and Snowflake
  • Exposure to Terraform and AWS services such as Glue, SQS, SNS, and S3 (see the sketch below)
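
As a brief sketch of how the AWS services named in the last item fit together, here is an illustrative boto3 snippet: land an object in S3, publish a notification to SNS, and poll an SQS queue subscribed to that topic. The bucket, topic ARN, and queue URL are placeholders invented for the example.

```python
import json
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")
sqs = boto3.client("sqs")

BUCKET = "example-landing-bucket"  # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-topic"  # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Land a file in S3 and announce it on the SNS topic.
s3.put_object(Bucket=BUCKET, Key="orders/2024-07-31.json", Body=b"{}")
sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"key": "orders/2024-07-31.json"}))

# A downstream consumer reads the notification from the subscribed queue.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```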