Expoint - all jobs in one place

JPMorgan Lead Software Engineer - Data AWS Databricks 
India, Karnataka, Bengaluru 
645075419

Yesterday

Job Responsibilities:

  • Design and implement scalable data architectures using Databricks at enterprise scale.
  • Design and implement Databricks integration and interoperability with AWS and with platforms such as Snowflake, Immuta, and OpenAI.
  • Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
  • Develop and maintain data architecture standards, including data product interfaces, data contracts, and governance frameworks.
  • Implement data governance and security measures to ensure data quality and compliance with industry and regulatory standards.
  • Monitor and optimize the performance and scalability of data products and infrastructure.
  • Provide training and support to domain teams on data mesh principles and cloud data technologies.
  • Stay up-to-date with industry trends and emerging technologies in data mesh and cloud computing.

Required qualifications, capabilities, and skills:

  • Formal training or certification on software engineering concepts and 5+ years applied experience
  • 12+ years of applied experience in data engineering using enterprise tools and home-grown frameworks, including 5+ years specializing in end-to-end Databricks implementations.
  • 5+ years of experience with Databricks in an AWS cloud environment.
  • Experience as a Databricks solution architect, tech lead, or in a similar role in an enterprise environment.
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Influencer with a proven record of successfully driving change and transformation across organizational boundaries.
  • Ability to present and effectively communicate to Senior Leaders and Executives.
  • Experience in Python, Spark, and streaming technologies (Spark Streaming, Kafka, or Kinesis) is a must.
  • Deep understanding of Apache Spark, Delta Lake, Delta Live Tables (DLT), and other big data technologies.

Preferred qualifications, capabilities, and skills:

  • Databricks and AWS certification
  • Experience working in development teams using agile techniques, object-oriented development, and scripting languages is preferred.
  • Experience with LLMs and AI/ML is preferred.