
JPMorgan Databricks Cloud Big Data Engineer III
United States, New Jersey, Jersey City
Job ID: 862720496
Posted: 07.09.2024

Job responsibilities

  • Supports review of controls to ensure sufficient protection of enterprise data.
  • Advises on and makes custom configuration changes in one or two tools to generate a product at the business's or customer's request.
  • Updates logical or physical data models based on new use cases.
  • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
  • Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
  • Identifies hidden problems and patterns in data proactively and uses these insights to drive improvements to coding hygiene and system architecture.
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies.
  • Adds to team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years of applied experience.
  • Experience with AWS EMR, Apache Spark (Scala or Python), Kafka, and Infrastructure as Code (IaC) tools such as Terraform (see the sketch after this list).
  • Knowledge of AWS Lake Formation, the AWS Glue Data Catalog, and fine-grained access control.
  • Hands-on practical experience in system design, application development, testing, and operational stability.
  • Experience across the data lifecycle.
  • Significant experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis.
  • Proficient in coding in one or more languages (e.g., Python, PySpark).
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
  • Overall knowledge of the Software Development Life Cycle.
  • Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security.
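
As a rough illustration of the Spark-with-Python experience listed above, a minimal PySpark batch job might look like the sketch below. The bucket paths, column names, and app name are hypothetical placeholders, not details of this role.

```python
# A minimal PySpark batch job of the kind this role describes: read raw
# events, apply a simple transformation, and write partitioned Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")  # hypothetical job name
    .getOrCreate()
)

# Read raw JSON events from an S3 landing zone (placeholder bucket).
raw = spark.read.json("s3://example-bucket/landing/events/")

# Basic cleansing: drop malformed rows and derive a partition column.
cleaned = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write back as Parquet, partitioned by date, for downstream querying.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)

spark.stop()
```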

Preferred qualifications, capabilities, and skills

  • AWS & Terraform Certification.
  • Data Mesh Architecture.
  • 1+ years of experience building/implementing data pipelines using Databricks such as Unity Catalog, Databricks workflow, Databricks Live Table etc.
  • Experience with Data Orchestrator tool like Airflow.
  • Familiarity with data governance and metadata management.
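
As a rough illustration of the orchestration experience mentioned above, a minimal Airflow DAG might look like the following sketch. The DAG id, schedule, and task body are illustrative placeholders; in practice the task might submit a job to EMR or Databricks via the appropriate operator.

```python
# A minimal Airflow DAG sketching daily orchestration of a (hypothetical)
# Spark job. DAG id, schedule, and the callable are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_spark_job():
    # Placeholder: a real task would submit work to EMR or Databricks
    # via the appropriate operator or API client.
    print("submitting hypothetical Spark job")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="run_spark_job",
        python_callable=run_spark_job,
    )
```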