Job Responsibilities:
- Collaborate with all of JPMorgan’s lines of business and functions to deliver software solutions.
- Experiment with, architect, develop, and productionize efficient data pipelines, data services, and data platforms that contribute to the business.
- Design and implement highly scalable, efficient, and reliable data processing pipelines, and perform analysis to deliver insights that drive and optimize business results.
- Act on previously identified opportunities to converge physical, IT, and data security architecture to manage access.
- Mentor and coach junior engineers and technologists.
- Champion the firm’s culture of diversity, equity, inclusion, and respect.
Required qualifications, capabilities and skills:
- Formal training or certification on large scale technology program concepts and 10+ years of applied experience in data technologies, plus 5+ years of experience leading technologists to manage, anticipate, and solve complex technical problems within your domain of expertise.
- Solid programming skills in Java, Python, or other equivalent languages.
- Experience across the data lifecycle, including building data frameworks and working with data lakes.
- Experience with batch and real-time data processing using Spark or Flink.
- Working knowledge of AWS Glue and EMR for data processing.
- Experience working with Databricks.
- Hands-on experience with Python/Java and PySpark.
- Working experience with both relational and NoSQL databases.
- Experience building ETL data pipelines for both batch and real-time processing, as well as data warehousing.
- Strong analytical and critical thinking skills.
- Self-motivated team player with strong communication skills.
Preferred qualifications, capabilities and skills:
- Experience with cloud and container technologies: Google Cloud, Amazon Web Services, Azure, Docker, Kubernetes.
- Experience in big data technologies: Hadoop, Hive, Spark, Kafka.
- Experience in distributed systems design and development.