Expoint - all jobs in one place

JPMorgan Data Domain Architect Vice President 
United States, Ohio, Columbus 
678102174

26.06.2024

As a Data Domain Architect - Vice President in the Finance Decision Optimization group, you will collaborate actively with various stakeholders and functional teams to determine data and model requirements for constructing data pipelines and compiling complex predictive models and optimization routines into executable Python packages for prototype QA testing and production deployment. This role provides an opportunity to apply your data science, data engineering, and Python machine learning application development skills in a dynamic and competitive environment. Our team values precision and effective execution, and we look forward to your valuable contributions.

Job responsibilities

  • Build and compile data pipelines, complex predictive models, and optimization routines into executable packages for prototype QA testing and production deployment: This involves creating data-processing systems, integrating predictive models, and packaging them for testing and deployment, which requires knowledge of data science, data engineering, machine learning, and software development.
  • Assist in solution backtesting exercises for Fair Lending and other key stakeholders: This involves testing predictive models against historical data to see how well they would have performed in the past, a common practice in financial contexts to ensure models are robust and reliable.
  • Conduct transaction data analyses with big data technologies on Cloud platforms and turn massive amounts of data into actionable insights that drive business value: This involves analyzing large datasets using big data technologies like Spark on AWS EMR, Databricks, and Snowflake in the cloud, and transforming vast volumes of data into actionable insights that can inform business decisions.
  • Find opportunities to create and automate repeatable analysis for the business: This involves identifying areas where automated data analysis can improve efficiency and provide consistent, reliable insights.
  • Work on cross-functional teams and collaborate with internal and external stakeholders: This involves working with different teams within the organization and coordinating with external partners to achieve project goals.
  • Raise critical issues proactively to the business and technology partners: This involves identifying potential problems or issues and bringing them to the attention of relevant stakeholders.
  • Keep abreast of industry trends and provide recommendations for testing new and emerging technology: This involves staying up-to-date with the latest developments in the field and suggesting new technologies that could benefit the organization.
  • Support ongoing technology evaluation process and proof of concept projects: This involves assisting in the assessment of new technologies and contributing to early-stage projects that test these technologies.
  • Be a thought leader and mentor, and train junior staff: This involves providing guidance and support to less experienced team members and helping to develop their skills and knowledge.
  • Ensure project delivery within timelines and meet critical business needs: This involves managing projects effectively to ensure they are completed on time and meet the needs of the business.
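The first responsibility above, compiling pipeline steps into a single executable routine, can be sketched in plain Python. This is a minimal, hypothetical illustration; the step functions and sample data are invented for the example and are not the team's actual code:

```python
from functools import reduce

def parse_amounts(rows):
    """Convert raw string amounts to floats, dropping malformed rows."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue
    return out

def drop_refunds(rows):
    """Keep only positive-amount transactions."""
    return [r for r in rows if r["amount"] > 0]

def total_by_account(rows):
    """Aggregate amounts per account id."""
    totals = {}
    for r in rows:
        totals[r["account"]] = totals.get(r["account"], 0.0) + r["amount"]
    return totals

def compile_pipeline(*steps):
    """Compose the steps left-to-right into one executable callable."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

pipeline = compile_pipeline(parse_amounts, drop_refunds, total_by_account)

raw = [
    {"account": "A", "amount": "120.50"},
    {"account": "B", "amount": "-30.00"},   # refund, filtered out
    {"account": "A", "amount": "79.50"},
    {"account": "C", "amount": "oops"},     # malformed, dropped
]
print(pipeline(raw))  # {'A': 200.0}
```

Packaging the composed callable behind a console-script entry point would then make it deployable for QA testing in the way the bullet describes.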
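The backtesting responsibility can likewise be sketched as a toy walk-forward loop. The moving-average "model", the price series, and the directional scoring rule here are all invented stand-ins for the real predictive models the role describes:

```python
def moving_average_signal(history, window=3):
    """Toy model: predict an up-move if the latest value sits above
    the average of the most recent window."""
    recent = history[-window:]
    return history[-1] > sum(recent) / len(recent)

def backtest(series, model, window=3):
    """Walk forward through history: at each step, predict the next
    move using only past data, then score against what happened."""
    hits = total = 0
    for t in range(window, len(series) - 1):
        predicted_up = model(series[:t + 1], window)
        actual_up = series[t + 1] > series[t]
        hits += predicted_up == actual_up
        total += 1
    return hits / total

prices = [1, 2, 3, 4, 3, 2, 3, 4, 5, 4]
hit_rate = backtest(prices, moving_average_signal)
print(f"directional hit rate: {hit_rate:.2f}")
```

The key property, restricting each prediction to data available at that point in time, is what makes a backtest a fair estimate of how a model would have performed historically.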

Required qualifications, capabilities and skills

  • A minimum of 8 years of relevant professional experience as a software developer, data/ML engineer, data scientist, or business intelligence engineer.
  • A Bachelor's degree in Computer Science, Financial Engineering, MIS, Mathematics, Statistics, or another quantitative field.
  • Practical knowledge of the banking sector, specifically in areas of retail deposits, auto, card, and mortgage lending.
  • Exceptional problem-solving abilities, coupled with a clear understanding of business requirements. Must be able to effectively communicate complex information to a wide range of audiences.
  • Must be highly detail-oriented, with a proven track record of delivering tasks on schedule.
  • Excellent team player with strong interpersonal skills.
  • Ability to multitask and manage multiple priorities efficiently.
  • Capable of working in a fast-paced environment and collaborating with various teams using a consultative approach.
  • Self-motivated individual with a strong work ethic.
  • A proven track record of success, as demonstrated by professional or educational achievements.
  • Eagerness to stay updated with the latest advancements in cloud data technologies and machine learning.

Preferred qualifications, skills and capabilities

  • Proficiency in Python programming, with a strong understanding of object-oriented and functional programming concepts, and its application in data processing and machine learning.
  • Expertise in Linux bash shell command environment and Git for version control and collaborative coding.
  • Advanced SQL skills for complex query writing, data manipulation, and analysis, coupled with strong experience in data engineering, including ETL processes.
  • Proficiency with the Anaconda ecosystem, including Pandas, NumPy, and SciPy, and practical experience integrating and implementing machine learning algorithms using TensorFlow and XGBoost.
  • Extensive knowledge of Apache Spark, with experience optimizing Spark jobs for performance and scalability within Databricks, and hands-on experience with AWS EC2, EMR environments, and S3/EFS storage.
  • Ability to analyze data using tools such as Tableau and Alteryx to develop and automate reports or analyses that lead to actionable business insights, along with experience in data analysis, cleansing, modeling (including machine learning, time series, NLP), and visualization.
  • Knowledge of data modeling, data governance, and compliance standards relevant to handling sensitive data, and familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
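As a rough illustration of the SQL aggregation skills listed above, the following uses Python's standard-library sqlite3 module against a small, invented transactions table; real work would run comparable queries on platforms such as Snowflake or Databricks:

```python
import sqlite3

# In-memory database with a toy transactions table (data invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (
        account TEXT, category TEXT, amount REAL
    );
    INSERT INTO transactions VALUES
        ('A', 'deposit',  500.0),
        ('A', 'card',     -40.0),
        ('B', 'deposit',  300.0),
        ('B', 'mortgage', -900.0),
        ('A', 'card',     -60.0);
""")

# Per-account transaction count and net position in one query.
rows = conn.execute("""
    SELECT account,
           COUNT(*)              AS n_txn,
           ROUND(SUM(amount), 2) AS net
    FROM transactions
    GROUP BY account
    ORDER BY account
""").fetchall()
print(rows)  # [('A', 3, 400.0), ('B', 2, -600.0)]
```

Grouped aggregations like this are the starting point for turning raw transaction data into the actionable, account-level insights the posting mentions.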