Responsibilities:
Apply quantitative methods to develop capabilities that meet line of business, risk management and regulatory requirements.
Maintain and continuously enhance capabilities over time to respond to the changing nature of portfolios, economic conditions and emerging risks.
Document and effectively communicate quantitative methods as part of ongoing engagement with key stakeholders, including the lines of business, risk managers, model validation, and technology.
Develop new models, analytic processes or systems approaches.
Create documentation for all activities and work with technology staff in the design of any system that runs the models developed.
Design and develop end-to-end models using clustering, regression, and deep learning methods in a distributed cloud computing environment to detect anomaly patterns in large-scale data and extract useful information to improve the customer segmentation process (see sketch 1 following this section).
Create data pipelines in the cloud using Terraform, PyTorch, scikit-learn, and NumPy (training, assessing, and deploying models), and use a natural language processing model to identify names and dates in documents to verify customer information automatically (see sketch 2 following this section).
Use feature selection to identify the important features in unstructured data, drawing on correlational, experimental, and survey research, to develop models that meet business and regulatory requirements (see sketch 3 following this section).
Create ETL data pipelines that streamline unstructured data from multiple sources, using APIs that bridge those sources, and convert the data to structured form in a PySpark, Python, and Hadoop environment (see sketch 4 following this section).
Work on application and database migration projects and resolve complex issues in the migration process through collaboration with the other teams involved.
Remote work may be permitted within a commutable distance from the worksite.
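Sketch 1: a minimal, single-machine illustration of the clustering-based anomaly flagging described above, using scikit-learn. The synthetic feature matrix, cluster count, and 99th-percentile threshold are assumptions for demonstration only; a production version would run distributed (for example, on Spark).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer feature matrix (rows = customers, columns = behavioral features).
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 8))
X_scaled = StandardScaler().fit_transform(X)

# Segment customers with k-means (the cluster count is an illustrative choice).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X_scaled)

# Flag anomalies as points unusually far from their assigned centroid,
# here beyond the 99th percentile of centroid distances.
distances = np.linalg.norm(X_scaled - kmeans.cluster_centers_[kmeans.labels_], axis=1)
anomalies = distances > np.quantile(distances, 0.99)
print(f"Flagged {anomalies.sum()} of {len(X)} customers as anomalous")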
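Sketch 2: a minimal illustration of extracting names and dates from a document with a named-entity-recognition model, as in the verification pipeline described above. spaCy is used here as a stand-in NLP model, and the sample text is invented.

import spacy

# Assumes the small English pipeline is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Account opened by Jane Smith on March 3, 2021."
doc = nlp(text)

# Pull out person names and dates for comparison against stored customer records.
names = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
dates = [ent.text for ent in doc.ents if ent.label_ == "DATE"]
print(names, dates)  # e.g. ['Jane Smith'] ['March 3, 2021']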
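Sketch 3: a minimal illustration of feature selection with scikit-learn. The synthetic data and the mutual-information ranking are assumptions, standing in for whatever correlational analysis the role actually requires.

import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Hypothetical engineered features and a binary target that depends on f0 and f3.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1_000, 6)), columns=[f"f{i}" for i in range(6)])
y = (X["f0"] + 0.5 * X["f3"] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

# Rank features by mutual information with the target and keep the strongest.
scores = mutual_info_classif(X, y, random_state=0)
print(sorted(zip(X.columns, scores), key=lambda kv: -kv[1]))  # f0 and f3 should rank first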
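Sketch 4: a minimal illustration of an API-to-structured ETL step with PySpark, as described above. The endpoint URL and output path are placeholders, and the records are assumed to arrive as a flat list of JSON objects.

import json
import urllib.request
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical REST endpoint acting as the bridge between source systems.
with urllib.request.urlopen("https://example.com/api/records") as resp:
    records = json.load(resp)  # list of semi-structured JSON objects

# Normalize into a structured, columnar table on the Hadoop cluster.
df = spark.createDataFrame(records)
df.write.mode("overwrite").parquet("hdfs:///data/structured/records")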
Required Skills & Experience:
Master's degree or equivalent in Mathematics, Statistics, Computer Science, Management Information Systems, or a related field; and
3 years of experience in the job offered or a related quantitative occupation.
Must include 3 years of experience in each of the following:
Designing and developing end-to-end models using clustering, regression, and deep learning methods in a distributed cloud computing environment to detect anomaly patterns in large-scale data and extract useful information to improve the customer segmentation process;
Creating data pipelines in the cloud using Terraform, PyTorch, scikit-learn, and NumPy (training, assessing, and deploying models), and using a natural language processing model to identify names and dates in documents to verify customer information automatically;
Using feature selection to identify the important features in unstructured data, drawing on correlational, experimental, and survey research, to develop models that meet business and regulatory requirements;
Creating ETL data pipelines that streamline unstructured data from multiple sources, using APIs that bridge those sources, and converting the data to structured form in a PySpark, Python, and Hadoop environment; and,
Working on application and database migration projects and resolving complex issues in the migration process through collaboration with the other teams involved.
If interested, apply online or email your resume, and reference the job title of the role and the requisition number.
1st shift (United States of America)