Capital One Applied Researcher 
United States, New York, New York 
Job ID: 260614233

04.05.2024
NYC 299 Park Avenue (22957), United States of America, New York, New York
Applied Researcher I


In this role, you will:

  • Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.

  • Leverage a broad stack of technologies (PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more) to reveal the insights hidden within huge volumes of numeric and textual data.

  • Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.

  • Engage in high-impact applied research to take the latest AI developments and push them into the next generation of customer experiences.

  • Flex your interpersonal skills to translate the complexity of your work into tangible business goals.

The Ideal Candidate:

  • You love the process of analyzing and creating, but also share our passion to do the right thing. You know at the end of the day it’s about making the right decision for our customers.

  • Innovative. You continually research and evaluate emerging technologies. You stay current on published state-of-the-art methods, technologies, and applications and seek out opportunities to apply them.

  • Creative. You thrive on bringing definition to big, undefined problems. You love asking questions and pushing hard to find answers. You’re not afraid to share a new idea.

  • A leader. You challenge conventional thinking and work with stakeholders to identify and improve the status quo. You’re passionate about talent development for your own team and beyond.

  • Technical. You’re comfortable with open-source languages and are passionate about developing further. You have hands-on experience developing AI foundation models and solutions using open-source tools and cloud computing platforms.

  • You have a deep understanding of the foundations of AI methodologies.

  • You have experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF.

  • You bring an engineering mindset, demonstrated by a track record of delivering models at scale in terms of both training data and inference volumes.

  • You have experience delivering libraries, platform-level code, or solution-level code to existing products.

  • You have a track record of originating high-quality ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects.

  • You can own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects.

Basic Qualifications:

  • Currently has, or is in the process of obtaining, a PhD (with the expectation that the required degree will be obtained on or before the scheduled start date), or an M.S. with at least 2 years of experience in Applied Research

Preferred Qualifications:

  • PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields

  • LLM

    • PhD focused on NLP, or a Master's with 5 years of industrial NLP research experience

    • Multiple publications on topics related to the pre-training of large language models (e.g. technical reports of pre-trained LLMs, SSL techniques, model pre-training optimization)

    • Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)

    • Publications in deep learning theory

    • Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR

  • Optimization (Training & Inference)

    • PhD focused on topics related to optimizing training of very large deep learning models

    • Multiple years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression

    • Experience optimizing training for a 10B+ model

    • Deep knowledge of deep learning algorithm and/or optimizer design

    • Experience with compiler design

  • Finetuning

    • PhD focused on topics related to guiding LLMs toward specific tasks (Supervised Fine-Tuning, Instruction Tuning, Dialogue Fine-Tuning, Parameter Tuning)

    • Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance

    • Experience deploying a fine-tuned large language model

New York City (Hybrid On-Site): $230,000 - $262,500 for Applied Researcher I
San Francisco, California (Hybrid On-Site): $243,700 - $278,100 for Applied Researcher I

This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan.

Eligibility varies based on full- or part-time status, exempt or non-exempt status, and management level.

If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.