In this role, you will:
Partner with a cross-functional team of data scientists, software engineers, machine learning engineers, and product managers to deliver AI-powered products that change how customers interact with their money.
Leverage a broad stack of technologies — PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more — to reveal the insights hidden within huge volumes of numeric and textual data.
Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
Engage in high-impact applied research to take the latest AI developments and push them into the next generation of customer experiences.
Flex your interpersonal skills to translate the complexity of your work into tangible business goals.
The Ideal Candidate:
You love the process of analyzing and creating, but also share our passion to do the right thing. You know at the end of the day it’s about making the right decision for our customers.
Innovative. You continually research and evaluate emerging technologies. You stay current on published state-of-the-art methods, technologies, and applications and seek out opportunities to apply them.
Creative. You thrive on bringing definition to big, undefined problems. You love asking questions and pushing hard to find answers. You’re not afraid to share a new idea.
A leader. You challenge conventional thinking and work with stakeholders to identify and improve the status quo. You’re passionate about talent development for your own team and beyond.
Technical. You’re comfortable with open-source languages and are passionate about developing further. You have hands-on experience developing AI foundation models and solutions using open-source tools and cloud computing platforms.
You have a deep understanding of the foundations of AI methodologies.
Experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF.
An engineering mindset, as shown by a track record of delivering models at scale in terms of both training data and inference volumes.
Experience delivering libraries, platform-level code, or solution-level code to existing products.
A professional with a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first author publications or projects.
The ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects.
Basic Qualifications:
PhD plus at least 2 years of experience in applied research, or M.S. plus at least 4 years of experience in applied research
Preferred Qualifications (the relevant set depends on the role's focus):
PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields
LLM
PhD with a focus on NLP, or Master's with 5 years of industrial NLP research experience
Multiple publications on topics related to the pre-training of large language models (e.g., technical reports of pre-trained LLMs, SSL techniques, model pre-training optimization)
Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
Publications in deep learning theory
Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
Behavioral Models
PhD focus on topics in geometric deep learning (Graph Neural Networks, Sequential Models, Multivariate Time Series)
Multiple papers on topics relevant to training models on graph and sequential data structures at KDD, ICML, NeurIPS, or ICLR
Experience scaling graph models to more than 50M nodes
Experience with large scale deep learning based recommender systems
Experience with production real-time and streaming environments
Contributions to common open-source frameworks (e.g., PyTorch Geometric, DGL)
Proposed new methods for inference or representation learning on graphs or sequences
Experience working with datasets of 100M+ users
Optimization (Training & Inference)
PhD focused on topics related to optimizing training of very large deep learning models
Multiple years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression
Experience optimizing training for a 10B+ parameter model
Deep knowledge of deep learning algorithmic and/or optimizer design
Experience with compiler design
Finetuning
PhD focused on topics related to guiding LLMs toward further tasks (supervised finetuning, instruction tuning, dialogue finetuning, parameter tuning)
Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
Experience deploying a fine-tuned large language model
Data Preparation
Publications studying tokenization, data quality, dataset curation, or labeling
Contribution to a major open source corpus
Contribution to open source libraries for data quality, dataset curation, or labeling
Eligibility varies based on full- or part-time status, exempt or non-exempt status, and management level.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.