What You’ll Do
Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity
Support the design and development of scalable data architectures and systems that extract, store, and process large amounts of data
Collaborate with Data Scientists, Machine Learning Engineers, Business Analysts, and/or Product Owners to understand their requirements and provide efficient solutions for data exploration, analysis, and modeling
Implement testing, validation and pipeline observability to ensure data pipelines are meeting customer SLAs
Use cutting-edge technologies such as Python, Scala, Spark, and a variety of AWS services to develop modern data pipelines supporting Machine Learning and Artificial Intelligence
Basic Qualifications:
Bachelor’s Degree
At least 4 years of experience in application development (Internship experience does not apply)
At least 1 year of experience in big data technologies
Preferred Qualifications:
5+ years of experience building data pipelines using Python, Java, or Scala
2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
2+ years of experience using Spark or PySpark
2+ years of data warehousing experience (Redshift or Snowflake)
3+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices
Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.