Start dates for our internships in this posting include the following periods:
1. Winter (Starts January/February 2026)
2. Fall (Starts August/September 2026)

Key job responsibilities
As a Data Engineer Co-op, you will/may:
● Design, implement, and automate deployment of our distributed system for collecting and processing log events from multiple sources.
● Design data schema and operate internal data warehouses and SQL/NoSQL database systems.
● Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards that engineers, analysts, and data scientists use to drive key business decisions.
● Monitor and troubleshoot operational or data issues in the data pipelines.
● Drive architectural plans and implementation for future data storage, reporting, and analytic solutions.
● Develop code-based, automated data pipelines able to process millions of data points.
● Improve database and data warehouse performance by tuning inefficient queries.
● Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.
● Assist the team with troubleshooting, researching root causes, and thoroughly resolving defects when a problem occurs.

A day in the life
- Are 18 years of age or older
- Work 40 hours/week minimum and commit to a 12-week maximum internship
- Are enrolled in an academic program that is physically located in the United States
- Are enrolled in a co-op program at your university
- Experience with data transformation
- Experience with database, data warehouse or data lake solutions
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell, Scala)
- Currently working towards a Bachelor’s Degree in Statistics, Business Analytics, Data Analytics, Data Science, Computer Science, or another equivalent discipline, with an expected conferral date between October 2026 and December 2029.
- Experience with AWS
- Experience building data pipelines or automated ETL processes
- Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets
- Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments
- Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects
- Enrolled in a Master’s Degree or advanced technical degree program with an expected conferral date between October 2026 and December 2029.
- Previous technical internship(s), if applicable
- Can articulate the basic differences between data types (e.g., JSON/NoSQL, relational)
- Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs. dimensional model)