Start dates for our co-ops in this posting include the following period: Winter (starts January/February 2025)

Key job responsibilities
During a Data Engineer Co-op, you will/may:
● Design, implement, and automate deployment of our distributed system for collecting and processing log events from multiple sources.
● Design data schemas and operate internal data warehouses and SQL/NoSQL database systems.
● Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards that engineers, analysts, and data scientists use to drive key business decisions.
● Monitor and troubleshoot operational or data issues in the data pipelines.
● Drive architectural plans and implementation for future data storage, reporting, and analytic solutions.
● Develop code-based automated data pipelines able to process millions of data points (a minimal pipeline sketch follows this list).
● Improve database and data warehouse performance by tuning inefficient queries.
● Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.
● Provide assistance to the team with troubleshooting, researching root causes, and thoroughly resolving defects when problems arise.
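As a rough, non-authoritative sketch of what the pipeline work above can look like, the following Python snippet parses JSON-lines log events and loads them into a relational table. The event fields (ts, source, level, message) and the sqlite3 stand-in are assumptions for illustration, not details from this posting.

```python
import json
import sqlite3

def load_events(lines, conn):
    """Parse JSON-lines log events and load them into a relational table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (ts TEXT, source TEXT, level TEXT, message TEXT)"
    )
    for line in lines:
        event = json.loads(line)  # one JSON object per line
        conn.execute(
            "INSERT INTO events VALUES (?, ?, ?, ?)",
            (event["ts"], event["source"], event["level"], event["message"]),
        )
    conn.commit()

if __name__ == "__main__":
    sample = ['{"ts": "2025-01-15T09:00:00Z", "source": "web", "level": "ERROR", "message": "timeout"}']
    conn = sqlite3.connect(":memory:")
    load_events(sample, conn)
    print(conn.execute("SELECT COUNT(*) FROM events").fetchone())  # (1,)
```

In practice such a job would run on a schedule, write to a production warehouse rather than an in-memory database, and handle malformed events.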
A day in the life

Basic qualifications
- Experience with database, data warehouse, or data lake solutions
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell, Scala)
- Are 18 years of age or older
- Work 40 hours/week minimum and commit to a 6-month internship maximum
- Experience with data transformation (a short illustrative sketch follows this list)
- Currently enrolled in a co-op program in the US
- Currently enrolled in or will receive a Bachelor’s degree in Computer Science, Computer Engineering, Information Management, Information Systems, or an equivalent technical discipline, with a conferral date between October 2025 and December 2028
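As a hedged illustration of the data-transformation experience listed above, this hypothetical Python snippet normalizes raw records into a consistent shape; every field name and conversion rule here is an assumption made for the example.

```python
from datetime import datetime, timezone

def transform(record: dict) -> dict:
    # Hypothetical cleanup rules: ISO-8601 UTC timestamps, trimmed lowercase
    # country codes, and money stored as integer cents.
    ts = datetime.fromtimestamp(int(record["epoch_ms"]) / 1000, tz=timezone.utc)
    return {
        "ts": ts.isoformat(),
        "country": record["country"].strip().lower(),
        "amount_cents": round(float(record["amount_usd"]) * 100),
    }

if __name__ == "__main__":
    raw = {"epoch_ms": "1736931600000", "country": " US ", "amount_usd": "19.99"}
    print(transform(raw))
    # {'ts': '2025-01-15T09:00:00+00:00', 'country': 'us', 'amount_cents': 1999}
```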

Preferred qualifications
- Knowledge of the basics of designing and implementing a data schema, such as normalization and relational vs. dimensional modeling
- Experience building data pipelines or automated ETL processes
- Experience writing and optimizing SQL queries with large-scale, complex datasets (a short query-tuning sketch follows this list)
- Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments
- Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects
- Previous technical internship(s), if applicable
- Prior experience with AWS
- Can articulate the basic differences between data types (e.g., JSON/NoSQL vs. relational)
- Enrolled in a Master’s degree or other advanced technical degree program with a conferral date between October 2025 and December 2028
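To illustrate the SQL-optimization qualification above in miniature, here is a hypothetical sqlite3-based sketch showing how an index on the filtered column changes the query plan from a full table scan to an index search; the table, column, and index names are assumptions for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"
# Without an index, the planner scans every row of orders.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# With an index on the filtered column, it becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

Checking the plan before and after is a quick, low-cost way to confirm that a tuning change actually took effect.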