Key job responsibilities
- Work with cross-functional teams to gather and analyze the data requirements for building state-of-the-art AI models.
- Design, develop, and maintain data pipelines to collect, clean, and store data from multiple diverse sources.
- Implement data quality and validation mechanisms to ensure data and model integrity.
- Optimize data processing, storage, and retrieval solutions for scalability, cost, and performance tradeoffs.

A day in the life
- Work with data scientists, software engineers, and data professionals to gather and clarify requirements, set goals and success metrics, and check progress against requirements.
- Dive into exploration, profiling, and cleaning to support data analysis and model building.
- Design and implement data pipelines.
- Troubleshoot data issues and share your findings with stakeholders.

Benefits

1. Medical, Dental, and Vision Coverage
2. Maternity and Parental Leave Options
3. Paid Time Off (PTO)
4. 401(k) Plan
Basic qualifications

- 3+ years of data engineering experience
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with data modeling, warehousing and building ETL pipelines
- Experience working on and delivering end-to-end projects independently
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
Preferred qualifications

- Experience with Redshift, Oracle, NoSQL databases, etc.
- Master's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
- Familiarity and comfort with Python, SQL, Docker, and shell scripting; Java is preferred but not required