You will participate in all phases of scientific projects, including big data analysis, designing and implementing product experiments, automating analytical metric mechanisms, and communicating results. You will earn trust from our business partners by collaborating with them to define key research questions, communicating scientific approaches and findings, listening to and incorporating their feedback, and delivering successful solutions.
Key job responsibilities
- Design, implement and operate large-scale, high-volume, high-performance data structures for analytics and data science.
- Implement data ingestion routines, both real-time and batch, using best practices in data modeling, and Extract, Transform, Load (ETL)/Extract, Load, Transform (ELT) processes by leveraging AWS technologies and big data tools.
- Gather business and functional requirements and translate them into robust, scalable, and operable solutions with a flexible and adaptable data architecture.
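The batch-ingestion responsibility above can be sketched as a minimal extract-transform-load pipeline. This is an illustrative sketch only; the record type, field names, and in-memory "warehouse" are invented for the example and stand in for real sources and AWS targets:

```python
# Minimal batch ETL sketch: extract raw rows, transform them,
# and load the results into an in-memory "warehouse" list.
# All names here are hypothetical, not from any specific stack.
from dataclasses import dataclass


@dataclass
class OrderRecord:
    order_id: str
    amount_cents: int


def extract(raw_rows):
    """Extract: parse raw CSV-like strings into typed records."""
    for row in raw_rows:
        order_id, amount = row.split(",")
        yield OrderRecord(order_id=order_id.strip(), amount_cents=int(amount))


def transform(records):
    """Transform: drop non-positive amounts, convert cents to dollars."""
    for rec in records:
        if rec.amount_cents <= 0:
            continue  # filter invalid rows in this sketch
        yield {"order_id": rec.order_id, "amount_usd": rec.amount_cents / 100}


def load(rows, warehouse):
    """Load: append transformed rows to the target table."""
    warehouse.extend(rows)
    return warehouse


raw = ["A-1, 1250", "A-2, -300", "A-3, 4999"]
warehouse = load(transform(extract(raw)), [])
```

In a real deployment the load step would target a warehouse such as Redshift and the pipeline would be orchestrated (e.g., via AWS Glue), but the extract/transform/load separation is the same.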
A day in the life
1. Medical, Dental, and Vision Coverage
2. Maternity and Parental Leave Options
3. Paid Time Off (PTO)
4. 401(k) Plan
- A Bachelor's degree in a quantitative/technical field (e.g. Computer Science, Statistics, Engineering) or equivalent industry experience
- 3+ years of experience with demonstrated strength in ETL/ELT, data modeling, data warehouse technical architecture, infrastructure components, and reporting/analytics tools
- 3+ years of hands-on experience writing complex, highly optimized SQL queries across large data sets
- 3+ years of experience with scripting languages such as Python
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience architecting data lake and cloud data warehouses
- Experience with big data technologies (Hadoop, Hive, Kafka, Spark, etc.)
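As a small illustration of the SQL requirement above, the following sketch runs an aggregate query against an in-memory SQLite table. The schema, data, and threshold are invented for the example; production queries would run against warehouse-scale tables where indexing and query shape matter far more:

```python
# Hypothetical example: aggregate spend per user, keep only users
# above a threshold, ordered by total. Runs on SQLite in memory.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10), ("u1", 25), ("u2", 5), ("u2", 5), ("u3", 40)],
)

# GROUP BY collapses events per user; HAVING filters on the aggregate.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM events
    GROUP BY user_id
    HAVING total > 20
    ORDER BY total DESC
    """
).fetchall()
```

The same pattern (aggregate, filter on the aggregate, sort) is a common building block of the analytic queries this role describes, just at much larger scale.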