You will work on developing solutions that present some of the unique challenges of space, size, and speed. You will implement data analytics using technologies that include, but are not limited to, AWS offerings such as Redshift, S3, and RDS. You will work with partner teams to build analytics products for our customers. You must be detail-oriented, have an aptitude for solving unstructured problems, and be able to work in a self-directed environment, owning tasks and driving them to completion.
You must also have excellent business and communication skills, enabling you to work with business owners to develop and define key business questions and to build the data sets that answer those questions. You will own the customer relationship around data and execute the tasks that this ownership entails, such as ensuring high data availability and low latency, documenting data details and transformations, and handling user notifications and training.
Key job responsibilities
A day in the life
- 4+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Bachelor's degree
- Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam
- Knowledge of distributed systems as it pertains to data storage and computing
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions