Tools used include:
1. SQL (Redshift)
2. AWS Services - S3, Lambda, Athena, Glue, EC2, ECS, Kinesis, CDK
3. Python, Linux
4. ETL tools (internal)
5. Data visualization – QuickSight
Key job responsibilities
· Design, build, and maintain data pipelines and reporting, analytics, and automation tools on AWS
· Work with a variety of data sources and large data sets; pull data with efficient queries and provide a holistic, consistent view of the data to support analytical insights
· Debug report issues, unblock workflows and communicate with other teams and customers to provide status updates
· Automate existing processes where needed
· Work with our customers on incoming reactive requests to understand their needs and design reporting, analytics, and data pipeline solutions that exceed expectations
· Proactively develop flexible and scalable solutions using AWS services and Amazon internal tools
· Stay up-to-date on the latest AWS services and technologies
· Ratio of reactive, stakeholder-initiated workload to proactive, project-related workload: 50:50
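The responsibilities above center on pulling data with SQL and post-processing it in Python. A minimal sketch of that pattern, using the standard-library sqlite3 module as a stand-in for Redshift, with a hypothetical `orders` table and sample values invented for illustration:

```python
import sqlite3

def load_orders(conn):
    # Hypothetical sample data standing in for a warehouse table.
    conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("NA", 120.0), ("NA", 80.0), ("EU", 50.0)],
    )

def revenue_by_region(conn):
    # Pull aggregated data with SQL, then reshape it in Python.
    rows = conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
    ).fetchall()
    return {region: total for region, total in rows}

conn = sqlite3.connect(":memory:")
load_orders(conn)
print(revenue_by_region(conn))  # → {'EU': 50.0, 'NA': 200.0}
```

In a real pipeline the connection would point at Redshift (e.g. via a driver such as psycopg2) and the aggregation would run over far larger tables; the division of labor — aggregate in SQL, reshape in Python — is the part this sketch illustrates.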
Qualifications
- Experience writing complex SQL queries
- Experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience in the data/BI space
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience building CDK pipelines and implementing SSAS, SSRS, and SSIS on EC2
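"Complex SQL" in practice usually means joins, aggregation, and window functions. A small illustration of a window-function query, again using sqlite3 (3.25+ for window-function support) with a hypothetical `sales` table and made-up figures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("alice", "2024-01", 100.0),
        ("alice", "2024-02", 150.0),
        ("bob", "2024-01", 90.0),
        ("bob", "2024-02", 60.0),
    ],
)

# Running total per rep: SUM over a window partitioned by rep
# and ordered by month -- a typical "complex SQL" building block.
rows = conn.execute(
    """
    SELECT rep, month,
           SUM(amount) OVER (
               PARTITION BY rep ORDER BY month
           ) AS running_total
    FROM sales
    ORDER BY rep, month
    """
).fetchall()

for rep, month, running_total in rows:
    print(rep, month, running_total)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape carries over to Redshift, which supports standard SQL window functions.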