Key job responsibilities
* Design, implement and maintain data infrastructure including data modeling, ETL pipelines, and ongoing maintenance.
* Partner with various teams to build data pipelines from a wide variety of sources using AWS big data technologies (Lake Formation, Glue, S3, MWAA, Lambda, etc.).
* Develop automated solutions to minimize manual processes, with a focus on efficiency and scalability.
* Assist in the development of key metrics and performance indicators to measure overall performance and provide a foundation for continuous improvement.
* Build self-service reporting platforms, establishing automated processes for large-scale data analysis.
* Understand and write high quality queries to retrieve and analyze data (ongoing reporting and ad hoc requests).

This position may be based in Bellevue, WA, or Arlington, VA. Some travel, domestic and international, may be required.
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and MATLAB
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
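The last qualification, pulling data with SQL and processing it with Python, can be sketched as follows. This is a minimal illustration only, not part of the role description: sqlite3 stands in for a real warehouse connection (Redshift would typically be reached through a driver such as psycopg2), and the `daily_orders` table and its columns are hypothetical.

```python
import sqlite3

def load_daily_totals(conn):
    """Pull aggregated rows via SQL: total orders per day."""
    cur = conn.execute(
        "SELECT day, SUM(orders) AS total_orders "
        "FROM daily_orders GROUP BY day ORDER BY day"
    )
    return cur.fetchall()

# Build an in-memory example table so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_orders (day TEXT, orders INTEGER)")
conn.executemany(
    "INSERT INTO daily_orders VALUES (?, ?)",
    [("2024-01-01", 3), ("2024-01-01", 2), ("2024-01-02", 4)],
)

totals = load_daily_totals(conn)
print(totals)  # [('2024-01-01', 5), ('2024-01-02', 4)]
```

The pattern, aggregating in SQL and handing the reduced result to Python for downstream modeling, keeps the heavy lifting in the database rather than in application code.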