As a Data Engineer (DE) on the SCOT Simulation team, you will work closely with some of the brightest software engineers, scientists, economists, and product managers to solve highly complex supply chain challenges. You will work on one of the largest distributed systems in the world, supporting new use cases and designing its future state. You will determine how we can leverage AWS technologies (such as ECS and EMR) to allow our next-generation system to run elastically, on demand, and at a much larger scale, and you will define new frameworks (data APIs, plug-and-play technology) that allow supply chain teams to integrate their components into simulation in a low-touch manner. You will be responsible for building and maintaining data pipelines, warehouses, and lakes across our Simulation tool set, delivering reliable data infrastructure and meeting program goals. You will design, implement, and optimize ETL processes, data models, and analytical solutions for Amazon Supply Chain, using SQL, PySpark, Amazon Big Data Technologies (BDT), AWS services (such as Redshift, EMR, Glue), and data visualization tools.

Key job responsibilities
- Optimize data and queries to improve simulation runtime performance and cost structure
- Develop and manage scalable, automated, and fault-tolerant data solutions using technologies such as Cradle, EMR, Redshift, Glue, Andes, and S3
Qualifications
- 3+ years of data engineering experience
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with data modeling, warehousing and building ETL pipelines
- Experience working on and delivering end-to-end projects independently
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- Experience with Redshift, Oracle, NoSQL etc.