In this role, you will collaborate with data scientists, data analysts, software developers, data engineers, and project managers to understand requirements and translate them into scalable, reliable, and efficient data pipelines, data-processing workflows, and machine learning pipelines. You will be responsible for architecting and implementing large-scale systems and data pipelines with a focus on agility, interoperability, simplicity, and reusability. You should have deep knowledge of infrastructure, warehousing, data protection, security, data collection, processing, modeling, and metadata management, and be able to build end-to-end solutions that also support metadata logging, anomaly detection, data cleaning, and transformation.