The role involves collaborating with engineering, research, and product teams to assess and measure the impact of ASE features. You will ensure safety best practices, review AI model performance, and provide guidance. You'll apply your expertise in technology's societal implications to contribute to research projects and improve the safety development ecosystem.

RESPONSIBILITIES INCLUDE:
- Drive and support implementation of safety policies defined by AI Safety teams.
- Collaborate with AIML, Product & Design, Privacy, Human-centered AI, policy leads, and business owners to define program requirements, set priorities, and establish the scope of AI or ML models.
- Set objectives and key results, along with the long-term strategy, for Responsible AI policy measurement, including benchmarks and review of the safety performance of AI models.
- Develop and document standard processes in collaboration with machine learning engineers and subject-matter experts, and, where appropriate, external organizations.