Required/Minimum Qualifications
- Bachelor's Degree in Business, Linguistics, Humanities, Research, Journalism, Policy Development, or related field AND 4+ years of experience in policy development, trust and safety, or content moderation
- OR equivalent experience.
- 2+ years of experience applying journalism, research, and writing skills to create and document guidelines and procedures.
- 2+ years of experience operationalizing policy and/or research into repeatable procedures using a structured, research-based approach.
Preferred Qualifications:
- Academic or real-world experience in Generative AI, Responsible AI, Digital Safety, Security, Compliance or Risk Management.
- Experience working with internationally and geographically distributed teams.
- Experience identifying issues, investigating, developing a solution, implementing change, and monitoring results.
- Experience influencing others where support is critical to success.
- Experience working with human-generated or AI-generated harmful materials across multiple media types, and creating objective recommendations grounded in and supported by research and data.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

Microsoft will accept applications for the role until July 29, 2025.