Required Qualifications:
- Bachelor’s degree in Communications, Media, Journalism, Behavioral Psychology, Criminal Justice, Linguistics, Education, Business, Computer Science, or a related field AND 2+ years of relevant experience (e.g., digital safety, content review, the use and creation of content moderation tools, data annotation).
- OR equivalent experience.
- Business-level fluency to read, write, and speak English.
Preferred Experience:
- Experience working with human-generated or AI-generated harmful materials across multiple media types and creating objective recommendations grounded in and supported by research and data.
- Real-world experience in Generative AI, Responsible AI, Digital Safety, Security, Compliance, or Risk Management.
- Motivated to work with international, geographically distributed teams.
- Experience in proactively identifying issues, investigating, developing a solution, implementing change, and monitoring results.
- Experience problem-solving and critical thinking in a professional capacity.
- Experience in content policy, child safety issues, and reviewing online content.
- Experience influencing others whose support is critical to success.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
Microsoft will accept applications for the role until December 16, 2024.