Expoint - all jobs in one place

מציאת משרת הייטק בחברות הטובות ביותר מעולם לא הייתה קלה יותר

Limitless High-tech career opportunities - Expoint

Microsoft Principal Researcher AI Trust & Safety 
United States, Washington 
626375010

31.12.2024

Required/Minimum Qualifications

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
    • OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)
    • OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)
    • OR equivalent experience.


Other Requirements:Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings:

  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Experience in Trust and Safety area or policy area, especially working with human-generated or AI-generated harmful materials across multiple media types and creating objective recommendations, grounded and supported by research and data.
  • Experience in national security, specifically CBRN
  • Preferred multilingual proficiency, especially in languages used in Microsoft userbase in Europe, EMEA and APAC region
  • Prior experience in red teaming events such as GRT at Defcon AI Village or other LLM CTFs
  • While extensive coding experience is not necessary, candidate should be comfortable with basic/intermediate Python programming
  • 1+ years' experience in a field related to Responsible AI including but not limited to ethics, chemistry, biology, linguistics, sociology, psychology, medicine, socio-technical safety space, online safety, privacy

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:Microsoft will accept applications for the role until January 10, 2025.


Responsibilities
  • Discover and exploit GenAI vulnerabilities with respect to end-to-end capabilities in order to assess the safety of systems and lead communication of impact of vulnerabilities to partner stakeholders
  • Develop novel methodologies and techniques to scale and accelerate AI Red Teaming in collaboration with our research team, our tooling team, and leaders in the Microsoft AI Safety & Security ecosystem
  • Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems
  • Research new and emerging threats to inform the organization and craft solutions to operationalize testing within the AI Red Team function
  • Work alongside traditional offensive security engineers, adversarial ML experts, developers to land responsible AI operations -Model and coach other red teamers on functional strategies for effective delivery, communication, and prioritization in fast moving environments.
  • Embody our and