
Microsoft AI Safety Researcher 
United States, Washington 
392849766


Required Qualifications:

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 2+ years related experience (e.g., statistics, predictive analytics, research)
    • OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research)
    • OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field
    • OR equivalent experience.
  • 1+ year(s) of experience in adversarial machine learning, AI safety, or related fields

Other Requirements

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Demonstrated publications or presentations at conferences and workshops focused on AI and security, such as NeurIPS, ICML, ICLR, SaTML, CAMLIS, USENIX, AI Village, Black Hat, or BSides
  • Background in designing and implementing security mitigations and protections, and/or publications in this space
  • Ability to work collaboratively in an interdisciplinary team environment
  • Participation in prior CTF/GRT/AI red-teaming events and/or bug bounties
  • Developing or contributing to open-source (OSS) projects


Certain roles may be eligible for benefits and other compensation.

Microsoft will accept applications for the role until January 27, 2025.

Responsibilities
  • Conducting research to identify vulnerabilities and potential failures in AI systems.
  • Designing and implementing mitigations, detections, and protections to enhance the security and reliability of AI systems.
  • Collaborating with security experts and other interdisciplinary team members to develop innovative solutions.
  • Contributing to our open-source portfolio, including projects like Counterfit and PyRIT.
  • Engaging with the community to share research findings and best practices.