The Security Models Training team builds and operates the large-scale AI training and adaptation engines that power Microsoft Security products, turning cutting-edge research into dependable, production-ready capabilities. In this role, you will lead end-to-end model development for security scenarios, including privacy-aware data curation, continual pretraining, task-focused fine-tuning, reinforcement learning, and rigorous evaluation. You will drive training efficiency on distributed GPU systems, deepen model reasoning and tool-use skills, and embed responsible AI and compliance into every stage of the workflow. The role is hands-on and impact-focused: you will partner closely with engineering and product to translate innovations into shipped experiences, design objective benchmarks and quality gates, and mentor scientists and engineers to scale results across globally distributed teams. You will combine strong coding and experimentation with a systems mindset to accelerate iteration cycles, improve throughput and reliability, and help shape the next generation of secure, trustworthy AI for our customers.
Required Qualifications:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check:
- This position will be required to pass the Microsoft background check and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
Microsoft will accept applications for the role until October 21, 2025.