- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related technical field
- 10+ years of industry experience in applied AI/ML, data science, or software engineering roles
- Strong hands-on programming skills
- Proven experience designing or evaluating AI/ML systems and/or benchmarking pipelines
- At least 2 years of experience in the security domain (e.g., threat detection, anomaly detection, SOC environments)
- Familiarity with MLOps practices, including taking models from experimentation to production
- Demonstrated ability to collaborate across research, product, and engineering teams
- Experience with Generative AI, agentic systems, or LLM-based tools
- Strong understanding of data quality, validation, and governance practices
- Background in both AI and Security contexts, particularly where the two intersect
- Growth mindset, strong sense of ownership, and ability to mentor junior team members
Preferred:
- Experience working with or building evaluation datasets, test harnesses, or performance metrics for AI systems
- Familiarity with modern Generative AI benchmarks
- Familiarity with AI security vulnerabilities
- Academic publishing or prior contribution to research communities
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check:
This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
- Understanding of secure execution environments for safe workload handling, including potential exposure to malware analysis