Minimum qualifications:
Bachelor's degree or equivalent practical experience.
7 years of experience in trust and safety, risk mitigation, cybersecurity, or related fields.
7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.
Experience with the Google infrastructure/tech stack and tooling, including Application Programming Interfaces (APIs) and web services, Colab deployment, SQL and data handling, and Machine Learning Operations (MLOps) or other AI infrastructure.
Preferred qualifications:
Master's or PhD in a relevant quantitative or engineering field.
Experience in an individual contributor role within a technology company, focused on product safety or risk management.
Experience working closely with both technical and non-technical teams on dynamic solutions or automation to improve user safety.
Understanding of AI systems and architectures, including their specific vulnerabilities, machine learning, and AI responsibility principles.
Ability to influence cross-functionally at various levels, and to articulate technical concepts effectively to both technical and non-technical stakeholders.
Excellent written and verbal communication and presentation skills.