The application window is expected to close on August 28, 2025.
NOTE: The posting may be removed earlier if the position is filled or if a sufficient number of applications are received.
Your Impact
As a senior member of the Security Assurance team, you’ll be a trusted security resource driving the secure design, deployment, and operation of customer-facing applications, systems, and AI-enabled services. This role requires deep technical expertise in AI/ML, cloud security, and secure software development, alongside practical knowledge of enterprise compliance, threat modeling, and the constantly evolving AI threat landscape.
Serve as the technical authority for AI/ML security assurance, collaborating with engineering, product, and architecture teams to ensure secure, compliant delivery of systems and services.
Define, implement, and maintain security standards for GenAI, ML pipelines, data usage, and AI applications across hybrid and cloud environments.
Conduct threat modeling, adversarial risk assessments, and AI-specific threat surface analysis on LLMs, data pipelines, and orchestration frameworks.
Drive secure-by-design practices throughout the AI development lifecycle, embedding security gates and validations.
Lead AI-specific security testing, including red teaming, prompt injection defenses, RAG hardening, model sandboxing, and privacy-preserving techniques.
Minimum Requirements
8+ years of hands-on experience in cybersecurity and GenAI security, including demonstrated work on prompt injection mitigation, adversarial robustness, and AI governance controls.
At least 5 years of experience working with AI ecosystem stacks (such as TensorFlow, PyTorch, or LangChain), including securing AI/ML training pipelines and deployment workflows in production environments.
Proficiency in Python, including at least 3 years using Jupyter, and proven experience securing REST APIs and microservices in enterprise-scale applications.
Expertise in cloud security architectures across AWS, Azure, and GCP, and in AI PaaS offerings such as AWS Bedrock or Google Vertex AI.
Minimum 3 years of experience conducting automated threat modeling and red teaming activities specifically for AI/ML systems.
Preferred
Familiarity with compliance frameworks (e.g., NIST, GDPR) and their implications for AI data handling.
Knowledge of AI supply chain risks, synthetic data verification, and secure model provenance.
Background in AI-specific security middleware, firewalls, or policy layers for model-based systems.
Published work, patents, or thought leadership in AI security or trustworthy AI.
Participation in AI/ML security working groups or standards bodies (e.g., OWASP AI, NIST, IEEE).
Experience in regulated industries (e.g., healthcare, finance) with mature software assurance programs.