
Cisco Technical Lead Threat Intelligence 
United States, Georgia, Atlanta 
Job ID: 266413323


Applications are accepted until further notice.

Your Impact

As a Lead Researcher, you will play a pivotal role in investigating, analyzing, and mitigating emerging threats targeting AI/ML systems. You will work closely with the Director of AI Threat Intelligence to build a world-class AI threat research capability, delivering actionable intelligence, advancing the state of AI security research, and helping secure Cisco’s AI-driven products and services. You will collaborate with fellow researchers, data scientists, security engineers, and product teams to proactively identify AI-related risks, with a strong focus on securing autonomous AI agents.

Key Responsibilities
  • Lead research into AI/ML-specific threats, including adversarial attacks, prompt injection, model exploitation, data poisoning, model evasion, tool misuse, and abuse of generative AI systems.
  • Conduct threat modeling of AI agents and multi-agent systems, including the Model Context Protocol (MCP) and agent-to-agent (A2A) protocols, ensuring safe transmission and handling of context, memory, and system metadata across model calls and between AI agents.
  • Track evolving threat actor tactics, techniques, and procedures (TTPs) targeting AI/ML ecosystems, particularly those exploiting agentic behavior and context management flaws.
  • Produce and publish high-quality, actionable threat intelligence reports, technical analysis papers, and internal briefings to inform engineering, product, and executive teams.
  • Represent Cisco in external research communities, conferences, working groups, and standards organizations where AI security and threat intelligence are advancing.
  • Mentor junior researchers and contribute to building a center of excellence around AI threat intelligence and adversarial research.
Qualifications
  • 8+ years of experience in cybersecurity threat intelligence, adversarial research, red teaming, or offensive security, with exposure to AI/ML systems preferred.
  • Strong expertise in AI/ML technologies, particularly generative models, AI agent frameworks, memory-augmented AI, and Model Context Protocol (MCP) designs.
  • Hands-on experience analyzing vulnerabilities in AI systems, including prompt injections, agentic exploits, and context/memory transmission flaws.
  • Proficiency in Python or similar scripting languages for prototyping, simulation, and analysis.
  • Familiarity with threat frameworks such as MITRE ATLAS, MITRE ATT&CK, or emerging AI-specific threat models.
Preferred Qualifications
  • Record of contributions to AI security research communities (publications, CVEs, conference presentations, open-source projects, blogs).
  • Knowledge of responsible AI practices and the secure, safe deployment of autonomous and agentic AI systems.
  • Familiarity with securing agent-to-agent communication protocols and external tool orchestration in multi-agent environments.
  • Experience working with AI/ML pipelines in cloud environments (AWS, Azure, GCP).