Finding the best job has never been easier
Share
Key job responsibilities
* Perform expert cybersecurity red-teaming on complex proprietary foundation models testing, threat model and pentest the services built by AGI.
* Manually generate the novel prompts, jailbreaks to bypass the existing model guardrails authored in house by AGI and AWS Bedrock.
* Write proof of concept code to demonstrate the severity of a potential security issue
.
* Provide clear communication on risks to ML builders/scientists that suggest and mitigate the risk.
* Partner with ML builders/scientists to drive improvement in FM models security as a result of security review engagements
.
* Provide actionable long-term risk mitigation guidance to internal and external stakeholder
.
* Conduct independent vulnerability research pertaining to GenAI technologies.A day in the life
- Bachelor’s degree in Computer Science, Engineering, or a related field; Master’s or Ph.D. preferred
- Minimum 2 years of experience in AI security, adversarial machine learning, or related fields
- Minimum of 5 years of experience in security testing (Penetration testing, Vulnerability testing, Red teaming, bug hunting, CTF experience, or related field)
- Minimum of 5 years of experience with manually auditing source code (One or more of: Java, Ruby, Python, JavaScript, Rust, C, others) to find security issues
- Minimum of 5 years of experience scripting in Python or other equivalent interpreted languages
- Solid understanding of machine learning techniques, deep learning architectures, and generative models (e.g., GANs, VAEs)
- Familiarity with security frameworks, tools, and techniques for protecting AI systems
- Knowledge of data privacy regulations (e.g., GDPR, CCPA) and their implications on AI systems is a plus
- Experience with AWS AI technologies and services (e.g. SageMaker, Code Whisperer, Bedrock, etc)
- CCSP (Certified Cloud Security Professional) or CEH (Certified Ethical Hacker) or CFR (CyberSec First Responder) or Cloud+ or CySA+ (CompTIA Cybersecurity Analyst) or GCED (GIAC Certified Enterprise Defender) or GICSP (Global Industrial Cyber Security Professional) or PenTest+
- Experience with the architecture of GenAI models, platforms, and applications
- Knowledge of common AI/ML attack techniques such as prompt injection and ability to automate testing for these vulnerabilities
- Ability to identify vulnerabilities and threats specific to GenAI and other AI/ML systems
- Background in adversarial machine learning and emerging attacks like data poisoning, model extraction, membership inference, etc
- Experience with languages commonly used in AI/ML like Python, R, Java, C++
- Meets/exceeds Amazon’s leadership principles for this role
- Meets/exceeds Amazon’s functional/technical depth and complexity expectations for this role
- Excellent communication skills to collaborate with cross-functional teams and present complex security concepts to non-technical stakeholder
These jobs might be a good fit