The application window is expected to close on June 30, 2025, at 5 PM ET.
The job posting may be removed earlier if the position is filled or if a sufficient number of applications are received.
Your Impact
Develop and execute sophisticated adversarial tactics, techniques, and procedures (TTPs) to emulate real-world threats targeting AI models and systems.
Drive technical investigations, analyzing AI safety risks and producing actionable insights to enhance system reliability and trustworthiness.
Design and implement frameworks for secure data pipelines, ensuring quality, scalability, and compliance with customer and regulatory requirements.
Minimum Qualifications
5+ years of experience in cybersecurity, red teaming, penetration testing, identifying security vulnerabilities in complex systems, or a similar background.
Hands-on experience with adversarial testing of AI/ML systems, or a deep interest in AI safety and adversarial machine learning.
Strong proficiency in programming languages such as Python, with the ability to develop tools for vulnerability assessment and automation.
Preferred Qualifications
Advanced degree in Computer Science, Artificial Intelligence, or a related discipline.
Experience in designing and securing AI/ML pipelines, with expertise in data taxonomy, labeling, and safety mechanisms.
Familiarity with adversarial machine learning research and hands-on experimentation with AI models.
Ability to develop and scale analytics frameworks for data-driven decision-making in AI safety and adversarial testing.
Knowledge of regulatory and compliance standards for AI and data security.
We offer extensive employee benefits, including unlimited PTO, 10 paid volunteering days, a paid birthday off, a 401(k) match with no vesting period, generous health/dental/vision coverage, and much more.