The AI Red Team is looking for security researchers who can combine the development of cutting-edge attack techniques with the ability to deliver complex, time-limited operations as part of a diverse team. This includes balancing several priorities at once, managing stakeholders, and communicating clearly with a range of audiences.
- Understand the products & services that the AI Red Team is testing, including the technology involved and the intended users, in order to develop plans to test them.
- Understand the risk landscape of AI Safety & Security, including cybersecurity threats, Responsible AI policies, and evolving regulation, in order to develop new attack methodologies for these areas.
- Conduct operations against systems as part of a multi-disciplinary team, delivering against multiple priority areas within a set timeline.
- Communicate clearly and concisely with stakeholders before, during, and after operations so that everyone understands the objectives, progress, and outcomes of your work.
- Coordinate with your team members during operations to ensure that all areas of focus are covered and that stakeholders are kept informed of the status of your work.
- Support and collaborate with all elements of the AI Red Team and our partners, including actively contributing to tool development and long-term research efforts.