II. SkillArea overview
Reference · skill-area:red-teaming-AI
AI Red Teaming overview
Adversarial testing of AI systems — jailbreak techniques, prompt injection attacks, automated red-team harnesses, and building systematic vulnerability taxonomies for language models.
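The description names automated red-team harnesses as one core technique of this skill area. As an illustrative aside only, the sketch below shows one minimal harness shape: send categorized attack prompts to a model and flag responses with a simple heuristic judge, keeping results grouped so they can feed a vulnerability taxonomy. The `query_model` callable, the `Finding` record, and the marker-matching judge are assumptions made for this example, not part of this reference entry.

```python
# Minimal red-team harness sketch. Assumes the caller supplies a
# `query_model(prompt) -> str` function and a dict of attack prompts
# keyed by vulnerability category; all names are illustrative.
from dataclasses import dataclass


@dataclass
class Finding:
    category: str   # e.g. "jailbreak", "prompt-injection"
    prompt: str     # adversarial input sent to the model
    response: str   # model output
    flagged: bool   # True if the heuristic judge flagged the response


def contains_violation(response: str, banned_markers: list[str]) -> bool:
    """Toy judge: flag a response that contains any banned marker string."""
    lowered = response.lower()
    return any(marker in lowered for marker in banned_markers)


def run_harness(query_model, attacks: dict[str, list[str]],
                banned_markers: list[str]) -> list[Finding]:
    """Send every attack prompt to the model and record per-category findings."""
    findings: list[Finding] = []
    for category, prompts in attacks.items():
        for prompt in prompts:
            response = query_model(prompt)
            findings.append(Finding(
                category=category,
                prompt=prompt,
                response=response,
                flagged=contains_violation(response, banned_markers),
            ))
    return findings
```

A real harness would replace the marker heuristic with a stronger judge (for example, a classifier or a second model acting as grader) and aggregate the per-category findings into the systematic vulnerability taxonomy the description refers to.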
Attributes
displayName
AI Red Teaming
description
Adversarial testing of AI systems — jailbreak techniques, prompt injection attacks, automated red-team harnesses, and building systematic vulnerability taxonomies for language models.
expertiseLevels
- intermediate
- expert
Outgoing edges
applies_to (2)
- domain:ml-ai · Domain · ML/AI
- domain:security · Domain · Security
prerequisite_for_learning (1)
- skill-area:AI-safety-alignment · SkillArea · AI Safety & Alignment
Incoming edges
None.