Agentic AI Atlas · AdvBench
Benchmark JSON

benchmark:advbench


Inspect the normalized record payload exactly as the atlas UI reads it.

File · benchmarks/benchmarks/benchmarks-safety.yaml
Cluster · benchmarks
Record JSON
{
  "id": "benchmark:advbench",
  "_kind": "Benchmark",
  "_file": "benchmarks/benchmarks/benchmarks-safety.yaml",
  "_cluster": "benchmarks",
  "attributes": {
    "displayName": "AdvBench",
    "homepageUrl": "https://github.com/llm-attacks/llm-attacks",
    "kind": "model-only",
    "targetsKind": "ModelVersion",
    "description": "AdvBench (Zou et al., \"Universal and Transferable Adversarial\nAttacks on Aligned Language Models\", 2023) is a 520-string harmful-\nbehavior corpus paired with a standard suffix-attack protocol,\nwidely used to measure jailbreak robustness.\n"
  },
  "outgoingEdges": [
    {
      "from": "benchmark:advbench",
      "to": "skill-area:safety-redteaming",
      "kind": "covers",
      "attributes": {}
    },
    {
      "from": "benchmark:advbench",
      "to": "domain:security",
      "kind": "applies_to",
      "attributes": {}
    }
  ],
  "incomingEdges": []
}
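As a sketch of how a downstream client might consume this payload (the atlas's own client API is not shown on this page, so plain `json` parsing stands in for it; all field names come from the record above):

```python
import json

# Record payload copied verbatim from the "Record JSON" block above.
# A raw string keeps the JSON's "\n" and "\"" escapes intact for the parser.
RECORD_JSON = r"""
{
  "id": "benchmark:advbench",
  "_kind": "Benchmark",
  "_file": "benchmarks/benchmarks/benchmarks-safety.yaml",
  "_cluster": "benchmarks",
  "attributes": {
    "displayName": "AdvBench",
    "homepageUrl": "https://github.com/llm-attacks/llm-attacks",
    "kind": "model-only",
    "targetsKind": "ModelVersion",
    "description": "AdvBench (Zou et al., \"Universal and Transferable Adversarial\nAttacks on Aligned Language Models\", 2023) is a 520-string harmful-\nbehavior corpus paired with a standard suffix-attack protocol,\nwidely used to measure jailbreak robustness.\n"
  },
  "outgoingEdges": [
    {"from": "benchmark:advbench", "to": "skill-area:safety-redteaming", "kind": "covers", "attributes": {}},
    {"from": "benchmark:advbench", "to": "domain:security", "kind": "applies_to", "attributes": {}}
  ],
  "incomingEdges": []
}
"""

record = json.loads(RECORD_JSON)

# Identity and provenance: stable id plus the YAML file and cluster it came from.
print(record["id"], record["_file"])

# Display metadata the UI renders for this benchmark.
attrs = record["attributes"]
print(attrs["displayName"], attrs["kind"], attrs["targetsKind"])

# Graph links: (edge kind, target id) pairs into the rest of the atlas.
edges = [(e["kind"], e["to"]) for e in record["outgoingEdges"]]
print(edges)  # [('covers', 'skill-area:safety-redteaming'), ('applies_to', 'domain:security')]
```

Note that the `_`-prefixed fields (`_kind`, `_file`, `_cluster`) carry provenance rather than benchmark content, so a renderer can treat `attributes` and the edge lists as the substantive payload.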