III. Node kind ledger

EvalRun records (page 1 of 2)
Browse all EvalRun records in the current atlas snapshot.
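As a rough sketch of how this ledger view could be reproduced outside the UI, the snippet below filters a snapshot down to one node kind. The file name, the `nodes` array, and the `kind`/`id`/`cluster` fields are assumptions inferred from this page, not a documented schema.

```python
import json

# Hypothetical sketch: assumes the atlas snapshot is a JSON document with a
# top-level "nodes" array whose entries carry "kind", "id", and "cluster"
# fields. None of these names are confirmed by the ledger itself.
with open("atlas-snapshot.json") as f:
    snapshot = json.load(f)

eval_runs = [n for n in snapshot["nodes"] if n.get("kind") == "EvalRun"]
for run in sorted(eval_runs, key=lambda n: n["id"]):
    print(run["id"], "|", run.get("cluster", ""))
```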
Filters & facets (7 groups). Each facet value is listed with its record count; a sketch of recomputing these counts follows the facet list.
configHash
- sha256:placeholder-gpt-5-evalplus · 2
- sha256:placeholder-qwen-2-5-72b-mmlu · 1
- sha256:placeholder-qwen-2-5-72b-humaneval · 1
- sha256:placeholder-qwen-2-5-coder-32b-humaneval · 1
- sha256:placeholder-qwen-2-5-coder-32b-lcb · 1
- sha256:placeholder-qwen-2-5-coder-32b-mbpp · 1
- sha256:placeholder-claude-haiku-4-5-swe-bench-verified · 1
- sha256:placeholder-claude-haiku-4-5-gpqa · 1
- sha256:placeholder-claude-sonnet-4-6-human-eval · 1
- sha256:placeholder-claude-sonnet-4-6-mmlu · 1
- sha256:placeholder-claude-sonnet-4-5-bfcl-v3 · 1
- sha256:placeholder-claude-opus-4-5-gpqa-diamond · 1
target
- model:gpt-5@current · 9
- model:claude-sonnet-4-5@current · 8
- model:gemini-2-5-pro@current · 6
- model:claude-opus-4-5@current · 5
- model:qwen-2-5-coder-32b@current · 3
- model:deepseek-v3@current · 3
- model:deepseek-r1@current · 3
- model:llama-4-405b-instruct@current · 3
- model:llama-3-1-405b-instruct@current · 3
- model:qwen-2-5-72b-instruct@current · 2
- model:claude-haiku-4-5@current · 2
- model:claude-sonnet-4-6@current · 2
targetId
- model:gpt-5@current · 9
- model:claude-sonnet-4-5@current · 8
- model:gemini-2-5-pro@current · 6
- model:claude-opus-4-5@current · 5
- model:qwen-2-5-coder-32b@current · 3
- model:deepseek-v3@current · 3
- model:deepseek-r1@current · 3
- model:llama-4-405b-instruct@current · 3
- model:llama-3-1-405b-instruct@current · 3
- model:qwen-2-5-72b-instruct@current · 2
- model:claude-haiku-4-5@current · 2
- model:claude-sonnet-4-6@current · 2
runAt
benchmarkId
runBy
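The facet counts above (e.g. `model:gpt-5@current · 9`) are plain value frequencies over the record set. A minimal sketch, assuming each record is a dict keyed by the facet name:

```python
from collections import Counter

def facet_counts(records, field):
    """Count how often each value of `field` occurs, most common first."""
    return Counter(r[field] for r in records if field in r).most_common()

# e.g. facet_counts(eval_runs, "targetId") would yield pairs such as
# ("model:gpt-5@current", 9), matching the rendered "value · count" rows.
```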
| id | displayName | cluster |
|---|---|---|
| eval-run:android-world.gemini-2-5-pro.2025-06 | eval-run:android-world.gemini-2-5-pro.2025-06 | benchmarks |
| eval-run:arc-challenge.claude-sonnet-4-5.2025-09 | eval-run:arc-challenge.claude-sonnet-4-5.2025-09 | benchmarks |
| eval-run:bfcl.claude-sonnet-4-5.2025-09 | eval-run:bfcl.claude-sonnet-4-5.2025-09 | benchmarks |
| eval-run:bfcl.gpt-5.2025-08 | eval-run:bfcl.gpt-5.2025-08 | benchmarks |
| eval-run:evalplus.gpt-5.2025-08 | eval-run:evalplus.gpt-5.2025-08 | benchmarks |
| eval-run:gaia.claude-code.2025 | eval-run:gaia.claude-code.2025 | benchmarks |
| eval-run:gpqa-diamond.claude-opus-4-5.2025-09 | eval-run:gpqa-diamond.claude-opus-4-5.2025-09 | benchmarks |
| eval-run:gpqa-diamond.gemini-2-5-pro.2025-06 | eval-run:gpqa-diamond.gemini-2-5-pro.2025-06 | benchmarks |
| eval-run:gpqa-diamond.gemini-3-1-pro.2026-02-19 | eval-run:gpqa-diamond.gemini-3-1-pro.2026-02-19 | benchmarks |
| eval-run:gpqa-diamond.gemini-3-pro.2025-11-18 | eval-run:gpqa-diamond.gemini-3-pro.2025-11-18 | benchmarks |
| eval-run:gpqa-diamond.gpt-5-4-mini.2026-03-17 | eval-run:gpqa-diamond.gpt-5-4-mini.2026-03-17 | benchmarks |
| eval-run:gpqa-diamond.gpt-5-4.2026-03-17 | eval-run:gpqa-diamond.gpt-5-4.2026-03-17 | benchmarks |
| eval-run:gpqa-diamond.gpt-5.2025-08 | eval-run:gpqa-diamond.gpt-5.2025-08 | benchmarks |
| eval-run:gpqa.claude-haiku-4-5.2025-10 | eval-run:gpqa.claude-haiku-4-5.2025-10 | benchmarks |
| eval-run:gpqa.claude-sonnet-4-5.2025-09 | eval-run:gpqa.claude-sonnet-4-5.2025-09 | benchmarks |
| eval-run:gpqa.deepseek-r1.2025-01 | eval-run:gpqa.deepseek-r1.2025-01 | benchmarks |
| eval-run:gpqa.gemini-2-5-pro.2025-06 | eval-run:gpqa.gemini-2-5-pro.2025-06 | benchmarks |
| eval-run:gpqa.gpt-5.2025-08 | eval-run:gpqa.gpt-5.2025-08 | benchmarks |
| eval-run:gsm8k.claude-sonnet-4-5.2025-09 | eval-run:gsm8k.claude-sonnet-4-5.2025-09 | benchmarks |
| eval-run:gsm8k.gemma-2-27b.2024-06 | eval-run:gsm8k.gemma-2-27b.2024-06 | benchmarks |
| eval-run:harmbench.claude-opus-4-5.2025-09 | eval-run:harmbench.claude-opus-4-5.2025-09 | benchmarks |
| eval-run:hellaswag.claude-opus-4-5.2025-09 | eval-run:hellaswag.claude-opus-4-5.2025-09 | benchmarks |
| eval-run:human-eval-plus.claude-sonnet-4-5.2025-09 | eval-run:human-eval-plus.claude-sonnet-4-5.2025-09 | benchmarks |
| eval-run:human-eval-plus.gpt-5.2025-08 | eval-run:human-eval-plus.gpt-5.2025-08 | benchmarks |
| eval-run:human-eval.claude-sonnet-4-6.2025-11 | eval-run:human-eval.claude-sonnet-4-6.2025-11 | benchmarks |
| eval-run:human-eval.codestral-25-01.2025-01 | eval-run:human-eval.codestral-25-01.2025-01 | benchmarks |
| eval-run:human-eval.deepseek-v3.2024-12 | eval-run:human-eval.deepseek-v3.2024-12 | benchmarks |
| eval-run:human-eval.gpt-5.2025-08 | eval-run:human-eval.gpt-5.2025-08 | benchmarks |
| eval-run:human-eval.llama-3-1-405b.2024-07 | eval-run:human-eval.llama-3-1-405b.2024-07 | benchmarks |
| eval-run:human-eval.llama-3-3-70b.2024-12 | eval-run:human-eval.llama-3-3-70b.2024-12 | benchmarks |
| eval-run:human-eval.llama-4-405b.2024-07 | eval-run:human-eval.llama-4-405b.2024-07 | benchmarks |
| eval-run:human-eval.mistral-large-2.2024-07 | eval-run:human-eval.mistral-large-2.2024-07 | benchmarks |
| eval-run:human-eval.qwen-2-5-72b.2024-09 | eval-run:human-eval.qwen-2-5-72b.2024-09 | benchmarks |
| eval-run:human-eval.qwen-2-5-coder-32b.2024-11 | eval-run:human-eval.qwen-2-5-coder-32b.2024-11 | benchmarks |
| eval-run:livecodebench.gemini-2-5-pro.2025-06 | eval-run:livecodebench.gemini-2-5-pro.2025-06 | benchmarks |
| eval-run:livecodebench.gpt-5.2025-08 | eval-run:livecodebench.gpt-5.2025-08 | benchmarks |
| eval-run:livecodebench.qwen-2-5-coder-32b.2024-11 | eval-run:livecodebench.qwen-2-5-coder-32b.2024-11 | benchmarks |
| eval-run:math.deepseek-r1.2025-01 | eval-run:math.deepseek-r1.2025-01 | benchmarks |
| eval-run:math.gpt-5.2025-08 | eval-run:math.gpt-5.2025-08 | benchmarks |
| eval-run:math.o3.2025-04 | eval-run:math.o3.2025-04 | benchmarks |
| eval-run:mbpp.qwen-2-5-coder-32b.2024-11 | eval-run:mbpp.qwen-2-5-coder-32b.2024-11 | benchmarks |
| eval-run:mgsm.gemini-2-5-pro.2025-06 | eval-run:mgsm.gemini-2-5-pro.2025-06 | benchmarks |
| eval-run:mmlu.claude-sonnet-4-6.2025-11 | eval-run:mmlu.claude-sonnet-4-6.2025-11 | benchmarks |
| eval-run:mmlu.command-r-plus.2024-08 | eval-run:mmlu.command-r-plus.2024-08 | benchmarks |
| eval-run:mmlu.deepseek-r1.2025-01 | eval-run:mmlu.deepseek-r1.2025-01 | benchmarks |
| eval-run:mmlu.deepseek-v3.2024-12 | eval-run:mmlu.deepseek-v3.2024-12 | benchmarks |
| eval-run:mmlu.gemma-2-27b.2024-06 | eval-run:mmlu.gemma-2-27b.2024-06 | benchmarks |
| eval-run:mmlu.llama-3-1-405b.2024-07 | eval-run:mmlu.llama-3-1-405b.2024-07 | benchmarks |
| eval-run:mmlu.llama-3-3-70b.2024-12 | eval-run:mmlu.llama-3-3-70b.2024-12 | benchmarks |
| eval-run:mmlu.llama-4-405b.2024-07 | eval-run:mmlu.llama-4-405b.2024-07 | benchmarks |
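The ids in the table appear to follow an `eval-run:<benchmark>.<model>.<date>` convention, with the date at year, year-month, or full-day precision. This pattern is inferred from the listing rather than from a documented schema; a sketch of splitting an id back into its parts:

```python
import re

# Inferred, not documented: eval-run:<benchmark>.<model>.<date>, where the
# benchmark segment contains no dots and the date is YYYY, YYYY-MM, or
# YYYY-MM-DD (compare eval-run:gaia.claude-code.2025 in the table).
ID_RE = re.compile(
    r"^eval-run:(?P<benchmark>[^.]+)\.(?P<model>.+)\.(?P<date>\d{4}(?:-\d{2}){0,2})$"
)

m = ID_RE.match("eval-run:gpqa-diamond.claude-opus-4-5.2025-09")
assert m is not None and m.group("benchmark") == "gpqa-diamond"
print(m.group("model"), m.group("date"))  # claude-opus-4-5 2025-09
```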