ClawBench  ·  the open benchmark for AI browser agents on real, live websites

Leaderboard

Per-model pass rates on V2 (130 newer everyday tasks across 63 platforms) and V1 (153 tasks across 144 platforms). Two-stage scoring: HTTP-request interception + LLM judge on the intercepted payload. Scoring details ↗

Resources: Paper arXiv:2604.08523 · GitHub reacher-z/ClawBench · Dataset TIGER-Lab/ClawBench · Space TIGER-Lab/ClawBench · Collection · Traces V1 + V2

V2 Snapshot — 6 models

| Rank | Model | Harness | Intercepted | Reward | Pass / Total |
|------|-------|---------|-------------|--------|--------------|
| 1 | claude-opus-4-7 ·partial | hermes | 54.7% | 13.3% | 10 / 75 |
| 2 | glm-5.1 | hermes | 48.5% | 18.5% | 24 / 130 |
| 3 | gpt-5.5 ·partial | hermes | 48.1% | 11.1% | 9 / 81 |
| 4 | deepseek-v4-pro | hermes | 43.8% | 10.0% | 13 / 130 |
| 5 | openrouter-owl-alpha | hermes | 14.6% | 4.6% | 6 / 130 |
| 6 | deepseek-v4-flash | hermes | 3.1% | 1.5% | 2 / 130 |

Intercepted (the headline sort key) is the fraction of tasks whose final HTTP request matched the per-task URL/method schema: Stage 1, deterministic, no judge. Reward additionally requires an LLM judge (default deepseek/deepseek-v4-pro) to confirm the payload fulfilled the instruction: Stage 2, payload-correct. Rows are ranked by Intercepted, with Reward as the tiebreak.

Rows marked ·partial attempted fewer than the full 130 V2 tasks, so their displayed % is over tasks attempted. Read them with care: once unattempted tasks are counted as failures, a partial 54.7% Intercepted (41/75 attempted, i.e. 31.5% of all 130) falls below a complete 48.5% (63/130).

Snapshot generated 2026-05-12. Scoring details: eval/scoring.md ↗. Fresh runs + V1 results: interactive HF Space ↗. New here? About ClawBench — how it works ↗.
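To make the ·partial caveat concrete, here is a minimal sketch in Python contrasting the displayed rate over attempted tasks with the strict rate over all 130. The Run container and its field names are illustrative, not the clawbench-eval schema; the counts are back-solved from the table above.

```python
# Sketch: displayed vs. strict Intercepted rates for the V2 snapshot.
# Field names are illustrative, not the official clawbench-eval schema.
from dataclasses import dataclass

V2_TOTAL = 130  # full V2 task count

@dataclass
class Run:
    model: str
    attempted: int    # tasks the model actually ran
    intercepted: int  # Stage 1 passes: final request matched the schema
    rewarded: int     # Stage 2 passes: judge confirmed the payload

runs = [
    Run("claude-opus-4-7", attempted=75, intercepted=41, rewarded=10),  # ·partial
    Run("glm-5.1", attempted=130, intercepted=63, rewarded=24),
]

for r in runs:
    displayed = r.intercepted / r.attempted  # what the snapshot shows
    strict = r.intercepted / V2_TOTAL        # unattempted tasks count as failures
    print(f"{r.model:<16} displayed {displayed:.1%}  strict {strict:.1%}")
# claude-opus-4-7  displayed 54.7%  strict 31.5%
# glm-5.1          displayed 48.5%  strict 48.5%
```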

Browser-agent execution traces · curated · open for download · Apache-2.0

Refreshed weekly · last update 2026-05-12

1,724 judge-verified runs · 13 frontier models · 283 distinct everyday tasks · 163 live platforms covered

Models covered: claude-opus-4-7, claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5, gpt-5.5, gpt-5.4, gpt-5.4-mini, deepseek-v4-pro, deepseek-v4-flash, glm-5.1, kimi-k2.5, owl-alpha, poolside-laguna

What will you do with them?

Train your own agent

Fine-tune on the 918 V1 + 806 V2 frontier-model trajectories without spending $1k+ on API tokens. JSONL-native; SFT/DPO/PRM-ready. Mine success-vs-failure pairs across all 13 models on identical tasks, as sketched below the download links.

Get V1 (918 runs) · Get V2 (806 runs)
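A rough sketch of that pair mining, assuming one extracted directory per run and that run-meta.json carries task_id, model, and a rewarded verdict; those field names are guesses at the bundle schema, not documented names.

```python
# Sketch: build success-vs-failure preference pairs from downloaded runs.
# Assumes runs are extracted under clawbench_v2/, one directory each, and
# that run-meta.json has task_id / rewarded fields (hypothetical names).
import itertools
import json
from collections import defaultdict
from pathlib import Path

by_task = defaultdict(list)
for meta_path in Path("clawbench_v2").glob("*/run-meta.json"):
    meta = json.loads(meta_path.read_text())
    meta["run_dir"] = meta_path.parent  # keep a handle to the full trace
    by_task[meta["task_id"]].append(meta)

pairs = []
for task_id, runs in by_task.items():
    winners = [r for r in runs if r["rewarded"]]
    losers = [r for r in runs if not r["rewarded"]]
    # Pair every judge-verified success with every failure on the same task.
    for won, lost in itertools.product(winners, losers):
        pairs.append({"task": task_id,
                      "chosen": won["run_dir"],
                      "rejected": lost["run_dir"]})

print(f"{len(pairs)} preference pairs across {len(by_task)} tasks")
```

Each pair points at two full run directories, so the chosen and rejected trajectories can be pulled from agent-messages.jsonl for DPO.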

Replay & audit

Step through a run with video, HAR, and agent reasoning side by side. Diagnose failure modes, audit judge calls, and diff a model's pixels against its words. Every step is frame-accurate.

Open trace browser · Browse on HF Hub

Reproduce the leaderboard

Re-run any cell with our judge on your own data — or our data on your judge. The CLI consumes the same bundles you'll download. Held-out, post-cutoff tasks; no contamination.

Scoring rubric · pip install clawbench-eval
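As a hedged illustration of Stage 1 (the deterministic interception check, before any judge is involved), here is a sketch assuming a per-task schema shaped as an HTTP method plus a URL regex; the authoritative format lives in eval/scoring.md.

```python
# Sketch of the Stage 1 gate: does the agent's final HTTP request match
# the per-task schema? The schema shape below (method + URL regex) is an
# assumption; eval/scoring.md defines the real format.
import re

def stage1_intercepted(request: dict, schema: dict) -> bool:
    """Deterministic and judge-free: method and URL pattern must both match."""
    return (request["method"] == schema["method"]
            and re.search(schema["url_pattern"], request["url"]) is not None)

schema = {"method": "POST", "url_pattern": r"ubereats\.com/.*checkout"}
final_request = {"method": "POST",
                 "url": "https://www.ubereats.com/api/checkout",
                 "body": '{"items": [...]}'}
print(stage1_intercepted(final_request, schema))  # True -> payload goes to Stage 2
```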

Sample the corpus before you download

Browse the 283 task definitions these traces capture — searchable, filterable, no download. Each row is a prompt that one of the 13 frontier models attempted.

Powered by the Hugging Face Datasets Viewer · Open full dataset
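To sample the corpus programmatically instead, the repo loads with the datasets library. The repo id is from this page; the split name below is an assumption about the layout, so check the viewer for the real splits and columns.

```python
# Load the ClawBench task definitions from the Hugging Face Hub.
# "train" is an assumed split name; inspect the Hub viewer for the real one.
from datasets import load_dataset

tasks = load_dataset("TIGER-Lab/ClawBench", split="train")
print(tasks)     # features and row count (283 task definitions claimed above)
print(tasks[0])  # one task prompt as a dict
```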

A real turn from this corpus

Excerpted from agent-messages.jsonl of one V2 run (z-ai/glm-5.1 · task 001 · Uber Eats / Pad Thai). Every trace bundle contains hundreds of these turns, time-aligned with the recording, actions, and HTTP requests.

user On Uber Eats, order delivery: one Pad Thai, deliver to home address, note "no peanuts"
glm-5.1 I'll help you order Pad Thai on Uber Eats. Let me first read your personal info to get your delivery address.
tool_use read_file · shared/alex_green_personal_info.json
browser open_url · https://ubereats.com
↓ ~80 more turns until the agent's checkout request was intercepted and graded
Inside every trace: 6 time-synchronized signals per run, captured by a multi-track recorder from 0 s to task end (interception).

recording.mp4 · 30 fps continuous video
actions.jsonl · ~80 events
agent-messages.jsonl · ~150 LLM turns
requests.jsonl · ~500 HTTP calls
interception.json · graded verdict
run-meta.json · run metadata

Every signal is timestamped against the same clock — click frame 1872 of recording.mp4 and you can find the exact actions.jsonl event, the LLM turn that triggered it, and the HTTP requests it fired. Cross-org mirrors: NAIL-Group · TIGER-Lab · Apache-2.0 · Bundle format: tar.gz per run, jsonl within.
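A sketch of that frame-to-signal lookup over one bundle. The tar.gz-per-run layout and file names come from this page, while the member paths, the timestamp field "t" (seconds on the shared clock), and time-sorted events are assumptions.

```python
# Sketch: hop from a video frame to the signals behind it.
# Member paths, the "t" timestamp field, and time-sorted events are
# assumptions about the bundle internals, not documented fields.
import bisect
import json
import tarfile

FPS = 30  # recording.mp4 frame rate stated above

def load_jsonl(tar: tarfile.TarFile, name: str) -> list[dict]:
    return [json.loads(line) for line in tar.extractfile(name) if line.strip()]

def at_frame(events: list[dict], frame: int) -> dict:
    """Last event at or before the frame's timestamp on the shared clock."""
    timestamps = [e["t"] for e in events]
    i = bisect.bisect_right(timestamps, frame / FPS)
    return events[max(i - 1, 0)]

with tarfile.open("run-0001.tar.gz") as tar:  # hypothetical bundle name
    actions = load_jsonl(tar, "actions.jsonl")
    requests = load_jsonl(tar, "requests.jsonl")

print(at_frame(actions, 1872))   # the action behind frame 1872
print(at_frame(requests, 1872))  # the HTTP request it fired
```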

Cite this benchmark

Using ClawBench in your research? Please cite the arXiv paper:

@misc{zhang2026clawbench,
  title         = {ClawBench: Can AI Agents Complete Everyday Online Tasks?},
  author        = {Yuxuan Zhang and Yubo Wang and Yipeng Zhu and Penghui Du and Junwen Miao and Xuan Lu and Wendong Xu and Yunzhuo Hao and Songcheng Cai and Xiaochen Wang and Huaisong Zhang and Xian Wu and Yi Lu and Minyi Lei and Kai Zou and Huifeng Yin and Ping Nie and Liang Chen and Dongfu Jiang and Wenhu Chen and Kelsey R. Allen},
  year          = {2026},
  eprint        = {2604.08523},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI},
  url           = {https://arxiv.org/abs/2604.08523}
}
View on arXiv · Discuss on HF Papers · CITATION.cff · JSON API: /api/leaderboard.json · Contact