The creators of Terminal-Bench, a benchmark suite for evaluating the performance of autonomous AI agents on real-world terminal-based tasks, have released version 2.0 alongside Harbor, a new framework for testing, improving, and optimizing AI agents in containerized environments.
The dual release aims to address long-standing pain points in testing and optimizing AI agents, particularly those built to operate autonomously in realistic developer environments.
With a harder and rigorously verified task set, Terminal-Bench 2.0 replaces version 1.0 as the standard for assessing frontier model capabilities.
Harbor, the accompanying runtime framework, lets developers and researchers scale evaluations across thousands of cloud containers and integrates with both open-source and proprietary agents and training pipelines.
“Harbor is the package we wish we had had while making Terminal-Bench,” wrote co-creator Alex Shaw on X. “It’s for agent, model, and benchmark developers and researchers who want to evaluate and improve agents and models.”
Higher Bar, Cleaner Data
Terminal-Bench 1.0 saw rapid adoption after its launch in May 2025, becoming a default benchmark for evaluating agent performance across the field of AI-powered agents operating in developer-style terminal environments. These agents interact with systems through the command line, mimicking how developers work behind the scenes of the graphical user interface.
However, its broad scope came with inconsistencies. Several tasks were flagged by the community as poorly specified or unstable due to changes in external services.
Version 2.0 addresses these issues directly. The updated suite consists of 89 tasks, each subjected to multiple hours of manual and LLM-assisted validation. The emphasis is on making tasks solvable, realistic, and clearly specified, raising the difficulty ceiling while improving reliability and reproducibility.
A notable example is the download-youtube task, which was removed or refactored in 2.0 because of its dependence on unstable third-party APIs.
“Astute Terminal-Bench fans might notice that SOTA performance is comparable to TB1.0 despite our claim that TB2.0 is harder,” Shaw noted on X. “We believe this is because task quality is significantly higher in the new benchmark.”
Harbor: Unified Rollouts at Scale
Alongside the benchmark update, the team released Harbor, a new framework for running and evaluating agents in cloud-deployed containers.
Harbor supports large-scale rollout infrastructure, with compatibility for major providers such as Daytona and Modal.
Designed to generalize across agent architectures, Harbor supports:
- Evaluation of any container-installable agent
- Scalable supervised fine-tuning (SFT) and reinforcement learning (RL) pipelines
- Custom benchmark creation and deployment
- Full integration with Terminal-Bench 2.0
Harbor was used internally to run tens of thousands of rollouts during the creation of the new benchmark. It is now publicly available via harborframework.com, with documentation for testing and submitting agents to the public leaderboard.
Early Results: GPT-5 Leads in Task Success
Initial results from the Terminal-Bench 2.0 leaderboard show OpenAI's Codex CLI (command line interface), a GPT-5-powered variant, in the lead with a 49.6% success rate, the highest among all agents tested so far.
Close behind are other GPT-5 variants and agents based on Claude Sonnet 4.5.
Top 5 Agent Results (Terminal-Bench 2.0):
- Codex CLI (GPT-5) — 49.6%
- Codex CLI (GPT-5-Codex) — 44.3%
- OpenHands (GPT-5) — 43.8%
- Terminus 2 (GPT-5-Codex) — 43.4%
- Terminus 2 (Claude Sonnet 4.5) — 42.8%
The close clustering among top models indicates active competition across platforms, with no single agent solving more than half of the tasks.
Submission and Use
To test or submit an agent, users install Harbor and run the benchmark using simple CLI commands. Submissions to the leaderboard require five benchmark runs, and results can be emailed to the developers along with job directories for validation.
harbor run -d terminal-bench@2.0 -m "<model>" -a "<agent>" --n-attempts 5 --jobs-dir <path/to/output>
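For illustration, a minimal end-to-end submission flow might look like the sketch below. The pip package name and the output path are assumptions made for this example; consult the Harbor documentation on harborframework.com for the exact installation steps and identifiers.
# Install Harbor (package name assumed for this sketch; see harborframework.com for the official instructions)
pip install harbor
# Run the full Terminal-Bench 2.0 suite five times, as required for a leaderboard submission
harbor run -d terminal-bench@2.0 -m "<model>" -a "<agent>" --n-attempts 5 --jobs-dir ./tb2-submission
# Email the resulting job directory (./tb2-submission here) to the maintainers for validation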
Terminal-Bench 2.0 is already being integrated into research workflows focused on agentic reasoning, code generation, and tool use. According to co-creator Mike Merrill, a postdoctoral researcher at Stanford, a detailed preprint is in progress covering the verification process and design methodology behind the benchmark.
Aiming for Standardization
The combined release of Terminal-Bench 2.0 and Harbor marks a step toward more consistent and scalable agent evaluation infrastructure. As LLM agents proliferate in developer and operational environments, the need for controlled, reproducible testing has grown.
These tools offer a potential foundation for a unified evaluation stack, supporting model improvement, environment simulation, and benchmark standardization across the AI ecosystem.