The Ship That Got Sued
In 1840, an English admiralty court named a ship as the defendant in a lawsuit. The vessel itself was treated as a legal person and held directly accountable for debts its owners had never authorized and knew nothing about.
This radical approach was entirely practical. Ships in the age of sail had to act as autonomous economic actors because there was simply no way to communicate with their owners once on the high seas. A captain could spend months sailing, buying provisions, hiring crew, entering contracts and accumulating obligations, all far beyond the knowledge or control of the shipowner back home. Messages took months to arrive, effective oversight was impossible, and yet commerce still had to continue.
So the court invented a necessary fiction, treating the ship as a person, an entity that could be sued, arrested, or seized, and by doing so it resolved a problem that had blocked global trade for centuries: how do you hold an autonomous agent accountable when its actions are beyond its principal's reach?
If that sounds bizarre, consider that the legal treatment of corporations followed a remarkably similar arc. Medieval jurists recognized that groups of people acting collectively needed a legal identity separate from any individual member, and by the Renaissance, churches and universities chartered by governments could own property, enter contracts, and be sued independently of the people inside them. Limited liability, which took another three centuries to mature fully, completed the architecture: investors could routinely fund ventures without personally overseeing them, because the entity itself bore the obligations, and ownership and accountability had been formally separated.
Whether it was ships sailing the high seas or the formation of corporations, the sequence was identical: a new class of economic actor emerged that operated faster and further than existing accountability structures could handle, and every time, the law eventually caught up by inventing new forms of legal personhood that allowed autonomy and accountability to coexist. History is littered with these moments where technological progress outpaces the legal framework, uncertainty reigns for a while, and then someone invents a new category of responsibility and the economy expands again.
That sequence is in full swing right now as legislators scramble to create clarity around crypto and blockchain, but our story is bigger than crypto, much bigger than blockchain.
The Strange Economics of Intelligence
When OpenAI launched GPT-4 in March 2023, frontier intelligence cost roughly $30 per million input tokens. Just two years later, GPT-5 Nano offered comparable baseline performance at $0.05 per million input tokens, a 600-fold reduction in cost. The frontier models themselves are following the same curve: GPT-5 currently runs at $1.25 per million input tokens, a 24-fold drop from GPT-4's launch price, and nothing suggests these trajectories are flattening.
To understand what these numbers actually mean, consider that the average person speaks roughly 10 million tokens per year, which means that at budget model pricing, replicating an entire year of human linguistic output in AI inference already costs about fifty cents. Scale up to a hundred million tokens, which gets closer to the volume of a person's total cognitive activity across a year including every internal thought, every deliberation, every decision, and you land at around five dollars for a year of human-equivalent cognition.
Read that again. Every thought you will have this year, every decision, every conversation, every plan, every doubt, all of it, for less than the cost of parking your car for an hour in the heart of Amsterdam.
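The back-of-envelope arithmetic above can be sketched in a few lines. The token counts are the rough estimates from the text, not measured figures, and the prices are the per-million-input-token rates quoted earlier:

```python
# Back-of-envelope cost of a "human-year" of cognition at budget-model
# prices. Token volumes are the article's rough estimates, not data.

PRICE_PER_MILLION = 0.05                  # GPT-5 Nano input price, USD
SPOKEN_TOKENS_PER_YEAR = 10_000_000       # rough yearly linguistic output
COGNITIVE_TOKENS_PER_YEAR = 100_000_000   # rough yearly total cognition

def yearly_cost(tokens: int, price_per_million: float) -> float:
    """USD cost to generate `tokens` at a given per-million-token price."""
    return tokens / 1_000_000 * price_per_million

speech = yearly_cost(SPOKEN_TOKENS_PER_YEAR, PRICE_PER_MILLION)
cognition = yearly_cost(COGNITIVE_TOKENS_PER_YEAR, PRICE_PER_MILLION)

print(f"Year of speech:    ${speech:.2f}")     # $0.50
print(f"Year of cognition: ${cognition:.2f}")  # $5.00

# The headline cost drop: GPT-4 launch price vs GPT-5 Nano.
print(f"Reduction: {30.00 / 0.05:.0f}x")       # 600x
```

Changing a single constant, the budget price per million tokens, rescales every figure, which is why the steady fall in inference prices matters more than any one model release.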
For most of recorded history, intelligence was the scarcest resource in the economy, with empires rising and falling depending on how many capable minds they could organize. The corporation itself exists largely as a machine for coordinating limited human cognition toward a shared objective.
Many fail to grasp the significance of competent thinking at a price point that makes it disposable, because the prevailing opinion is that AI is unreliable: it hallucinates, makes confident errors, and feels more like a clever toy than serious economic infrastructure. That perception confuses the intelligence with the packaging. What the public interacts with is a single model responding to a single prompt, a chatbot, and under those restrictive conditions the limitations are obvious. Strip away the chatbot and the picture changes considerably: frontier models already pass bar exams, medical licensing tests, and advanced programming benchmarks, and they rank among the top forecasters in the world. Measured against conventional cognitive benchmarks, these systems perform at or above human expert level across a surprisingly wide range of domains.
What they lack is not intelligence, but rather orchestration. A model answering a single prompt is raw capability sitting idle, and this is precisely the gap that a rapidly emerging class of so-called AI agents is filling. Systems like Claude Code, Manus, and a growing number of others take the same underlying intelligence and wrap it in an operational framework with goals, persistent memory, access to external tools, and the ability to decompose complex tasks, verify their own output, and iterate until the work is actually done. Rather than waiting for a human to type the next question, these systems operate with increasing autonomy and something that begins to resemble initiative, pursuing objectives across multiple steps and adapting when things go wrong.
Think of it as the difference between asking Einstein to calculate your tax return and giving him a research team, funding, and a decade to rethink the nature of time itself. Once intelligence is provided with the framework and resources to reliably execute work rather than merely answer questions, it stops acting like software and starts behaving like an economic actor, and that is where the numbers become staggering.
Emad Mostaque, who has been tracking these trajectories more carefully than most, estimates that within five years autonomous agents will drive more economic activity than humans. This sounds dramatic until you do the arithmetic: a human decision-maker executes perhaps ten decisions per day while a single AI agent might execute ten thousand, and when you multiply by a few million agents you are looking at a volume of economic transactions that dwarfs anything in human history.
The infrastructure question this raises is usually framed in terms of speed and throughput, asking whether payment rails can handle the volume and whether settlement can keep up, but those are the wrong questions. The real question is the one the admiralty courts faced two centuries ago: what happens when autonomous economic actors make decisions faster than the systems designed to hold them accountable?
The Accountability Gap
When an AI agent buys data that turns out to be stolen, who answers? When an agent enters a contract that violates sanctions, who is liable? When an agent executes a trade in a jurisdiction where that transaction is prohibited, where does the legal responsibility land?
Today there is no answer, and without one the agentic economy stays confined to sandboxes, toy demos, and unregulated experiments, much as crypto itself spent years on the margins until regulatory clarity began turning headwinds into tailwinds. For the agentic economy, the stakes are even higher. The moment agents touch regulated commerce, whether finance, insurance, healthcare, cross-border trade, or government procurement, accountability becomes the gatekeeper. These sectors represent the majority of global economic activity and every transaction within them must ultimately be traceable to a responsible legal entity, without exception.
We have solved capability and we are solving cost at breathtaking speed, but accountability remains wide open, and it is accountability, far more than speed or intelligence, that will determine whether the agentic economy operates inside the real economy or forever alongside it.
Building for the Agentic Economy
History suggests how this resolves. When ships outran their owners' ability to supervise them, the law invented legal personhood for vessels, and when corporations outgrew any individual's capacity to oversee them, the law invented limited liability. Each time, a framework emerged that preserved autonomy while creating traceable accountability after the fact.
The agentic economy will require something analogous, and whatever system underlies accountability for autonomous agents will need specific properties: verifiable identity that can be selectively disclosed, direct settlement without custodial intermediaries, predictable fees at volumes suited to machines rather than humans, and a traceable link between every autonomous action and a responsible legal entity.
Most of the financial infrastructure currently proposed for AI agents was built on the crypto mindset of privacy via anonymity. But anonymous infrastructure carries no built-in link between a transaction and a responsible entity, and at the volume and speed at which agents will operate, that missing link goes from being a manageable issue to a structural problem with unlimited downside.
A small number of blockchain architectures were built with a different set of assumptions, anticipating the need for privacy and accountability to coexist rather than compete. On Concordium, agents transact between identities verified at the protocol level but disclosed only under defined legal conditions, in stablecoins with full self-custody and built-in accountability. They prove compliance without exposing identity and never touch smart contracts. Fees are low, fiat-pegged, and covered by sponsored transactions. These were design choices made for compliance reasons years before the agentic economy arose, and they are what led Coinbase to partner with Concordium for x402 agentic payment integration.
The historical pattern is clear: the economy builds the actors first and the accountability frameworks follow. The ships came first and the law that made them persons came later. The agents are arriving now, and as the legal and regulatory frameworks catch up, the blockchains that already have accountability in their architecture will be the ones that thrive.