Why AI Infrastructure Needs Decentralization

As AI agents transition from tools to workforce, who should own the infrastructure that hosts them? Day 1 of the Tangle Re-Introduction Series explores why decentralized, cryptoeconomically-secured infrastructure matters.
Published on February 4, 2026

Day 1 of the Tangle Re-Introduction Series


Last week, Moltbook went from "the most incredible sci-fi takeoff-adjacent thing" (Karpathy's words) to a security disaster. Moltbook is a social network for AI agents, where autonomous systems post, interact, and organize without direct human control. It went viral in late January. Then researchers at Wiz found the database wide open: 1.5 million API keys exposed, 35,000 emails leaked, private messages accessible to anyone who looked. The platform had no way to verify which posts came from actual AI systems and which came from humans using stolen credentials.

Business Insider ran the headline: "A viral AI agents platform was hacked in minutes, raising questions about security and vibe-coded apps."

Security breaches are not the only verification problem. After every major model release, developers complain the new version performs worse than the old one. "Opus must be nerfed because there's no way it's this retarded," one developer posted after the model destroyed hours of work. "It ruined so much." Levelsio, who built multiple products on these models, posted that GPT-5 was "so bad" after it advised him to delete a partition and promised his data would remain intact. It didn't. Garry Tan, YC's CEO, observed that Claude Code recommended using deprecated APIs that are 200x slower than current alternatives. "We're so early," he wrote. Translation: the tools don't work the way they should.

Users have no way to verify what changed, whether degradation is intentional cost-cutting, or whether they're getting the model they're paying for. The provider says "trust us." The user says "it feels worse." Neither can prove anything.

These failures share a common root: infrastructure without verification, without accountability, and without economic consequences for misbehavior. Moltbook had no access controls. Model providers have no obligation to maintain quality. In both cases, users bear the cost of failures they cannot prevent or even detect.

The question I keep returning to: as AI agents transition from tools to workforce, who should own the infrastructure that hosts them?

What AI Agents Actually Do Now

AI agents in 2026 are not hypothetical. By some estimates, coding agents now generate 30-50% of code at major technology companies. Research agents synthesize literature and design experiments across pharmaceutical and materials science. Trading agents execute strategies across decentralized exchanges, managing portfolios and rebalancing positions without human intervention. Customer service agents handle upwards of 70% of support inquiries at companies that have deployed them.

These systems share a common architecture: perception, reasoning, action. They observe through APIs and data feeds. They reason using large language models and planning algorithms. They act through tool use and code execution. The loop runs continuously.
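The perception-reasoning-action loop can be sketched in a few lines. This is an illustrative toy, not a real agent framework: `observe`, `plan`, and `execute` are hypothetical stand-ins for API polling, an LLM planner, and tool execution.

```python
# Minimal perception-reasoning-action loop (illustrative sketch;
# observe/plan/execute are hypothetical stand-ins, not a real agent API).

def observe(history):
    # Perception: poll APIs and data feeds for new information.
    return history + ["observation"]

def plan(history):
    # Reasoning: an LLM or planner would choose the next action here;
    # this toy planner simply stops after three observations.
    return "act" if len(history) < 3 else "stop"

def execute(action):
    # Action: tool use or code execution; here just a record of the step.
    return f"executed:{action}"

def run_agent(max_steps=10):
    history, actions = [], []
    for _ in range(max_steps):
        history = observe(history)
        action = plan(history)
        if action == "stop":
            break
        actions.append(execute(action))
    return actions

print(run_agent())  # → ['executed:act', 'executed:act']
```

In production the loop runs continuously rather than to a fixed step count, but the shape is the same: observe, decide, act, repeat.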

The economic value is already measured in billions. And the scope will only expand as capabilities improve and trust accumulates.

Three Models for Infrastructure

Three approaches compete to host this workforce.

The centralized model concentrates infrastructure in cloud providers. Amazon, Google, and Microsoft operate the data centers, control the APIs, and capture the margin. This model has real advantages: professional operations, high availability, economies of scale. Providers face reputational consequences for failures, and SLAs provide some contractual recourse.

But structural problems remain. Providers can observe what agents do. They can change terms unilaterally. Their economic penalty for misbehavior is bounded by litigation risk, which is slow, expensive, and uncertain. The provider relationship is asymmetric: developers need infrastructure more than any provider needs any individual developer.

The decentralized-compute model distributes infrastructure across independent providers but retains centralized coordination. A foundation or DAO sets terms, resolves disputes, captures fees. This creates competition among providers, but does not solve the coordination problem. The coordinator still accumulates power. Disputes still require trusted adjudication.

The cryptoeconomic model replaces trusted coordination with economic mechanisms. Providers stake assets that can be slashed for misbehavior. Smart contracts encode rules that execute automatically. Governance distributes decision-making to stakeholders.

Tangle implements the third approach.

What Verification Actually Requires

Critics of decentralized infrastructure raise a legitimate concern: slashing is punishment, not prevention. If an operator leaks your trading strategy, slashing them afterward doesn't un-leak the information. This critique is correct, and any honest discussion of cryptoeconomic security must address it.

The answer is that slashing is one layer of a multi-layer security model. The other layers do the actual prevention.

Trusted execution environments (TEEs) provide hardware-enforced isolation. Code runs inside an enclave that even the operator cannot observe. The operator can see that computation is happening, but not what data flows through it. TEEs are not perfect (side-channel attacks exist), but they raise the cost of data extraction dramatically.

In redundant execution with cryptographic comparison, multiple operators run the same computation independently and their results are compared cryptographically. Disagreement triggers investigation, and an operator who deviates from honest execution gets caught.
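A minimal version of the comparison step can be sketched as follows. The operator IDs and result format are invented for illustration; a real system would compare signed, committed results rather than raw bytes.

```python
# Redundant execution check (sketch): each operator submits a result,
# results are compared via hashes, and any deviating operator is flagged.
import hashlib
from collections import Counter

def digest(result: bytes) -> str:
    # Hash each result so comparison works on fixed-size commitments.
    return hashlib.sha256(result).hexdigest()

def compare_results(results: dict[str, bytes]) -> tuple[bytes, list[str]]:
    """results maps operator id -> raw result bytes.
    Returns (majority result, list of deviating operators)."""
    hashes = {op: digest(r) for op, r in results.items()}
    majority_hash, _ = Counter(hashes.values()).most_common(1)[0]
    deviators = [op for op, h in hashes.items() if h != majority_hash]
    canonical = next(r for op, r in results.items() if hashes[op] == majority_hash)
    return canonical, deviators

# Three hypothetical operators; one returns a different answer.
results = {"op-a": b"42", "op-b": b"42", "op-c": b"41"}
canonical, deviators = compare_results(results)
print(canonical, deviators)  # → b'42' ['op-c']
```

The flagged operator becomes the input to dispute resolution and, ultimately, slashing.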

Secure multi-party computation (MPC) splits secrets across multiple parties so no single party can reconstruct the full input. Analysis happens on encrypted shares. The pharmaceutical company processing clinical trial data can get results without any operator seeing the underlying data.
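The core idea behind MPC's secret sharing can be shown with additive shares. This is a simplified sketch of one building block (additive secret sharing over a prime field), not a full MPC protocol; the modulus choice is illustrative.

```python
# Additive secret sharing (sketch): a value is split into random shares
# that sum to the secret mod P, so no single party can reconstruct it.
# Linear operations like addition can be computed share-wise.
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(secret: int, n: int = 3) -> list[int]:
    # n-1 random shares, plus one chosen so the total sums to the secret.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Each party adds its own shares of x and y locally; only the combined
# result is ever reconstructed, never the inputs.
x_shares, y_shares = share(10), share(32)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
print(reconstruct(sum_shares))  # → 42
```

Real MPC protocols add machinery for multiplication, malicious-security checks, and communication, but the privacy property is the same: individual shares reveal nothing about the inputs.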

Slashing provides the economic backstop. Operators stake assets proportional to the value they might extract. If verification mechanisms detect misbehavior, slashing destroys the stake automatically. Stake sizes are set so that the expected cost of cheating (probability of detection times slash amount) exceeds the expected benefit.
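The deterrence condition is a one-line expected-value check. The dollar figures and detection probability below are illustrative assumptions, not protocol parameters.

```python
# Expected-value check behind slashing (sketch): cheating is unprofitable
# when detection probability times the slashed stake exceeds the gain.

def cheating_is_profitable(gain: float, stake: float, p_detect: float) -> bool:
    expected_cost = p_detect * stake  # stake destroyed, weighted by detection odds
    return gain > expected_cost

# Hypothetical numbers: an operator staking $500K, with an extractable
# gain of $300K and a 70% chance of being caught, faces an expected
# cost of $350K, so cheating is a losing bet.
print(cheating_is_profitable(gain=300_000, stake=500_000, p_detect=0.7))  # → False
```

The same inequality explains why stake requirements must scale with the value an operator could extract: a fixed stake stops deterring once the prize gets large enough.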

Different services require different verification mechanisms. A blueprint for AI inference might use TEEs for data isolation. A blueprint for financial computation might use redundant execution with majority voting. A blueprint for private data analysis might use MPC. The protocol provides the economic coordination; blueprints specify the verification approach.

Three Scenarios

To make this concrete, consider three scenarios. These are not hypothetical futures but problems companies face today.

A trading firm tests agent-driven strategies. No institutional investor deploys $100 million on day one. They start with a $1 million pilot, running alongside human traders, measuring performance and failure modes. The question is not whether AI agents will manage capital but what infrastructure they require.

In the centralized model, the cloud provider can observe trading patterns, positions, and strategies. Nothing prevents an insider from front-running or selling that information. In the Tangle model, the agent runs inside a TEE where the operator cannot observe execution. Multiple operators can verify results through redundant computation. Operators stake assets proportional to the value they might extract. If operators stake $500K to secure a $1M pilot, the economics work: the expected cost of cheating exceeds the expected benefit. As trust accumulates, stake requirements scale with portfolio size.

A pharmaceutical company processes clinical trial data. Proprietary data is worth billions in competitive advantage. The company needs analysis without exposure.

In the centralized model, providers have contractual obligations but limited economic penalty for breach. A lawsuit takes years and may not recover the value destroyed by leaked data. In the Tangle model, blueprints specify MPC protocols where analysis happens on encrypted shares. No single operator sees the underlying data. Slashing conditions define penalties, but more importantly, the architecture prevents exposure in the first place. The company selects operators based on security practices, reputation, and stake.

A software company deploys coding agents. Source code represents years of development effort. The agents need access to write code, which means they can also read it.

In the centralized model, security failure means litigation. The provider's incentive is to minimize security spending up to the point where expected litigation costs exceed the savings. In the Tangle model, operators stake assets proportional to the value of code they access. This creates a natural limit: operators with $100K stake won't be trusted with code worth $10 million. Customers select operators whose stake matches their exposure. A breach triggers slashing immediately. The economic penalty is certain, not contingent on winning a lawsuit.

Why Now

Several trends converge to make this moment critical.

Agent capabilities have reached commercial relevance. The systems described above exist in production today. Coding agents, trading agents, research agents: these are not demos but deployed infrastructure generating real economic value. This creates demand for infrastructure that matches the stakes.

Agents themselves need infrastructure with built-in accountability. A trading agent can't sue AWS for breach of contract. It can't lobby regulators or negotiate better terms. Agents are principals that lack the legal standing humans use to enforce agreements. They need infrastructure where accountability is cryptographic and automatic, not contingent on legal systems designed for humans.

Cryptoeconomic mechanisms have proven at scale. Proof-of-stake networks secure hundreds of billions of dollars. Restaking protocols manage tens of billions more. The mechanisms are proven for consensus. Applying them to service verification is engineering, not research.

Regulatory pressure on centralized providers is increasing. Antitrust scrutiny, data sovereignty requirements, AI-specific regulations. Decentralized infrastructure provides jurisdictional distribution that geographic concentration cannot match.

Developer demand for ownership is growing. The pattern where developers create value and platforms capture it has created a generation seeking alternatives. Protocols that distribute value to creators attract talent that centralized platforms cannot.

Infrastructure patterns, once established, become difficult to change. The decisions made now about who controls AI compute will shape the industry for decades.

What Tangle Provides

Tangle Network is a Substrate-based blockchain designed for coordinating off-chain computation with on-chain economic guarantees. It functions as a restaking layer: operators stake assets (native TNT or restaked assets from other networks) that can be slashed for misbehavior. The protocol handles service discovery, payment flows, and dispute resolution. Computation happens off-chain on operator infrastructure; settlement and accountability happen on-chain.

Blueprints are reusable templates that define service types. A blueprint specifies what computation the service performs, how it should be priced, what verification mechanisms apply, and what slashing conditions govern operator behavior. Developers create blueprints; the protocol handles deployment and economics.
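One way to picture a blueprint's contents is as a structured descriptor. Every field name and type below is invented for illustration and does not reflect Tangle's actual SDK or on-chain types; the point is only which concerns a blueprint bundles together.

```python
# Hypothetical blueprint descriptor (sketch; names are invented for
# illustration, not taken from the Tangle SDK).
from dataclasses import dataclass, field
from enum import Enum

class Verification(Enum):
    TEE = "tee"
    REDUNDANT = "redundant-execution"
    MPC = "mpc"

@dataclass
class SlashingCondition:
    description: str
    slash_fraction: float  # fraction of operator stake destroyed on violation

@dataclass
class Blueprint:
    name: str
    price_per_job: int          # e.g. in smallest payment units
    verification: Verification  # which mechanism secures results
    min_operator_stake: int
    slashing: list[SlashingCondition] = field(default_factory=list)

# A private-inference service might pick TEE isolation and a steep
# penalty for verifiably wrong results.
inference = Blueprint(
    name="private-inference",
    price_per_job=1_000,
    verification=Verification.TEE,
    min_operator_stake=500_000,
    slashing=[SlashingCondition("result mismatch under audit", 0.5)],
)
print(inference.verification.value)  # → tee
```

A financial-computation blueprint would swap `Verification.TEE` for `Verification.REDUNDANT`, and a private-analysis blueprint for `Verification.MPC`, with the protocol's economics unchanged underneath.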

The hook system enables blueprints to customize behavior at every lifecycle stage. Custom validation for operator registration. Custom logic for service activation. Custom verification for job completion. Blueprints implement the specifics while the protocol handles the commons.

Economic coordination distributes value to participants who create it. Developers earn from blueprint adoption. Operators earn from service fees. Delegators earn from backing reliable operators. Customers pay for services and receive cryptographic guarantees of accountability.

What Tangle Cannot Do

No infrastructure solves every problem. Being clear about limitations is more valuable than overpromising.

Tangle does not deter irrational attackers. Economic security assumes rational actors who will not attack when expected cost exceeds expected benefit. Against adversaries willing to lose their stake, or state-level actors with unlimited resources, economic security provides weaker guarantees.

Tangle does not guarantee verification for arbitrary computation. Verification mechanisms have tradeoffs. TEEs require trusting hardware manufacturers. Redundant execution is expensive. MPC has honest-majority assumptions. Blueprints must choose verification approaches appropriate to their threat models.

Tangle does not eliminate the cold-start problem. A protocol with no operators and no customers is an equilibrium, just a bad one. Bootstrapping requires incentives that attract initial participants before network effects take over.

These are real constraints. Building within them requires clear-eyed assessment of what cryptoeconomic infrastructure can and cannot achieve.

What's Next

This post is the first in a series reintroducing Tangle. Next, I'll dive into how blueprints and services actually work: the lifecycle from request to execution, the economics of operator incentives, and what it actually feels like to build on this infrastructure.

The infrastructure question will define the next decade of AI development. Whether that infrastructure concentrates power or distributes it depends on what we build now. I'd welcome thoughts from anyone working on these problems.

