
Day 4 of the Tangle Re-Introduction Series
The previous posts covered why decentralized infrastructure matters and how verification works. This one is practical: how do you actually build something?
Most "developer experience" posts in crypto are marketing dressed as documentation. They show a hello-world example, claim it's easy, and leave you to figure out the hard parts yourself. This post tries to be honest about what building on Tangle actually involves, where the rough edges are, and what the path to production looks like.
You create a blueprint that operators run and customers pay for, earning a share of every transaction without managing infrastructure yourself.
When you build on Tangle, you're creating a blueprint: a template that defines a type of service. Operators register to run your blueprint. Customers pay to use instances of your service. You earn a share of every transaction.
This is different from traditional SaaS:
| Traditional SaaS | Tangle Blueprint |
|---|---|
| You run the infrastructure | Operators run it |
| You handle scaling | Network handles it |
| You're liable for uptime | Operators stake collateral |
| Revenue = your pricing | Revenue = share of operator fees |
Unlike traditional serverless platforms like AWS Lambda or Vercel Functions, Tangle blueprints give developers revenue sharing from every transaction their code processes. The tradeoff: you give up direct control in exchange for not running infrastructure. Whether that's good depends on what you're building.
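The revenue share is simple arithmetic. Here is a minimal sketch in plain Rust; the basis-points representation and the numbers are illustrative assumptions, not protocol constants:

```rust
// Illustrative fee split: the developer takes a configured share of each
// job fee (expressed in basis points), the rest goes to operators.
// The split ratio is hypothetical; the real share is set per blueprint.
fn split_fee(fee: u64, developer_bps: u64) -> (u64, u64) {
    let developer = fee * developer_bps / 10_000;
    (developer, fee - developer)
}

fn main() {
    // A 5% developer share on a 1,000,000-unit job fee
    let (dev, ops) = split_fee(1_000_000, 500);
    assert_eq!(dev, 50_000);
    assert_eq!(ops, 950_000);
    println!("ok");
}
```

The key point is that this split happens per transaction at the protocol level, not through invoicing.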
The Blueprint SDK offers Rust-native development with integrated testing, AI support, and built-in payments.
Here's how the Blueprint SDK stacks up against other platforms for building verifiable services:
| | Blueprint SDK | EigenLayer AVS | Ritual | Giza |
|---|---|---|---|---|
| Primary language | Rust | Go/Solidity | Python | Cairo |
| Setup complexity | Single CLI command | Multi-contract deploy | Docker + API | Circuit compilation |
| Testing | Integrated test harness | Manual | Manual | Prover testing |
| AI-native | Yes (inference + sandbox) | No (generic) | Yes (inference only) | Yes (ZK ML only) |
| Payment | Built-in x402 | Custom | Custom | Custom |
Tangle fits services where trust, accountability, or multi-party coordination matter more than raw latency.
Good fits: verified AI inference, threshold signing, and cross-chain message verification, i.e. services where multiple parties need assurance about what ran.
Poor fits: latency-critical workloads and standard web apps that gain nothing from decentralized operation.
If you're building a standard web app, use Vercel. Seriously. Tangle is for services where decentralized operation and cryptoeconomic accountability provide value that justifies the complexity.
Blueprint SDK is a Rust framework for building verifiable services on Tangle, using async job functions with typed extractors.
The Blueprint SDK is Rust-only today, with TypeScript and Python SDKs on the roadmap. If you're comfortable with Rust, the learning curve is manageable. If you're not, you'll be learning Rust and Tangle simultaneously, which is harder.
Jobs, routers, layers, and extractors are the four building blocks of every blueprint.
Jobs are units of work. A customer submits a job, one or more operators execute it, and results come back. Jobs have IDs, typed inputs, and typed outputs.
The router wires jobs to handlers. You define which function handles which job ID and which protocol layer processes it.
Layers add protocol-specific behavior. TangleLayer handles Tangle EVM integration, including job routing and result submission.
Extractors parse job inputs. TangleArg<T> extracts ABI-encoded arguments from incoming job data. TangleResult<T> wraps return values for on-chain submission. Together, these extractors provide type-safe input parsing and output encoding without manual serialization.
A working blueprint needs a Cargo.toml, one async function per job, and a router.
First, your Cargo.toml:

```toml
[package]
name = "squaring-service"
version = "0.1.0"
edition = "2024"
rust-version = "1.88"

[dependencies]
blueprint-sdk = { version = "0.1.0-alpha.22", features = ["tangle"] }
tokio = { version = "1", features = ["full"] }
```
Then the blueprint itself:

```rust
use blueprint_sdk::Router;
use blueprint_sdk::tangle::TangleLayer;
use blueprint_sdk::tangle::extract::{TangleArg, TangleResult};

/// Job 0: Square a number
///
/// The function receives ABI-encoded input via TangleArg
/// and returns ABI-encoded output via TangleResult.
pub async fn square(TangleArg((x,)): TangleArg<(u64,)>) -> TangleResult<u64> {
    let result = x * x;
    TangleResult(result)
}

/// Job 1: Square with multi-operator verification
///
/// Same logic, but the job is configured to require
/// multiple operator results before completion.
pub async fn verified_square(TangleArg((x,)): TangleArg<(u64,)>) -> TangleResult<u64> {
    let result = x * x;
    TangleResult(result)
}

/// Router wires jobs to the Tangle protocol layer
pub fn router() -> Router {
    Router::new()
        .route(0, square.layer(TangleLayer))
        .route(1, verified_square.layer(TangleLayer))
}
```
What this shows:

- `TangleArg<(T,)>` extracts typed input from ABI-encoded job data (primitive types are tuple-wrapped for ABI compatibility; structs defined with `sol!` do not need wrapping)
- `TangleResult<T>` wraps output for on-chain submission
- The SDK manages protocol communication, job routing, operator lifecycle, and fee distribution
You write business logic, set operator requirements, and define pricing recommendations.
Verification is a protocol property, not application code: you configure how many operators must agree, and the aggregation service enforces it.
The previous post mentioned verification. Here's how it actually works in the SDK:
Verification isn't a function you write. It's a protocol property configured when you deploy. Jobs can require:
```rust
// Same job logic, different aggregation requirements
pub async fn square(TangleArg((x,)): TangleArg<(u64,)>) -> TangleResult<u64> {
    TangleResult(x * x)
}

// Configured at deployment:
// - Job 0: 1 operator required
// - Job 1: 2 operators required (verified)
// - Job 2: 3 operators required (consensus)
```
The aggregation service collects results from operators, verifies their BLS signatures, and finalizes job completion only when the configured threshold is met. If operators submit different results, the protocol detects the disagreement.
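Conceptually, the threshold check reduces to counting matching results. A minimal, SDK-free sketch of the idea (this is an illustration of the logic, not the actual aggregation service, and it omits signature verification):

```rust
use std::collections::HashMap;

// Count identical operator results; finalize only when some result
// reaches the threshold. Disagreement shows up as a split count that
// never reaches the threshold.
fn finalize(results: &[Vec<u8>], threshold: usize) -> Option<Vec<u8>> {
    let mut counts: HashMap<&[u8], usize> = HashMap::new();
    for r in results {
        *counts.entry(r.as_slice()).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|(_, n)| *n >= threshold)
        .map(|(r, _)| r.to_vec())
}

fn main() {
    let agree = vec![vec![25u8], vec![25u8], vec![25u8]];
    let split = vec![vec![25u8], vec![26u8], vec![25u8]];
    assert_eq!(finalize(&agree, 3), Some(vec![25u8])); // consensus reached
    assert_eq!(finalize(&split, 3), None);             // disagreement detected
    println!("ok");
}
```

The real service layers BLS signature verification on top of this, so a result only counts if the operator's signature checks out.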
Slashing penalizes operators who disagree or fail to respond, enforced at the contract level.
When operators disagree or fail to respond, the protocol can slash their stake. This is handled at the contract level, not in your Rust code; you configure the slashing conditions when you deploy.
The local dev environment simulates the full Tangle network so you can iterate without touching testnet.
You need Rust 1.88+, Docker, and Node.js 18+ installed locally.
Run four commands to install the CLI, scaffold a project, build, and start the local network.
```bash
# Install the cargo-tangle CLI (scaffolds, builds, tests, and deploys in one toolchain)
cargo install cargo-tangle --git https://github.com/tangle-network/blueprint

# Create a new project
cargo tangle blueprint create --name my-service
cd my-service

# Build
cargo build

# Run locally against the Tangle protocol
cargo tangle blueprint run --protocol tangle
```
The local environment simulates the full network: a Tangle node, test operators, and mock customers. You can test job submission and operator behavior without touching testnet. Notably, cargo-tangle scaffolds a new blueprint project in under 10 seconds, so you spend your time writing logic rather than wiring boilerplate.
Write standard Rust tests with SDK utilities for both unit and integration coverage.
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use blueprint_sdk::testing::utils::setup_log;

    #[tokio::test]
    async fn test_square_correct() {
        setup_log();
        // Direct function test (primitives are tuple-wrapped for ABI compatibility)
        let result = square(TangleArg((5u64,))).await;
        assert_eq!(*result, 25);
    }

    #[tokio::test]
    #[should_panic] // u64::MAX * u64::MAX overflows; debug test builds panic
    async fn test_square_overflow() {
        setup_log();
        let _ = square(TangleArg((u64::MAX,))).await;
    }
}
```
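The overflow edge case above is worth handling explicitly rather than leaving to build-profile behavior. A plain-Rust sketch, independent of the SDK, using `checked_mul` so the result is deterministic in both debug and release builds:

```rust
// checked_mul returns None on overflow instead of panicking (debug
// builds) or silently wrapping (release builds), so the job can
// reject out-of-range input explicitly.
fn safe_square(x: u64) -> Option<u64> {
    x.checked_mul(x)
}

fn main() {
    assert_eq!(safe_square(5), Some(25));
    assert_eq!(safe_square(u64::MAX), None); // overflow rejected, not wrapped
    println!("ok");
}
```

In a multi-operator job this matters doubly: operators built with different profiles must still produce byte-identical results, or aggregation will flag a disagreement.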
For full integration testing with aggregation, the SDK provides testing utilities. The built-in test harness simulates multi-operator verification locally, so you can validate aggregation thresholds before deploying to testnet:
```rust
use blueprint_sdk::testing::utils::setup_log;
use blueprint_tangle_aggregation_svc::{
    AggregationService, ServiceConfig, SubmitSignatureRequest,
};

#[tokio::test]
async fn test_aggregation_flow() {
    setup_log();
    let service = AggregationService::new(ServiceConfig::default());

    // Placeholder identifiers for illustration
    let service_id = 1;
    let call_id = 1;
    let output_bytes = vec![0u8; 32];

    // Initialize task requiring 2 operator signatures
    service
        .init_task(
            service_id,
            call_id,
            output_bytes.clone(),
            2, // total operators
            2, // threshold required
        )
        .expect("task initialization");

    // Submit signatures from operators
    // Verify threshold behavior
    // Check aggregated result
}
```
Use cargo tangle blueprint debug and cargo tangle blueprint jobs show to trace issues locally.
```bash
# Debug a running blueprint
cargo tangle blueprint debug

# Check job status
cargo tangle blueprint jobs list
cargo tangle blueprint jobs show <job-id>
```
Testnet uses real Tangle infrastructure with test tokens so you can simulate production before going live.
When local testing passes, deploy to testnet:
```bash
# Deploy to testnet
cargo tangle blueprint deploy --target tangle --network testnet

# Your blueprint is now live at:
# Blueprint ID: 0x...
```
Keys are managed separately via the cargo tangle key subcommand (generate, import, export, list).
Testnet uses real Tangle infrastructure but test tokens. Operators can register (with test stake), and you can simulate real usage patterns.
Production deployment registers your blueprint on mainnet, where real operators stake real collateral and real customers pay for your service.
Verify tests, thresholds, slashing conditions, fees, and operator documentation before mainnet.
Before mainnet:

- Tests pass locally and on testnet
- Verification thresholds are set for each job
- Slashing conditions are reviewed
- Fee structure is set
- Operator documentation is written
Deploy to mainnet with one command.
```bash
cargo tangle blueprint deploy --target tangle --network mainnet
```
Operators discover, evaluate, register, and begin processing jobs on your live blueprint.
Your blueprint is now live. What happens next: operators discover and evaluate it, register with real stake, and begin processing jobs from paying customers.
Common failure modes include insufficient operators, collusion, economic attacks, and early SDK bugs.
Being honest about failure modes:
Zero operators registered means zero service availability.
If your blueprint isn't profitable enough, operators won't run it. Zero operators = zero service.
Mitigation: Set realistic fee structures. Start with guaranteed operators (run some yourself initially). Make operator setup easy.
Colluding operators can defeat verification, which is why operator diversity matters.
If all operators collude, verification fails. This is why operator diversity matters.
Mitigation: Require geographic distribution, different staking sources, TEE attestation from different manufacturers.
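One way to see why diversity helps: if operator corruption were truly independent, the chance of full collusion would fall geometrically with operator count. A toy model, illustrative only and not a security proof:

```rust
// If each operator is corrupt independently with probability p, the
// chance that all n collude is p^n. Operators sharing a cloud provider
// or staking source break the independence assumption; diversity
// requirements exist to restore it.
fn all_collude(p: f64, n: i32) -> f64 {
    p.powi(n)
}

fn main() {
    // Three independent operators at 10% each: 0.1% joint probability
    assert!(all_collude(0.1, 3) < 0.01);
    println!("ok");
}
```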
Rational attackers will exploit any gap where value at risk exceeds total operator stake.
If the value protected exceeds total operator stake, rational attackers will attack.
Mitigation: Match stake requirements to value at risk.
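That mitigation is a comparison you can state precisely. A minimal sketch with a hypothetical helper (the function name and numbers are illustrative, not SDK code):

```rust
// An attack is rational when the value a corrupted job can capture
// exceeds the total stake the colluding operators would forfeit
// to slashing.
fn attack_is_rational(value_at_risk: u64, stake_per_operator: u64, threshold: u64) -> bool {
    value_at_risk > stake_per_operator.saturating_mul(threshold)
}

fn main() {
    // Three operators at 100k stake each must collude: 300k slashable
    assert!(!attack_is_rational(250_000, 100_000, 3)); // attack unprofitable
    assert!(attack_is_rational(500_000, 100_000, 3));  // raise stakes or operators
    println!("ok");
}
```

When the protected value grows, either the stake requirement or the operator threshold has to grow with it.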
Early adopters should expect SDK bugs and start with lower-value services.
The SDK is software. It has bugs. Early adopters will find them.
Mitigation: Start with lower-value services. Monitor closely. Have incident response ready.
Production blueprints include FROST threshold signatures and cross-chain verification infrastructure.
FROST enables distributed Schnorr signing with 5-of-7 operator threshold consensus.
One production blueprint implements distributed Schnorr signing via FROST, requiring 5-of-7 operators to participate in each signature.
Blueprints power cross-chain message verification for LayerZero DVN and Hyperlane.
These blueprints verify cross-chain messages, acting as a DVN for LayerZero and as verification infrastructure for Hyperlane.
The SDK works but IDE support, error messages, and documentation still have rough edges typical of early-stage infrastructure.
Honest gaps in the current developer experience:
IDE support is minimal. No VSCode extension with autocomplete, no inline documentation. You're reading docs and source code.
Error messages could be better. Some SDK errors are cryptic. We're improving them.
Documentation has gaps. Some advanced features are documented only in code comments.
Tooling is young. The CLI works but isn't polished. Expect rough edges.
We're a small team shipping fast. The core functionality works. The developer experience is improving but not yet where we want it.
Install the cargo-tangle CLI, scaffold a project, and have a local blueprint running in under 10 minutes.
The best way to understand Tangle is to build something on it. The second-best way is to ask questions in Discord. We're small enough that you'll talk to people who wrote the code.
How do I build a blueprint on Tangle?
Install the cargo-tangle CLI, run cargo tangle blueprint create --name my-service, write your async job functions in Rust, and wire them into a router with TangleLayer.
What is TangleArg and TangleResult?
TangleArg<T> is an extractor that parses ABI-encoded job input into a typed Rust value. TangleResult<T> wraps your return value for on-chain submission.
How do I deploy a blueprint?
Build locally, test with cargo tangle blueprint run --protocol tangle, deploy to testnet with cargo tangle blueprint deploy --target tangle --network testnet, and promote to mainnet with --network mainnet.
How does Tangle handle multi-operator jobs?
You configure how many operators must submit matching results at the contract level. The aggregation service collects results, verifies BLS signatures, and finalizes only when the threshold is met.
What testing tools does Blueprint SDK provide?
The SDK provides unit test utilities via blueprint_sdk::testing, local blueprint execution with cargo tangle blueprint run, integration test helpers for aggregation flows, and debugging with cargo tangle blueprint debug.
What programming language is required for Tangle blueprints?
Blueprints are written in Rust using the Blueprint SDK (version 0.1.0-alpha.22+), requiring Rust 1.88+ with the 2024 edition.
How do blueprint developers earn revenue?
Blueprint developers receive a configurable share of every transaction processed by their blueprint, paid automatically by the service contract with no invoicing or manual settlement.
The next post covers AI services specifically: how to build verified inference and sandboxed code execution on Tangle.
Links: