# Architecture
Module layout, on-chain anchors, and data flow for paraloom-core.
Paraloom is a privacy Layer 2 for Solana. State (pool merkle root, nullifier set, validator registry, stake) is anchored on Solana via an Anchor program; transfers move privately off-chain under Groth16 zk-SNARKs; a small BFT cohort verifies proofs on commodity hardware.
## Module structure
paraloom-core/src/ is organized by concern, not by layer. Each directory is a module of the main crate, exported from src/lib.rs:
| Module | Responsibility |
|---|---|
| privacy/ | Groth16 circuits, Poseidon hash, Pedersen commits, sparse merkle tree, proof codec, batch verification |
| bridge/ | Solana program client, deposit listener, withdrawal submitter, version handshake |
| consensus/ | Withdrawal BFT, leader rotation, reputation tracker, slashing evidence catalog |
| compute/ | WASM job engine, executor sandbox, distribution coordinator, ownership-proof gadget |
| ceremony/ | BGM17 phase-2 contribution + verifier + transcript chain |
| coordinator/ | Active/passive coordinator role, snapshot replication, failover |
| network/ | libp2p (Kademlia DHT, ping, gossipsub, request-response), peer registry |
| storage/ | RocksDB-backed merkle, nullifier, blockchain, and compute stores; fsync on hot writes |
| health/ | /health, /ready, /metrics HTTP endpoints |
| validator/ | Validator role, registration, signing, heartbeat |
| node/ | Process lifecycle, task spawning, graceful shutdown |
| config/ | TOML-driven settings, network selection |
The Anchor on-chain program lives in programs/paraloom/ and is built separately (different Solana SDK version).
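For orientation, a minimal sketch of how src/lib.rs might export these modules (illustrative only; the real file may differ in ordering and re-exports):

```rust
// Sketch of src/lib.rs: one public module per directory in the table above.
pub mod bridge;
pub mod ceremony;
pub mod compute;
pub mod config;
pub mod consensus;
pub mod coordinator;
pub mod health;
pub mod network;
pub mod node;
pub mod privacy;
pub mod storage;
pub mod validator;
```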
## What's anchored on Solana
The Anchor program (programs/paraloom/src/lib.rs) holds:
| State | What it enforces |
|---|---|
| BridgeState.merkle_root: [u8; 32] | The L2 pool's root, updated by authority on consensus |
| BridgeState.program_version: u32 | Version handshake — L2 refuses to talk to wrong on-chain version |
| Nullifier PDAs (seeds = [b"nullifier", nullifier]) | Init-on-withdraw guarantees each nullifier is spent at most once |
| expiration_slot check on withdraw | Replay protection: requests past their slot deadline are rejected |
| ValidatorAccount PDAs | Per-validator stake (1 SOL minimum), reputation, total earnings, slashing record |
| ValidatorRegistry | Authority, total/active validator counts, minimum stake |
This is what makes Paraloom a Layer 2: the on-chain program enforces replay protection, nullifier uniqueness, and validator economics. Off-chain proof verification is wrapped by on-chain settlement.
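A condensed sketch of those shapes, following the table above. Only the listed field names, the nullifier seeds, and the expiration check come from the docs; the account name NullifierRecord, the handler arguments, and the authority field are illustrative assumptions, not the program's actual API (see programs/paraloom/src/lib.rs for the real definitions).

```rust
use anchor_lang::prelude::*;

// Illustrative layout of the bridge state described in the table above.
#[account]
pub struct BridgeState {
    pub authority: Pubkey,     // assumed: authority allowed to push new roots
    pub merkle_root: [u8; 32], // L2 pool root, updated on consensus
    pub program_version: u32,  // version handshake with the L2
}

// Empty marker account: its mere existence means the nullifier is spent.
#[account]
pub struct NullifierRecord {}

#[derive(Accounts)]
#[instruction(nullifier: [u8; 32])]
pub struct Withdraw<'info> {
    pub bridge_state: Account<'info, BridgeState>,
    // `init` fails if the PDA already exists, so each nullifier spends at most once.
    #[account(
        init,
        payer = payer,
        space = 8,
        seeds = [b"nullifier", nullifier.as_ref()],
        bump
    )]
    pub nullifier_record: Account<'info, NullifierRecord>,
    #[account(mut)]
    pub payer: Signer<'info>,
    pub system_program: Program<'info, System>,
}

// In the real program this handler lives inside the #[program] module.
pub fn withdraw(ctx: Context<Withdraw>, nullifier: [u8; 32], expiration_slot: u64) -> Result<()> {
    let _ = (ctx, nullifier);
    // Replay protection: reject requests past their slot deadline.
    require!(Clock::get()?.slot <= expiration_slot, ErrorCode::Expired);
    // ... verify the withdrawal against bridge_state.merkle_root and pay out.
    Ok(())
}

#[error_code]
pub enum ErrorCode {
    #[msg("withdrawal request expired")]
    Expired,
}
```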
## Two protocol primitives
### Private payments
Range proofs and replay protection were added in v0.4.0 (issues #60, #61).
### Private compute (alpha)
Compute is alpha — output-note plumbing pending. See Compute layer.
## Validator role
Validators are verify-only. They do not generate proofs. Each validator:
- Subscribes to the gossipsub topics for blocks, votes, and compute results
- Verifies incoming Groth16 proofs (~10 ms per proof on a single CPU core)
- Signs and broadcasts a vote
- Watches for primary coordinator's heartbeat; participates in failover election if missed
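A hypothetical sketch of the verify-then-vote step, assuming an arkworks-style Groth16 verifier over BN254 (the docs do not pin the curve or proving library, and the real privacy/ and validator/ APIs will differ):

```rust
use ark_bn254::{Bn254, Fr};
use ark_groth16::{Groth16, Proof, VerifyingKey};
use ark_snark::SNARK;

// Illustrative vote shape; the real one also carries the validator's signature.
struct Vote {
    block_hash: [u8; 32],
    approve: bool,
}

fn verify_and_vote(
    vk: &VerifyingKey<Bn254>,
    public_inputs: &[Fr],
    proof: &Proof<Bn254>,
    block_hash: [u8; 32],
) -> Vote {
    // Single-proof verification; batch verification lives in privacy/.
    let approve = Groth16::<Bn254>::verify(vk, public_inputs, proof).unwrap_or(false);
    Vote { block_hash, approve }
}
```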
Reputation-gated voting and slashing for equivocation / persistent unavailability live in the consensus module. See Consensus and Validator guide.
## Coordinator HA
One coordinator is primary at any time; the rest are passives that watch the primary's heartbeat and replicate its state snapshot. The failover scenario test asserts that a passive becomes primary in under 30 seconds. Details in Coordinator HA.
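A minimal sketch of the passive side of that heartbeat watch, using tokio timers; the timeout value and the promote_to_primary helper are illustrative, not the coordinator module's actual API:

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time};

const HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(10); // illustrative value

// Passive coordinator: consume heartbeats; if one is missed for too long,
// start the failover path (stubbed here as promote_to_primary).
async fn watch_primary(mut heartbeats: mpsc::Receiver<()>) {
    loop {
        match time::timeout(HEARTBEAT_TIMEOUT, heartbeats.recv()).await {
            Ok(Some(())) => continue, // primary is alive
            Ok(None) | Err(_) => {
                // Channel closed or heartbeat missed: run the failover election.
                promote_to_primary().await;
                return;
            }
        }
    }
}

async fn promote_to_primary() {
    // Stand-in: in paraloom-core this runs the failover election and
    // takes over from the latest replicated snapshot.
}
```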
## Storage
| Store | Backend | Notes |
|---|---|---|
| Merkle tree | RocksDB | Sparse, with fsync on hot writes |
| Nullifier set | RocksDB | Append-only, fsync on hot writes (durability-critical) |
| Blockchain state | RocksDB | Block headers, finality records |
| Compute store | RocksDB | Job queue, results, audit log |
fsync was added across durability-critical writes in v0.4.0 (issue #68) to close a corruption window where in-flight writes could be lost on power failure.
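With the rocksdb crate, a durability-critical write looks roughly like this (the key layout and marker value are assumptions, not the storage module's actual schema):

```rust
use rocksdb::{WriteOptions, DB};

// Append a spent nullifier with fsync so the write survives power failure,
// the behavior added for durability-critical stores in v0.4.0.
fn record_nullifier(db: &DB, nullifier: &[u8; 32]) -> Result<(), rocksdb::Error> {
    let mut opts = WriteOptions::default();
    opts.set_sync(true); // fsync the WAL before returning
    db.put_opt(nullifier, [1u8], &opts)
}
```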
## Network
libp2p stack: Kademlia DHT for discovery, gossipsub for message propagation, ping for liveness probes, request-response for direct queries. Bootstrap multiaddrs are read from the on-chain validator registry. The peer registry's state machine distinguishes slow peers from offline peers.
Codec reads are bounded — gossip messages have size caps to close DoS vectors. Details in Networking.
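The cap is the kind of setting sketched below, assuming a recent libp2p gossipsub ConfigBuilder; the 1 MiB figure is illustrative, not paraloom-core's actual limit:

```rust
use libp2p::gossipsub;

// Bound gossip message size so a peer cannot force unbounded codec reads.
fn gossip_config() -> gossipsub::Config {
    gossipsub::ConfigBuilder::default()
        .max_transmit_size(1024 * 1024) // illustrative 1 MiB cap
        .build()
        .expect("valid gossipsub config")
}
```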
## Operational endpoints
Each validator exposes /health, /ready, and /metrics on a separate metrics port (default :9300). Prometheus-format metrics include proof verify latency, consensus round outcomes, peer count, nullifier set size, and coordinator role state. See Monitoring.
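A minimal sketch of that endpoint surface, assuming axum for the HTTP layer (the real health/ module's server stack, handlers, and metric names may differ):

```rust
use axum::{routing::get, Router};

// Expose /health, /ready, and /metrics on the metrics port (default :9300).
// Handlers are stubs; the real ones report readiness and Prometheus metrics.
#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(|| async { "ok" }))
        .route("/ready", get(|| async { "ready" }))
        .route("/metrics", get(|| async { "# metrics in Prometheus text format\n" }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:9300").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```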
## Project layout
```
paraloom-core/
├── Cargo.toml            workspace + main crate
├── src/
│   ├── lib.rs            module exports
│   ├── bin/              CLI binaries (paraloom, paraloom-ceremony-*, etc.)
│   ├── privacy/          circuits, proof, codec, batch, sparse merkle
│   ├── consensus/        withdrawal BFT, reputation, slashing
│   ├── compute/          WASM engine, executor, distribution
│   ├── ceremony/         BGM17 phase-2 contrib + verifier + transcript
│   ├── coordinator/      primary/passive role, snapshot replication
│   ├── bridge/           Solana client, listener, submitter
│   ├── network/          libp2p behaviour, peer registry
│   ├── storage/          RocksDB stores
│   ├── health/           /health /ready /metrics endpoints
│   ├── validator/        registration, signing, heartbeat
│   └── node/             process lifecycle
├── programs/paraloom/    Solana Anchor program (separate build)
├── tests/                integration tests (407 tests passing)
└── .github/workflows/    CI: build, test, sign releases
```