

Libertaria Routing Protocol (LRP): The Admin Guide


Section: infrastructure/networking
Audience: Chapter Admins, Network Engineers, Nexus Operators
RFC Reference: RFC-0012


If you have ever managed a BGP configuration, you have stared into a YAML abyss of AS-path prepending, local preference tuning, community string tagging, and route dampening parameters – and wondered whether any of this was strictly necessary to move a packet from A to B.

It was not.

LRP is the Libertaria Routing Protocol. It runs inside your Chapter’s infrastructure – your Nexus switches, your mesh nodes, your inter-Chapter links – and does exactly one thing: find the fastest path for every packet. It does not filter. It does not enforce policy. It does not care what is inside the packet. That separation is not laziness; it is the entire point.

BGP entangles routing with politics. Every LOCAL_PREF directive is a commercial preference disguised as a routing parameter. Every community string is governance leaking into physics. LRP strips all of that out. Routing is physics. Policy is governance. They do not share a layer.

If you are running a Chapter Nexus, LRP replaces BGP internally. At your border routers – where your network touches the public internet – a translation gateway speaks BGP to the outside world. Inside your perimeter: sovereignty.


Your Chapter has switches. Those switches have links between them – Ethernet, fiber, WiFi mesh, LoRa, maybe a WireGuard tunnel to a neighbouring Chapter.

LRP does four things with those links:

1. Measure everything, continuously. Every link gets micro-probes (64-byte packets, 20 times per second on active paths). The switch knows the latency, throughput, jitter, and packet loss of every connection in real time. Not cached. Not stale. Measured right now.

2. Build a map with no loops. The path map is a Directed Acyclic Graph – a data structure that cannot contain cycles by mathematical definition. Routing loops are not “prevented.” They are structurally impossible. If you have spent hours debugging STP convergence storms, read that sentence again.

3. Keep multiple paths warm. Every destination has up to four parallel paths ranked by quality. Traffic flows over the best one. If it degrades, the next one is already measured, warm, and ready. “Failover” is not a process. It is selecting the next row in the table.

4. Share measurements with neighbours. Each switch tells its neighbours: “Here is what I can reach, and here is how good the path is.” The neighbours incorporate that information, update their maps, and pass it along. The network converges to a shared understanding of path quality within milliseconds – not the minutes BGP takes.

That is the entire protocol. Everything else is implementation detail.


You can. Nobody is stopping you. But consider what BGP brings to an internal network:

Complexity you do not need. BGP was designed for the inter-domain problem: competing autonomous systems that do not trust each other and want to enforce commercial preferences. Inside your Chapter, you control every switch. There are no competing commercial interests. The entire BGP policy machinery – LOCAL_PREF, MED, community strings, route maps, prefix lists – is dead weight.

Convergence you cannot afford. When a BGP session drops, convergence takes 30 seconds to 3 minutes depending on configuration. During that window, traffic is black-holed or misrouted. In a Chapter running financial settlement, communications infrastructure, or real-time coordination, minutes of instability are not acceptable.

Censorship machinery you should not build. BGP’s policy attributes are the mechanism by which ISPs implement government-mandated filtering. If your Chapter runs BGP internally, every switch has the capability to enforce policy-based routing. Whether you use that capability today is irrelevant. The mechanism exists. Future administrators – or future governments – will find it.

LRP has no policy mechanism. There is nothing to misuse because there is nothing to use.


Every link and path in your network has a Quality Vector (QV) – a 16-byte measurement containing:

| Field | Size | What It Measures |
|---|---|---|
| latency_us | 4 bytes | One-way latency in microseconds |
| throughput_kbps | 4 bytes | Measured throughput in kbps |
| jitter_us | 2 bytes | Standard deviation of latency |
| loss_rate | 2 bytes | Packet loss (0–100% mapped to 0–65535) |
| hop_count | 1 byte | Number of hops to destination |
| sig_depth | 1 byte | Cryptographic signature chain length |
| flags | 1 byte | Path characteristics (mesh segment, cross-chapter, border path) |
| _reserved | 1 byte | Future use |
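For readers who think in code, the 16-byte layout above can be sketched with Python's struct module. The field order and little-endian packing here are assumptions for illustration; the normative wire format lives in RFC-0012.

```python
import struct

# Hypothetical packing of the 16-byte Quality Vector from the table
# above. Field order and little-endian layout are illustrative
# assumptions; the normative wire format is defined in RFC-0012.
QV_FORMAT = "<IIHHBBBB"  # 4+4+2+2+1+1+1+1 = 16 bytes

def pack_qv(latency_us, throughput_kbps, jitter_us, loss_rate,
            hop_count, sig_depth, flags, reserved=0):
    return struct.pack(QV_FORMAT, latency_us, throughput_kbps,
                       jitter_us, loss_rate, hop_count, sig_depth,
                       flags, reserved)

qv = pack_qv(latency_us=850, throughput_kbps=940_000, jitter_us=120,
             loss_rate=655,            # ~1% on the 0-65535 scale
             hop_count=3, sig_depth=2, flags=0b001)
assert len(qv) == 16
```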

These four metrics – latency, throughput, jitter, loss – collapse into a single Composite Quality Score (0.0–1.0) using a weighted function. The weights are configurable per Chapter:

Score = (W_latency    × latency_score)
      + (W_throughput × throughput_score)
      + (W_jitter     × jitter_score)
      + (W_loss       × loss_score)

Default weights: 40% latency, 30% throughput, 15% jitter, 15% loss.

A trading Chapter sets latency to 70%. A media Chapter sets throughput to 60%. A rural mesh Chapter might weight loss rate highest because LoRa links drop packets. The measurement is physics. The weighting is your Chapter’s decision. Clean separation.
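As a sketch of how the weighted collapse might look in practice. The normalization of raw measurements into 0.0–1.0 sub-scores (and the "worst-case" caps it uses) is an assumption; only the weighted sum itself comes from the formula above.

```python
# Sketch of the Composite Quality Score. The weighted sum mirrors the
# formula above; the sub-score normalization (caps, linear mapping) is
# an illustrative assumption, not part of the specification.

DEFAULT_WEIGHTS = {"latency": 0.40, "throughput": 0.30,
                   "jitter": 0.15, "loss": 0.15}

def sub_score(value, worst):
    """Map a raw metric onto 0.0-1.0; 0.0 at `worst` or beyond."""
    return max(0.0, 1.0 - value / worst)

def composite_score(qv, weights=DEFAULT_WEIGHTS):
    scores = {
        "latency":    sub_score(qv["latency_us"], worst=100_000),   # 100 ms
        "jitter":     sub_score(qv["jitter_us"],  worst=20_000),    # 20 ms
        "loss":       sub_score(qv["loss_rate"],  worst=65_535),    # 100 %
        # More throughput is better, so this one scales up to a cap:
        "throughput": min(1.0, qv["throughput_kbps"] / 1_000_000),  # 1 Gbps
    }
    return sum(weights[m] * scores[m] for m in weights)

fiber = {"latency_us": 500, "throughput_kbps": 940_000,
         "jitter_us": 50, "loss_rate": 0}
lora  = {"latency_us": 80_000, "throughput_kbps": 20,
         "jitter_us": 15_000, "loss_rate": 6_553}
assert composite_score(fiber) > composite_score(lora)
```

Swapping in the trading or mesh weight profiles changes only the `weights` argument; the measurements themselves never change.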

Classical routing protocols build a tree (STP) or a table (BGP). LRP builds a Directed Acyclic Graph – a DAG.

The difference matters. A tree has one path to every destination. A table has one preferred path. A DAG has all paths, simultaneously, ordered by quality.

          ┌── Switch B ── QV:0.85 ──┐
          │                         ▼
Switch A ─┼── Switch C ── QV:0.72 ── Switch F (destination)
          │                         ▲
          └── Switch D ── QV:0.91 ──┘

Switch A knows three paths to Switch F. All three are measured. All three are warm. The Active Cluster selects the best k paths (default: 4) and ranks them. Traffic flows over the top-ranked path. If that path degrades below the second-ranked path, traffic shifts. Sub-millisecond. No convergence phase. No flap.

Why “acyclic” matters: A DAG cannot contain cycles. If Switch A routes to B, B routes to C, and C routes back to A – that is a cycle, and it is structurally impossible in a DAG. The data structure enforces it. No TTL hacks. No STP port blocking. Mathematics.

Each switch periodically tells its neighbours: “Here are the destinations I can reach, and here are the Quality Vectors for each path.” The neighbours incorporate this information using the classical Bellman-Ford relaxation:

If the path through my neighbour to destination X is better than my current best path to X, update my table.

This is the same algorithm that powered RIP (Routing Information Protocol) in the 1980s – one of the first internet routing protocols. Bellman-Ford is old, proven, and well-understood. LRP uses it with multi-metric Quality Vectors instead of simple hop counts.
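A minimal sketch of that relaxation rule, using scalar scores as stand-ins for full Quality Vectors. The flat link penalty and the table layout are illustrative assumptions; only the "adopt the neighbour's path if it is better" logic comes from the text.

```python
# Sketch of the Bellman-Ford relaxation described above: adopt a
# neighbour's advertised path when it scores better than the current
# best. Scores stand in for full Quality Vectors.

def relax(table, neighbour, advertised, link_penalty=0.05):
    """One relaxation pass over a neighbour's advertisement.

    table:      {dest: (score, next_hop)} -- this switch's current view
    advertised: {dest: score}             -- neighbour's best scores
    Returns True if anything changed (i.e. re-advertise to our peers).
    """
    changed = False
    for dest, score in advertised.items():
        # Reaching the neighbour costs us the local link quality too; the
        # flat penalty is a stand-in for composing Quality Vectors.
        via = score - link_penalty
        if dest not in table or via > table[dest][0]:
            table[dest] = (via, neighbour)
            changed = True
    return changed

table = {"F": (0.60, "C")}
assert relax(table, "D", {"F": 0.91})      # better path via D: adopt it
assert table["F"][1] == "D"
assert not relax(table, "B", {"F": 0.70})  # 0.65 via B loses: ignore
```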

Convergence time: O(network_diameter × epoch). For a Chapter mesh with 8 hops diameter and 100ms epochs, full convergence after a topology change takes ~800ms. For a single link failure, failover is immediate because alternate paths are already active.

Kaspa’s GHOSTDAG algorithm orders parallel blocks in a blockchain DAG by finding the “heaviest k-cluster” – the set of blocks with the strongest connections. LRP adapts this for path selection.

The Active Cluster is the top-k paths to a destination, ranked by Composite Quality Score, with a diversity constraint: no more than two paths through the same next-hop switch. This prevents a single link failure from killing multiple paths simultaneously.
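A sketch of that selection logic. The parameter names and defaults (k=4, two paths per next-hop) mirror the text; the tuple representation of paths is an illustrative assumption.

```python
# Sketch of Active Cluster selection: take the top-k paths by score,
# but admit at most max_same_hop paths per next-hop switch so a single
# link failure cannot empty the cluster.

def active_cluster(paths, k=4, max_same_hop=2, min_score=0.10):
    """paths: list of (score, next_hop, path_id); greedy best-first."""
    cluster, per_hop = [], {}
    for score, next_hop, path_id in sorted(paths, reverse=True):
        if score < min_score:
            break  # sorted descending: everything after is worse
        if per_hop.get(next_hop, 0) >= max_same_hop:
            continue  # diversity constraint: no third path via this hop
        cluster.append((score, next_hop, path_id))
        per_hop[next_hop] = per_hop.get(next_hop, 0) + 1
        if len(cluster) == k:
            break
    return cluster

paths = [(0.91, "D", "p1"), (0.89, "D", "p2"), (0.88, "D", "p3"),
         (0.85, "B", "p4"), (0.72, "C", "p5"), (0.05, "E", "p6")]
cluster = active_cluster(paths)
# p3 (a third path via D) is skipped; p6 falls below min_score.
assert [p for _, _, p in cluster] == ["p1", "p2", "p4", "p5"]
```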

When the best path degrades:

  1. Continuous probing detects the degradation (within 50ms)
  2. The Cluster Ranker promotes path #2 to active
  3. The next packet uses the new path
  4. Elapsed time: one forwarding decision (~microseconds)

There is no “convergence” because there is nothing to converge. All paths were already measured. The ranking just changed.

If an attacker – state-level or otherwise – floods your switch with traffic, Fair Queuing ensures no single source can monopolize bandwidth.

Every LWF frame has a source_hint in its header (20 bytes). Fair Queuing buckets traffic by source_hint and allocates bandwidth equally via Deficit Round Robin. An attacker flooding from one source gets 1/N of bandwidth, where N is the number of active sources. The attacker must generate as many unique identities as your Chapter has members just to get a majority of bandwidth – and each identity needs a valid Entropy Stamp (RFC-0100) to pass the Membrane Agent.

This is not filtering. This is fluid dynamics. Every source gets equal treatment. No identity resolution. No DID lookup. No L1 interaction. Pure L0 physics.
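The bucket-and-quantum behaviour can be sketched as follows. Frame sizes and source hints here are illustrative, and the real scheduler operates on LWF frames rather than plain byte counts.

```python
from collections import deque

# Sketch of Deficit Round Robin over source_hint buckets. Every source
# earns the same quantum per round, so a flooder cannot buy more than
# 1/N of the bandwidth by queueing more frames.

def drr_schedule(queues, quantum=1350, rounds=3):
    """queues: {source_hint: deque of frame sizes}. Returns send order."""
    deficits = {src: 0 for src in queues}
    sent = []
    for _ in range(rounds):
        for src, q in queues.items():
            deficits[src] += quantum          # every source earns equally
            while q and q[0] <= deficits[src]:
                deficits[src] -= q[0]
                sent.append((src, q.popleft()))
            if not q:
                deficits[src] = 0             # empty queue keeps no credit
    return sent

# A flooder with 100 frames queued gets no more service per round than
# a member with 3 frames: equal treatment, no identity resolution.
queues = {"flooder": deque([1350] * 100), "member": deque([1350] * 3)}
order = drr_schedule(queues, rounds=3)
assert sum(1 for s, _ in order if s == "flooder") == 3
assert sum(1 for s, _ in order if s == "member") == 3
```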


Your Chapter operates a physical Nexus with its own switches and cabling.

┌──────────────────────────────────────────────┐
│                CHAPTER NEXUS                 │
│                                              │
│  Switch A ←──LRP──→ Switch B ←──LRP──→ ...   │
│      │                  │                    │
│      └────── LRP ───────┘                    │
│                                              │
│  All internal routing: LRP                   │
│  All switches: Chapter-controlled            │
│  Policy enforcement: NONE at L0              │
│  Filtering: Membrane Agent at L1 (per node)  │
└──────────────────────────────────────────────┘

Setup: Install the LRP daemon on every switch. Configure neighbour discovery. Set your Chapter’s QV weights. Done. Switches discover each other, exchange Quality Vectors, build Path DAGs, and route traffic.

No STP required. LRP’s DAG structure eliminates loops without disabling links. Every cable you plugged in carries traffic.

Scenario 2: Chapter Nexus + Public Internet

Your Chapter needs internet access through one or more ISPs.

┌──────────────────────────────┐    ┌──────────────────┐
│        CHAPTER NEXUS         │    │ PUBLIC INTERNET  │
│                              │    │                  │
│  Switch ←→ Switch ←→ Border  │◄──►│ ISP Router (BGP) │
│                  ┌─────────┐ │    │                  │
│                  │ LRP↔BGP │ │    │                  │
│                  │ Gateway │ │    │                  │
│                  └─────────┘ │    │                  │
└──────────────────────────────┘    └──────────────────┘

Setup: One or more Border Nodes run the LRP↔BGP translation gateway. Internally: LRP. Externally: BGP to your ISP. The Border Node translates LRP Quality Vectors into BGP route attributes and vice versa.

The ISP sees a normal BGP peer. Your internal network runs sovereign routing. The border is the only place BGP complexity exists.

Scenario 3: Inter-Chapter Mesh (Direct Link)

Two Chapters connect via direct fiber, point-to-point wireless, or LoRa mesh.

 Chapter Alpha                          Chapter Beta
┌──────────────┐     Direct Link     ┌──────────────┐
│ Border Node  │◄═══════════════════►│ Border Node  │
│ (LRP native) │     QV measured     │ (LRP native) │
└──────────────┘     continuously    └──────────────┘

Setup: Point the LRP daemons at each other. They exchange QVs. Each Chapter’s Path DAG now includes routes through the other Chapter. No BGP. No AS negotiation. No peering agreement. Two switches, one measured link.

If the link is LoRa, the QV naturally reflects the high latency and low throughput. The Cluster Ranker uses the LoRa path as a fallback when faster paths are available, and as primary when it is the only path. Automatic. No manual configuration.

Chapter members run the LRP daemon as an OpenWRT plugin on their home routers.

┌──────────┐   WiFi    ┌──────────┐   WiFi    ┌──────────┐
│ Router A │◄─────────►│ Router B │◄─────────►│ Router C │
│  (LRP)   │           │  (LRP)   │           │  (LRP)   │
└──────────┘           └──────────┘           └──────────┘
     │                      │                      │
Household A            Household B            Household C

Setup: Flash OpenWRT with the LRP package. The router discovers neighbours, measures links, and participates in the Chapter mesh. If Router B’s internet connection drops, traffic from Household B routes through Router A or Router C automatically.

This is the Nexus-in-a-neighborhood model. No central infrastructure. No ISP dependency for local communication. The Chapter’s mesh is the network.


The Defense Stack (Why You Sleep at Night)

A Chapter network faces threats ranging from script kiddies to state actors. LRP’s contribution to defense is not filtering (that is the Membrane Agent’s job at L1). LRP itself contributes three things – wire-level rejection, fair queuing, and multi-path diversity – and two adjacent layers, Transport Skins and Entropy Stamps, complete the stack:

Packets that are not valid LWF frames (missing LWF\0 magic bytes) are dropped before any processing. This kills random garbage, TCP SYN floods, and anything that does not speak Libertaria. Four bytes checked. O(1). Done.
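As a sketch (the LWF\0 magic comes from the text above; nothing else about the frame layout is assumed):

```python
# Sketch of the O(1) wire rejection described above: four bytes, one
# comparison, before anything else touches the packet. The b"LWF\x00"
# magic is from the text; the rest of the frame layout is not assumed.

LWF_MAGIC = b"LWF\x00"

def wire_accept(packet: bytes) -> bool:
    return packet[:4] == LWF_MAGIC

assert wire_accept(b"LWF\x00" + b"\x00" * 60)
assert not wire_accept(b"GET / HTTP/1.1\r\n")  # random internet garbage
assert not wire_accept(b"")                    # too short: rejected too
```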

Deficit Round Robin on source_hint guarantees equal bandwidth allocation. An attacker flooding from one identity gets 1/N. No identity resolution required. Not policy. Physics.

With k=4 parallel paths per destination across diverse physical media (Ethernet, fiber, LoRa, WiFi mesh), an attacker must flood all paths simultaneously. LoRa mesh and direct inter-Chapter links are physically unreachable via internet-based flooding. The attacker needs boots on the ground – not just bandwidth.

LRP control traffic (QV Exchange, Probe, Path Announce) can be wrapped in Transport Skins (RFC-0015). To the outside observer, your QV Exchange looks like HTTPS traffic to Cloudflare. Your probes look like DNS queries. The attacker cannot identify LRP traffic to selectively target it.

Above LRP, the Membrane Agent (RFC-0110) requires Entropy Stamps on incoming traffic. Generating valid stamps costs compute. The attacker must burn CPU/GPU for every packet that passes Membrane filtering. At scale, this is expensive.

Combined effect: Each layer multiplies the attacker’s cost. Wire rejection handles garbage. Fair Queuing handles volume. Multi-path handles link-level attacks. Skins handle identification. Entropy handles Sybils. By the time you stack all five, you need a nation-state budget and physical access to mesh links – and even then the LoRa paths survive.

Clarity here prevents misunderstandings later.

LRP does not filter spam. That is the Membrane Agent’s job (RFC-0110). By the time a packet reaches LRP, L1 has already approved it. LRP routes it without inspection.

LRP does not authenticate identities. It reads the source_hint and dest_hint from LWF headers for routing decisions. It does not verify that the source_hint belongs to a real DID. That is L1’s problem.

LRP does not enforce Chapter policy. If your Chapter governance decides to block certain destinations or throttle certain traffic types, that enforcement happens at L1 (Membrane Agent) or L2 (Chapter governance rules). LRP sees an approved packet and finds the fastest path. Period.

LRP does not replace the Membrane Agent. They are complementary. The Membrane decides what passes. LRP decides where it goes. One is a gatekeeper. The other is a courier. The courier does not read the mail.

LRP does not work on the public internet (yet). LRP runs between Switch Nodes that you control. At the border, BGP translation handles internet connectivity. If SCION or another BGP alternative gains traction, LRP could peer with it directly – the architecture supports it, but the implementation is Phase 4.


If you currently run internal BGP, OSPF, or STP, here is what changes:

| Property | STP/RSTP | OSPF | Internal BGP | LRP |
|---|---|---|---|---|
| Loop prevention | Disables links | SPF tree | AS-path | DAG (all links active) |
| Parallel paths | 1 | Equal-cost only | Limited (add-paths) | k=4 default (configurable) |
| Failover | 1–50 sec | 1–10 sec | 30 sec–3 min | < 1 ms |
| Link utilization | ~25% | ~50% | ~60% | ~100% |
| Policy machinery | None | Minimal | Extensive | None (by design) |
| Configuration complexity | Low | Medium | High | Low |
| Censorship capability | None | None | Full | None |
| Probe/measurement | BPDU (slow) | Hello (10 sec) | Keepalive (60 sec) | 50 ms active / 500 ms standby |

The jump from “60 seconds between keepalives” to “50 milliseconds between probes” is not incremental. It is a different class of responsiveness. Your routing table is a real-time measurement of your network, not a cached snapshot from a minute ago.


Pre-defined profiles for common Chapter types:

# Default: balanced
[weights.default]
latency = 0.40
throughput = 0.30
jitter = 0.15
loss = 0.15

# Trading/Finance: latency-critical
[weights.trading]
latency = 0.70
throughput = 0.10
jitter = 0.15
loss = 0.05

# Media/Streaming: bandwidth-critical
[weights.media]
latency = 0.15
throughput = 0.60
jitter = 0.10
loss = 0.15

# Rural Mesh: reliability-critical
[weights.mesh]
latency = 0.15
throughput = 0.15
jitter = 0.20
loss = 0.50

# Voice/Real-time: jitter-critical
[weights.realtime]
latency = 0.30
throughput = 0.10
jitter = 0.45
loss = 0.15

[probing]
# Interval between probes on active paths (microseconds)
active_interval_us = 50000 # 50ms → 20 probes/sec
# Interval between probes on standby paths (microseconds)
standby_interval_us = 500000 # 500ms → 2 probes/sec
# Consecutive failures before path declared dead
fail_threshold = 3
# Consecutive successes before recovered path promoted
recovery_threshold = 5
# Probe packet size (bytes)
probe_size = 64

[cluster]
# Maximum parallel paths per destination
k = 4
# Maximum paths through same next-hop (diversity constraint)
max_same_hop = 2
# Minimum score for a path to enter the cluster (0.0–1.0)
min_score = 0.10
# Topology epoch interval (milliseconds)
epoch_ms = 100

[fair_queue]
# DRR quantum (bytes per scheduling round)
quantum = 1350 # One LWF Standard frame
# Maximum tracked flows
max_flows = 4096
# Overflow policy: hash_merge or drop_new
overflow_policy = "hash_merge"
# Weight mode: equal or entropy_weighted
weight_mode = "equal"

[border]
# Enable BGP↔LRP translation
enabled = true
# BGP peer configuration (standard BGP daemon handles BGP side)
bgp_daemon = "bird"
bgp_config_path = "/etc/bird/bird.conf"
# Chapter prefix to announce externally
chapter_prefix = "2001:db8:cafe::/48"
# Transport Skin for inter-Chapter LRP over hostile internet
skin = "mimic_https" # or "raw", "mimic_dns", "mimic_video"

  1. Install LRP daemon on all Chapter switches
  2. Configure neighbour addresses (or enable auto-discovery)
  3. Select QV weight profile for your Chapter type
  4. Verify DAG construction: lrp status --dag should show all switches and paths
  5. Run probe check: lrp probe --all should return QVs for every known path
  6. Disable STP on internal links (LRP handles loop prevention; STP will fight it)
  7. Monitor via RFC-0510 Observability events: $LTP/obs/+/+/network/+/lrp/*

You do not need to do anything. LRP detects the failure via probe timeout (3 × 50ms = 150ms), shifts traffic to the next path in the Active Cluster, and continues. You will see an observability event:

$LTP/obs/chapter/{node}/network/warn/lrp/path/degraded

When the link recovers, probes succeed, the path re-enters the cluster, and traffic may shift back if it ranks higher. Automatic.

  1. Install LRP daemon
  2. Cable it to at least one existing switch
  3. LRP auto-discovers the neighbour, exchanges QVs, and integrates into the Path DAG
  4. All other switches learn the new paths within one convergence window (~800ms for 8-hop diameter)

LRP’s Bellman-Ford engine detects routing anomalies automatically:

  • Impossible latency: A path claiming lower latency than physically possible for its hop count
  • Impossible throughput: A LoRa segment claiming gigabit speeds
  • Negative cycles: A path whose quality score increases with more hops (Bellman-Ford negative cycle = something is lying)

Anomaly events emit to:

$LTP/obs/chapter/{node}/network/error/lrp/anomaly/{type}

These feed into the Slash Protocol (RFC-0121) for node reputation impact if the anomaly source is identified.
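The first two checks can be sketched as simple sanity predicates. The thresholds below are illustrative assumptions, and negative-cycle detection happens inside the Bellman-Ford engine itself rather than per-QV.

```python
# Sketch of the per-QV anomaly checks listed above. The thresholds
# (per-hop latency floor, LoRa throughput ceiling) are illustrative
# assumptions; the structure mirrors the checks in the text.

LORA_MAX_KBPS = 50      # generous ceiling for any LoRa segment
MIN_US_PER_HOP = 5      # nothing forwards faster than ~5 us per hop

def anomalies(qv, is_lora_segment=False):
    found = []
    if qv["latency_us"] < qv["hop_count"] * MIN_US_PER_HOP:
        found.append("impossible_latency")
    if is_lora_segment and qv["throughput_kbps"] > LORA_MAX_KBPS:
        found.append("impossible_throughput")
    return found

# A QV claiming 10 us over 6 hops at gigabit speed on LoRa is lying twice.
liar = {"latency_us": 10, "hop_count": 6, "throughput_kbps": 1_000_000}
assert anomalies(liar, is_lora_segment=True) == [
    "impossible_latency", "impossible_throughput"]
```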


Q: Can I use LRP without the rest of the Libertaria stack? Technically, yes. LRP reads LWF headers but does not require the Membrane Agent, QVL, or any L1+ component to route packets. In practice, you want the Membrane Agent for spam protection and Entropy Stamps for Sybil resistance. LRP alone gives you fast routing. LRP + L1 gives you fast sovereign routing.

Q: What happens when LRP and STP both run on the same switch? STP will disable links that LRP expects to be active. They will fight. Disable STP on all internal LRP links. LRP’s DAG structure provides loop freedom without link disabling.

Q: How much bandwidth do probes consume? On a node with 4 active paths and 12 standby paths: roughly 53 kbps. Negligible on any link faster than 2G.
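That figure can be reproduced from numbers already in this guide (64-byte probes, 20/sec on active paths, 2/sec on standby paths); the sketch ignores L2 framing overhead.

```python
# Back-of-the-envelope check of the ~53 kbps probe overhead, using only
# figures stated elsewhere in this guide. L2 framing overhead ignored.
PROBE_BYTES = 64
active_paths, standby_paths = 4, 12
probes_per_sec = active_paths * 20 + standby_paths * 2  # 104 probes/sec
kbps = probes_per_sec * PROBE_BYTES * 8 / 1000          # bits, not bytes
assert round(kbps) == 53
```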

Q: Can a Chapter configure LRP to prioritize certain traffic types? No. Not at L0. Traffic prioritization is a governance decision (L2) enforced at L1 (Membrane Agent). LRP sees an approved packet and routes it. If your Chapter governance wants to prioritize voice over bulk transfer, the Membrane Agent marks packets with priority hints, and the application acts on them. The router does not inspect payloads.

Q: What if my entire Chapter runs on a single switch? Then you do not need LRP. LRP is for multi-hop topologies where path selection matters. A single switch has no routing decisions to make.

Q: Does LRP support IPv4/IPv6 destination routing? LRP routes by LWF dest_hint (Blake3 hash of DID), not by IP address. The Border Translation gateway handles the mapping between LWF destinations and IP prefixes for traffic entering/leaving the public internet.

Q: How does LRP handle a split-brain scenario (network partition)? Each partition operates independently with its own Path DAG. Paths to unreachable destinations get no probe responses, fail after fail_threshold misses, and are removed from the Active Cluster. Packets for unreachable destinations queue in OPQ (RFC-0020) for up to 72 hours. When the partition heals, QV Exchange resumes and paths re-enter the DAG within one convergence window.


LRP is currently at v0.1.0 (specification). The implementation roadmap:

| Phase | Content | Target |
|---|---|---|
| Phase 1 | Core engine: QV, Bellman-Ford, Path DAG, Cluster Ranking | Specification (this RFC) |
| Phase 2 | Probing, measurement, anomaly detection | First deployable daemon |
| Phase 3 | Multi-transport: Ethernet, LoRa, WiFi mesh, WireGuard | Mesh-capable |
| Phase 4 | Border Translation (BGP gateway), Transport Skins | Internet-connected |
| Phase 5 | Formal verification, fuzzing, OpenWRT package | Production-hardened |

If you are building Chapter infrastructure today, design your topology with LRP in mind: maximize link diversity, avoid single points of failure, and leave STP turned off on internal links. When the daemon ships, your network is ready.


For the full specification with wire formats, Zig code, and formal proofs: RFC-0012 Libertaria Routing Protocol

For transport-level censorship resistance: RFC-0015 Transport Skins

For the filtering layer that sits above LRP: RFC-0110 Membrane Agent

The wire does not care who speaks. The wire carries the signal. That is its only duty, and it performs it without opinion.

⚡️