diff --git a/docs/3sf_mini.md b/docs/3sf_mini.md index 2583925..316fec3 100644 --- a/docs/3sf_mini.md +++ b/docs/3sf_mini.md @@ -1,22 +1,582 @@ -# 3SF-mini +# 3SF-mini: Justification & Finalization -TODO: add 3SF-mini explanation +ethlambda uses **3SF-mini** (Three-Slot Finality, minimal version) for justification +and finalization. Unlike the Ethereum Beacon Chain's epoch-based Casper FFG, 3SF-mini +operates at the **slot level**: any slot can be justified, not just epoch boundaries. -## Justifiable Slot Backoff +## Quick Example: Three Slots to Finality -The 3SF-mini algorithm introduces a backoff mechanism to increase finalization rate during periods of asynchrony. -This is achieved by "diluting" the possible targets of a justification vote, through the `slot_is_justifiable_after` function (`Slot.is_justifiable_after` in the spec). -The function marks only some slots as valid justification targets, with the distance between them increasing over time since the last finalization. -This increases the period during which votes for a given slot can be included, improving the chances of achieving the required 2/3 majority for justification. -Also, since two consecutive justified **justifiable** slots are needed to finalized a slot, this backoff isn't immediately reset after finalization occurs, only lowering over time when synchrony is restored. +4 validators; at slot N, slot N-2 is finalized and slot N-1 is justified. -As an example, consider this scenario: +```text + source target + │ │ + ▼ ▼ + Slot N ──[ N-2 ]──[ N-1 ]──[ N ] + F J H -- The last finalized slot is 0. -- Slot 1 is justified. -- During the next 14 slots (2 to 15), only some votes with differing targets are included, so no new justification occurs. -- At slots 16, 17, 18, and 19, the last justifiable slot is 16, so enough votes are included to justify slot 16 (with slot 1 as source). - - Since there are multiple justifiable slots between 1 and 16, slot 1 isn't finalized yet. 
-- Slot 20 is reached, and in the following slots, enough votes are included to justify it, with slot 16 as source. - - Since slots 16 and 20 are consecutive justifiable slots, slot 16 is now finalized (and past slots too). - - The backoff is effectively reduced, since the next justifiable slots after 20 are 21, 22, 25, 28, and so on. + source target + │ │ + ▼ ▼ + Slot N+1 ──[ N-2 ]──[ N-1 ]──[ N ]────[ N+1 ] + F F J H + + source target + │ │ + ▼ ▼ + Slot N+2 ──[ N-2 ]──[ N-1 ]──[ N ]────[ N+1 ]────[ N+2 ] + F F F J H + + H = head J = justified F = finalized +``` + +At each slot, validators vote for the newest block as their **target**, citing +the latest justified checkpoint as their **source**: + +- **Slot N+1:** Votes `source=N, target=N+1`. Three of four vote + (3×3=9 >= 2×4=8), so **N+1 is justified**. +- **Slot N+2:** Votes `source=N+1, target=N+2`. Three of four vote, so + **N+2 justified**. N+1 and N+2 are consecutive justifiable slots and both + are justified, so **N+1 is finalized**. + +In the ideal case, each block carries attestations that justify the parent slot +and finalize the one before it. In practice, forks, missed slots, and delayed +votes can break this cadence. The rest of this document explains the rules that +make this work, and what happens when things go wrong. 
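The per-slot cadence above boils down to one integer inequality. A minimal sketch (the `supermajority` helper is a hypothetical name for illustration, not ethlambda's API):

```python
def supermajority(vote_count: int, validator_count: int) -> bool:
    """Two-thirds threshold in pure integer arithmetic."""
    return 3 * vote_count >= 2 * validator_count

# Quick example: 4 validators, 3 of them vote each slot.
validators = 4

# Slot N+1: votes (source=N, target=N+1); 3*3=9 >= 2*4=8.
assert supermajority(3, validators)        # N+1 justified

# Slot N+2: votes (source=N+1, target=N+2); same arithmetic.
assert supermajority(3, validators)        # N+2 justified
# N+1 and N+2 are consecutive justifiable slots and both are
# justified, so N+1 is finalized.

# 2 of 4 votes is not enough: 3*2=6 < 8.
assert not supermajority(2, validators)
```

Writing the check as `3 * votes >= 2 * validators` avoids floating point, so there is no rounding ambiguity exactly at the two-thirds boundary.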
+ +## Concepts + +| Term | Meaning | +|------|---------| +| **Justified** | A checkpoint backed by at least two-thirds of validator votes | +| **Finalized** | A checkpoint that can never be reverted | +| **Source** | The latest justified checkpoint (vote origin) | +| **Target** | The checkpoint being voted for (vote destination) | +| **Justifiable** | A slot that *could* become justified (per the 3SF-mini schedule) | + +## Justification via Supermajority + +A checkpoint becomes **justified** when at least two-thirds of validators attest to it as a target: + +```text + JUSTIFICATION + ───────────── + + Validators: V0 V1 V2 V3 V4 V5 V6 V7 V8 + │ │ │ │ │ │ │ + └───┴───┴───┴───┴───────┴───┘ + │ + 7 out of 9 votes + (3×7=21 >= 2×9=18) ✓ + │ + ▼ + ┌──────────────┐ + │ Checkpoint C │ + │ JUSTIFIED ✓ │ + └──────────────┘ +``` + +The threshold is computed as: `3 × vote_count >= 2 × validator_count` + +> **In ethlambda:** Justification and finalization are processed inside +> `process_attestations()` in `crates/blockchain/state_transition/src/lib.rs`, +> called from `process_block()`. The supermajority check is +> `3 * vote_count >= 2 * validator_count`. + +Attestations must also pass validity checks before they count: +- Source checkpoint must already be justified +- Target must not already be justified +- Neither source nor target may have a zero-hash root +- Source slot < Target slot (time flows forward) +- Both checkpoints must reference known blocks +- Target slot must be **justifiable** per the 3SF-mini schedule (see below) + +## The Justifiability Schedule + +Not every slot can be justified, only slots at specific distances from the last +finalized slot. This is the novel part of 3SF-mini. + +A slot is **justifiable** if `delta = slot - finalized_slot` matches any rule: + +> **In ethlambda:** The function `slot_is_justifiable_after(slot, finalized_slot)` in +> `crates/blockchain/state_transition/src/lib.rs` implements this check. 
It uses +> `isqrt()` for perfect square detection and the identity `4n(n+1) + 1 = (2n+1)²` +> for pronic number detection. + +```text + ┌───────────────────────────────────────────────────────┐ + │ JUSTIFIABILITY RULES │ + │ │ + │ Rule 1: delta ≤ 5 (always justifiable) │ + │ │ + │ Rule 2: delta = n² (perfect squares) │ + │ 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, ... │ + │ │ + │ Rule 3: delta = n(n+1) (pronic numbers) │ + │ 2, 6, 12, 20, 30, 42, 56, 72, 90, 110, ... │ + │ │ + └───────────────────────────────────────────────────────┘ +``` + +Visualizing the first 40 slots after finalization (✓ = justifiable): + +```text + delta: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 + ✓ ✓ ✓ ✓ ✓ ✓ ✓ · · ✓ · · ✓ · · · ✓ · · · ✓ + ╰─ delta ≤ 5 ──╯ 2×3 3² 3×4 4² 4×5 + + delta: 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 + · · · · ✓ · · · · ✓ · · · · · ✓ · · · · + 5² 5×6 6² +``` + +| delta | Rule | Formula | Gap since previous | +|-------|------|---------|--------------------| +| 0–5 | 1 | ≤ 5 | - | +| 6 | 3 | 2×3 | 1 | +| 9 | 2 | 3² | 3 | +| 12 | 3 | 3×4 | 3 | +| 16 | 2 | 4² | 4 | +| 20 | 3 | 4×5 | 4 | +| 25 | 2 | 5² | 5 | +| 30 | 3 | 5×6 | 5 | +| 36 | 2 | 6² | 6 | + +**Key property:** Gaps between justifiable slots grow, but never become infinite. +As more time passes since finalization, the network gets progressively wider windows +to accumulate votes. This creates a natural backpressure: if the network is struggling +to reach a two-thirds majority (e.g., due to partitions or validator dropouts), the increasing +gaps give more time for the supermajority to form. + +## Finalization + +A justified checkpoint becomes **finalized** when it is the source of a justification +whose target is the **next justifiable slot**. In other words, there must be **no +justifiable slots between source and target**: the two must be consecutive entries in +the justifiability schedule. 
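Both the justifiability schedule and the consecutiveness condition fit in a few lines. A sketch with hypothetical helper names (`is_justifiable`, `consecutive_justifiable`); the real checks live in `slot_is_justifiable_after` and `try_finalize`:

```python
from math import isqrt

def is_justifiable(slot: int, finalized_slot: int) -> bool:
    """3SF-mini justifiability schedule, relative to the finalized slot."""
    delta = slot - finalized_slot
    if delta <= 5:
        return True                                  # Rule 1: always justifiable
    if isqrt(delta) ** 2 == delta:
        return True                                  # Rule 2: perfect square n^2
    # Rule 3: pronic n(n+1), detected via 4n(n+1) + 1 = (2n+1)^2
    return isqrt(4 * delta + 1) ** 2 == 4 * delta + 1

def consecutive_justifiable(source: int, target: int, finalized_slot: int) -> bool:
    """Finalization condition: no justifiable slot strictly between them."""
    return not any(
        is_justifiable(s, finalized_slot) for s in range(source + 1, target)
    )

# F=10, source=13, target=16: slots 14 and 15 (delta <= 5) are
# justifiable, so 13 and 16 are not consecutive -> 13 NOT finalized.
assert not consecutive_justifiable(13, 16, 10)

# F=10, source=16 (delta 6 = 2x3), target=19 (delta 9 = 3^2):
# slots 17 and 18 (delta 7, 8) are not justifiable -> 16 finalized.
assert consecutive_justifiable(16, 19, 10)
```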
+ +> **In ethlambda:** The `try_finalize()` function iterates over slots between +> source and target and calls `slot_is_justifiable_after` on each. If any slot +> is justifiable, finalization fails (source and target aren't consecutive). +> The check uses `original_finalized_slot` (the finalized slot at the start of +> block processing), not the current one, since finalization can advance +> mid-processing. + +```text + FINALIZATION CHECK + ────────────────── + + Example 1: Finalization FAILS + + Finalized=10 Source=13 (justified) Target=16 (justified) + + [ 10 ] · · · [ 13 ] 14 15 [ 16 ] + ▲ ▲ + │ └── delta=5 ≤ 5 → justifiable! + └────── delta=4 ≤ 5 → justifiable! + + Justifiable slots exist between S and T → NOT FINALIZED ✗ + (13 and 16 are not consecutive justifiable slots) + + + Example 2: Finalization SUCCEEDS + + Finalized=10 Source=16 (justified) Target=19 (justified) + + [ 10 ] · · · [ 16 ] 17 18 [ 19 ] + ▲ ▲ + │ └── delta=8 → not justifiable ✓ + └────── delta=7 → not justifiable ✓ + + No justifiable slots between S and T → S is FINALIZED ✓ + (16 and 19 are consecutive: delta=6=2×3, then delta=9=3²) +``` + +The reasoning: if a justifiable slot exists between source and target, validators +could have directed their votes to that intermediate slot instead, potentially on a +different fork. By requiring source and target to be consecutive justifiable slots, +the protocol ensures that no alternative justification path can exist between them. + +### Justifiable Slot Backoff + +The justifiability schedule acts as a backoff mechanism to increase finalization rate +during periods of asynchrony. By "diluting" the possible targets of a justification +vote (via the `slot_is_justifiable_after` function), the protocol increases the window +during which votes for a given slot can be included, improving the chances of achieving +the required two-thirds majority. 
+ +Since finalization requires two consecutively justifiable slots to both be justified, +this backoff isn't immediately reset after finalization occurs; it only lowers over +time when synchrony is restored. + +**Example:** Extended asynchrony with gradual recovery. + +``` + F=0. Justifiable slots grow sparser as delta increases: + + delta ≤ 5: 0 1 2 3 4 5 (gap = 1) + delta 6–20: 6 9 12 16 20 (gap = 3–4) + delta 20–36: 20 25 30 36 (gap = 5–6) + ... + delta ~1000: 900 930 961 992 1024 (gap = 30–32) + 30² 30×31 31² 31×32 32² +``` + +**Phase 1: Long asynchrony, slow progress.** + +``` + Validators vote, but with many justifiable targets, votes scatter + and no single slot reaches >=2/3. As gaps widen, votes concentrate. + + Near slot 1000, the 32-slot gap between 992 and 1024 means + no competing justifiable target exists for 32 slots after 992. + All votes funnel toward 1024 once it is built. +``` + +**Phase 2: Slot 992 finalized.** + +``` + Slot 992 justified (source = earlier justified slot). + Slot 1024 justified (source = 992). + + slot: 0 ... 992 1024 + F J ······· J + ▲ ▲ + source ──────────▶ target + + Slots 993–1023: any justifiable from F=0? + Perfect squares? 31²=961 (before), 32²=1024 (boundary). None. + Pronic? 31×32=992 (boundary), 32×33=1056 (after). None. + No justifiable slots between them → slot 992 FINALIZED ✓ +``` + +**Phase 3: Partial reset. Backoff shrinks but doesn't vanish.** + +``` + New F=992. Justifiable slots shift: + + slot: 992 993 994 995 996 997 998 ··· 1001 ··· 1004 ··· 1008 ··· 1022 ··· 1028 + F ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ + ╰── delta ≤ 5 ──╯ 2×3 3² 3×4 4² 5×6 6² + + Dense slots 993–998 are already in the past! + Near the current slot (~1024), justifiable slots are ~6 apart: + + ... 1022 1028 1034 1041 ... + δ=30 δ=36 δ=42 δ=49 + 5×6 6² 6×7 7² + └──6──┘ └──6──┘ └──7──┘ + + Gaps shrank from 32 → 6, but didn't reset to 1. +``` + +**Phase 4: Further finalization closes the gap.** + +``` + Justify 1022 and 1028, finalize 1022. New F=1022. 
+ + From F=1022, at slot ~1028 (delta = 6): + + slot: 1022 1023 1024 1025 1026 1027 1028 + F ✓ ✓ ✓ ✓ ✓ ✓ + ╰────── delta ≤ 5 ──────╯ 2×3 + + Gaps are back to 1. Fast finalization resumes. + + Summary of gradual recovery: + + ┌───────────────────┬──────┬───────┬───────┬──────────────┐ + │ Finalization step │ F │ Head │ Delta │ Nearby gaps │ + ├───────────────────┼──────┼───────┼───────┼──────────────┤ + │ Before any │ 0 │ ~1000 │ ~1000 │ 31–32 │ + │ After 1st (992) │ 992 │ ~1024 │ ~32 │ 6–7 │ + │ After 2nd (1022) │ 1022 │ ~1028 │ ~6 │ 1 │ + └───────────────────┴──────┴───────┴───────┴──────────────┘ + + Each finalization step reduces the delta between the finalized + slot and the chain head, progressively tightening the gaps. +``` + +When finalization advances, the following cleanup occurs: + +- `justified_slots` window shifts forward (old slots pruned) +- `LiveChain` entries for finalized slots are pruned +- Gossip signatures and aggregation proofs for finalized blocks are cleaned up +- Future fork choice runs start from the finalized slot's successor + +> **In ethlambda:** The `justified_slots` bitlist uses relative indexing (index 0 = +> `finalized_slot + 1`). When finalization advances, `shift_window()` in +> `crates/blockchain/state_transition/src/justified_slots_ops.rs` drops the +> now-finalized prefix. The attestation target is also walked back to the nearest +> justifiable slot via `slot_is_justifiable_after` in `crates/blockchain/src/store.rs`. + +## End-to-End: From Head Selection to Finalization + +This section connects [LMD-GHOST fork choice](ghost-fork-choice.md) with 3SF-mini. +The [quick example above](#quick-example-three-slots-to-finality) showed the happy +path; here we focus on what happens when things go wrong. 
+ +### Recap: Attestation Anatomy + +Each attestation carries three checkpoints, each determined by a different mechanism: + +```text + ┌────────────────────────────────────────────────────────────────┐ + │ ATTESTATION │ + │ │ + │ head Newest block the validator sees │ + │ ← LMD-GHOST with min_score = 0 │ + │ │ + │ target Block the validator wants justified next │ + │ ← Derived from safe target, walked back to nearest │ + │ justifiable slot (feeds into 3SF-mini) │ + │ │ + │ source Latest justified checkpoint │ + │ ← Read from store state │ + └────────────────────────────────────────────────────────────────┘ +``` + +The **safe target** is computed by running LMD-GHOST with a two-thirds vote threshold. +Only blocks backed by a supermajority qualify, so the safe target is always at or +behind the head. The attestation **target** is derived by walking back from the head +toward the safe target (max 3 steps), then to the nearest justifiable slot. See +[Safe Target Selection](ghost-fork-choice.md#safe-target-selection) for details. + +> **In ethlambda:** `get_attestation_target()` in `crates/blockchain/src/store.rs` +> implements this walk-back. `JUSTIFICATION_LOOKBACK_SLOTS = 3` provides a liveness +> guarantee: even if the safe target is stuck, the target eventually advances once +> the head moves far enough ahead. + +### Lagging Safe Target (Fork with Delayed Convergence) + +When validators disagree about the head, the safe target lags behind: no single +branch has two-thirds support. This delays justification until the fork resolves. + +```text + Setup: 9 validators, finalized=100, justified=101 + Safe target threshold: >=6 votes (2/3 of 9) +``` + +**Slots 102–103: Fork splits votes. No progress.** + +```text + ┌──[ B102a ]──[ B103a ] V0–V4 (5) + [ F=100 ]──[ J=101 ]─┤ + └──[ B102b ]──[ B103b ] V5–V8 (4) +``` + +Neither branch clears two-thirds → safe target stuck at B101. Walk-back from head +always lands on source (B101). 
**No attestation can advance justification.** + +**Slot 104: V7 and V8 switch sides. Fork resolves.** + +V7 and V8 receive B102a (delayed by the partition) and switch to the a-branch. + +```text + ┌──[ B102a ]──[ B103a ]──[ B104a ] V0–V4, V7, V8 (7) + [ F=100 ]──[ J=101 ]─┤ + └──[ B102b ]──[ B103b ]──[ B104b ] V5–V6 (2) +``` + +B102a subtree now has 7 votes >= 6 → **safe target = B102a**. Walk-back from B104a +lands on B102a (2 steps). Slot 102 is justifiable (delta=2 ≤ 5). + +```text + source=101 ──▶ target=102 7/9 votes → 3×7=21 >= 2×9=18 → JUSTIFIED ✓ + Finalization: no slots between 101 and 102 → 101 FINALIZED ✓ +``` + +After slot 104: **finalized=101, justified=102.** + +**Slots 105–106: Full convergence and recovery.** + +All 9 validators on the a-branch. Slot 105: target=B104a → **B104a JUSTIFIED**. +But finalization fails: slot 103 (between source=102 and target=104) is justifiable +but was never justified (lost in the fork). + +Slot 106: target=B105a → **B105a JUSTIFIED**. No justifiable slots between 104 and +105 → **104 FINALIZED**. Finalization jumped from 101 to 104, skipping 102 and 103. + +```text + FORK WITH DELAYED CONVERGENCE + ═════════════════════════════ + + Slot: 100 101 102 103 104 105 106 + Status: F J · · · · · + fork ──────┤ + resolves + Head: · B101 B102a B103a B104a B105a B106a + Safe: · B100 B101 B101 B102a B104a B105a + stuck ─────┘ ▲ + │ + V7+V8 switch, safe target unsticks + + Justified: · 101 ─ ─ 102 104 105 + Finalized: · · ─ ─ 101 ─ 104 + ▲ + finalization jumps ──┘ + (102,103 skipped; 103 was never justified) +``` + +## Comparison with Casper FFG + +Both 3SF-mini and Casper FFG are finality gadgets built on the same foundation: +supermajority links between checkpoints. They differ fundamentally in their unit of +time and what that implies for validator participation. For a thorough treatment of +Casper FFG as used in Ethereum, see the +[eth2book chapter on Casper FFG](https://eth2book.info/capella/part2/consensus/casper_ffg/). 
+ +### Slots vs Epochs: The Core Architectural Split + +**3SF-mini: Every Validator, Every Slot** + +In 3SF-mini, **all validators vote in every slot**. A checkpoint can be justified at +any slot (subject to the justifiability schedule), and finalization can happen as soon +as two consecutive justifiable slots are both justified. + +```text + 3SF-mini (4-second slots, 4 validators) + + Slot 100 Slot 101 Slot 102 Slot 103 + ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ + │V0 V1 │ │V0 V1 │ │V0 V1 │ │V0 V1 │ + │V2 V3 │ │V2 V3 │ │V2 V3 │ │V2 V3 │ + └───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘ + │ │ │ │ + 4 votes 4 votes 4 votes 4 votes + per slot per slot per slot per slot + + Every validator participates in every slot. + >=2/3 threshold checked per-slot → can justify any slot. +``` + +This is simple and fast, but it means every validator must produce and verify a vote +every slot. The total message load scales as `validators × slots`. + +**Casper FFG: Validators Split Across an Epoch** + +Ethereum's beacon chain has ~1,000,000 active validators. Having all of them vote every +12-second slot would be unmanageable. Instead, Casper FFG groups 32 slots into an +**epoch**, and splits the validator set across the slots within it: + +```text + Casper FFG (12-second slots, 32 per epoch, ~900k validators) + + Epoch N + ┌─────────────────────────────────────────────────────────────┐ + │ Slot 0 Slot 1 Slot 2 ... Slot 30 Slot 31 │ + │ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ │ + │ │~28125│ │~28125│ │~28125│ ... │~28125│ │~28125│ │ + │ │valids│ │valids│ │valids│ │valids│ │valids│ │ + │ └──┬───┘ └──┬───┘ └──┬───┘ └──┬───┘ └──┬───┘ │ + │ │ │ │ │ │ │ + └────┼───────────┼───────────┼───────────────┼───────────┼────┘ + └───────────┴───────────┴───┬───────────┴───────────┘ + │ + All ~900k votes + collected over 32 slots + │ + ▼ + Epoch checkpoint + (first slot of epoch) + + Each validator attests exactly ONCE per epoch. + The full >=2/3 tally is only meaningful at epoch boundaries. 
+``` + +Each validator is shuffled into a **committee** assigned to one specific slot. Within +that slot, the committee may be further split (up to 64 sub-committees) for parallel +aggregation. The result: each validator only attests once per epoch, and the network +processes ~28,000 attestations per slot instead of ~900,000. + +**The trade-off:** + +| | **3SF-mini** | **Casper FFG** | +|---|---|---| +| **Who votes when** | All validators, every slot | Each validator once per epoch (in its assigned slot) | +| **Messages per slot** | `N` (all validators) | `N / 32` (one committee) | +| **Supermajority known after** | 1 slot (all votes in) | 1 epoch (need all 32 committees) | +| **Fastest finalization** | 2 slots = **8 seconds** | 2 epochs = **~12.8 minutes** | +| **Practical validator limit** | Hundreds–thousands | Millions | + +Epochs exist because of a scalability constraint, not a protocol-theory preference. If +you could process a million votes per slot, Casper FFG wouldn't need epochs at all. 3SF-mini +sidesteps this by targeting a smaller validator set, which lets it operate at slot granularity. + +### Finalization Logic + +Both require a chain of justified checkpoints, but the rules differ in what they check. + +**Casper FFG** uses **k-finality**. The original rule (k=1) requires a direct supermajority +link from a checkpoint to its immediate successor: justify epoch N+1 with source=N, and N +is finalized. Ethereum generalizes this to **k=2**, which handles the case where the network +falls slightly behind: + +```text + Casper FFG — 1-finality (ideal case): + + Epoch N Epoch N+1 + ┌─────┐ ┌─────┐ + │ CP │══════▶│ CP │ Supermajority link N → N+1 + │ J ✓ │ │ │ + └─────┘ └─────┘ + + Processing this link: + 1. Epoch N+1 becomes JUSTIFIED (target of a supermajority link) + 2. 
Epoch N becomes FINALIZED (direct successor justified) + + + Casper FFG — 2-finality (one epoch behind): + + Epoch N Epoch N+1 Epoch N+2 + ┌─────┐ ┌─────┐ ┌─────┐ + │ CP │ │ CP │ │ CP │ + │ J ✓ │ │ J ✓ │ │ │ + └─────┘ └─────┘ └─────┘ + │ │ + └══════ supermajority ══════┘ + link N → N+2 + + The direct link N→N+1 didn't form in time. + Instead, a link forms from N→N+2. Processing this link: + 1. Epoch N+2 becomes JUSTIFIED (target of a supermajority link) + 2. Epoch N becomes FINALIZED (all intermediates are justified) +``` + +The 2-finality rule is a recovery mechanism: even if the network missed the ideal one-epoch +finalization window, it gets a second chance. Ethereum tracks the justification status of the +last 4 epoch boundaries to detect both cases. In practice, most finalization happens via +1-finality during normal operation; 2-finality kicks in during brief network hiccups. + +**3SF-mini** takes a different approach entirely: + +```text + Slot S Slot T + ┌─────┐ ┌─────┐ + │ CP │──────▶│ CP │ No justifiable slots exist + │ J ✓ │ │ J ✓ │ between S and T + └─────┘ └─────┘ + ∴ Slot S is FINALIZED + + Rule: Finalized when NO intermediate checkpoints could exist +``` + +Instead of checking that intermediate checkpoints *are justified*, 3SF-mini checks that no +intermediate checkpoints *could exist at all*. This is a stronger guarantee: validators' +votes between source and target could only have gone to the target, since there's nowhere else +to direct them. This structural property is also why 3SF-mini doesn't need Casper's +surround-vote slashing condition. + +Casper's k-finality is essentially a tolerance parameter: "how many epochs behind can we be +and still finalize?" Ethereum chose k=2, meaning it tolerates one missed epoch. 3SF-mini +doesn't need this concept because the justifiability schedule itself adapts. Instead of +tolerating missed windows, it makes the windows wider when the network is struggling. 
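The two finalization predicates can be put side by side. A simplified sketch (function names hypothetical; Casper is modeled only as its k ≤ 2 intermediate-justification check, assuming the supermajority link already exists):

```python
from math import isqrt

def casper_finalizes(source_epoch: int, target_epoch: int,
                     justified: set[int]) -> bool:
    """Casper FFG k-finality (k <= 2): the source is finalized if every
    epoch strictly between source and target is itself justified."""
    if target_epoch - source_epoch > 2:
        return False
    return all(e in justified for e in range(source_epoch + 1, target_epoch))

def tsf_finalizes(source_slot: int, target_slot: int,
                  finalized_slot: int) -> bool:
    """3SF-mini: the source is finalized if no justifiable slot exists
    strictly between source and target."""
    def justifiable(s: int) -> bool:
        d = s - finalized_slot
        return d <= 5 or isqrt(d) ** 2 == d or isqrt(4 * d + 1) ** 2 == 4 * d + 1
    return not any(justifiable(s) for s in range(source_slot + 1, target_slot))

# Casper 2-finality: link N -> N+2 finalizes N because N+1 is justified...
assert casper_finalizes(10, 12, justified={10, 11, 12})
# ...and fails if the intermediate epoch never got justified.
assert not casper_finalizes(10, 12, justified={10, 12})

# 3SF-mini: F=10, link 16 -> 19 finalizes 16 because slots 17 and 18
# could never have been justified at all.
assert tsf_finalizes(16, 19, finalized_slot=10)
```

The structural difference is visible in the predicates: Casper inspects the *recorded status* of intermediate checkpoints, while 3SF-mini inspects the *schedule* and needs no record of them.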
+ +### Adaptive Backoff (unique to 3SF-mini) + +Casper FFG has a fixed checkpoint every epoch, regardless of network conditions. 3SF-mini's +justifiability schedule adapts: gaps between justifiable slots grow under prolonged asynchrony +(via the perfect square and pronic number rules), creating natural vote concentration when the +network is struggling to reach a two-thirds majority. Casper FFG has no equivalent; its epoch spacing is the +same whether the network is healthy or partitioned. See +[Justifiable Slot Backoff](#justifiable-slot-backoff) for a detailed walkthrough. diff --git a/docs/lmd_ghost.md b/docs/lmd_ghost.md new file mode 100644 index 0000000..980fd3c --- /dev/null +++ b/docs/lmd_ghost.md @@ -0,0 +1,758 @@ +# 👻 LMD-GHOST fork choice algorithm + +A deep dive into how the **LMD-GHOST** (Latest Message Driven, Greedy Heaviest Observed SubTree) +fork choice algorithm works. LMD-GHOST is the fork choice rule used by Ethereum's consensus layer +and its derivatives. Each validator's **latest attestation** is their single active vote, and the +algorithm follows the heaviest branch at every fork. + +This document is implementation-agnostic, with ethlambda-specific details called out in +blockquotes marked **"In ethlambda"**. + +> Much of the conceptual framing in this document is inspired by Ben Edgington's +> [Eth2 Book](https://eth2book.info/), particularly the +> [LMD GHOST chapter](https://eth2book.info/latest/part2/consensus/lmd_ghost/). +> Highly recommended reading for anyone interested in Ethereum consensus. + +--- + +## Background & History + +The GHOST protocol was introduced by **Sompolinsky and Zohar** in a +[2013 paper][ghost-paper]. Its core idea: instead of choosing the heaviest chain, +we choose the **heaviest subtree**, counting orphaned blocks as evidence of support for +their ancestors. + +The "LMD" in LMD-GHOST stands for **Latest Message Driven**: only each validator's +**most recent** attestation counts, preventing vote amplification. 
LMD-GHOST is the +fork choice rule used by the Ethereum Beacon Chain and Lean Ethereum. + +[ghost-paper]: https://eprint.iacr.org/2013/881.pdf + +--- + +## Why Fork Choice? + +In a distributed system where validators propose blocks concurrently, the blockchain can +fork: two valid blocks may appear at the same slot, creating competing chains. The +**fork choice rule** answers a critical question: + +> *Which chain tip should I follow?* + +```text + ┌──────────┐ + ┌────▶│ Block C │ ← Chain tip 1 + │ │ slot 5 │ +┌──────────┐ │ └──────────┘ +│ Block A │─┤ +│ slot 3 │ │ ┌──────────┐ +└──────────┘ └────▶│ Block D │ ← Chain tip 2 + │ slot 5 │ + └──────────┘ + + Which tip should validators follow? +``` + +Every node in the network must be able to independently arrive at the same answer using +only its local view of blocks and attestations. The fork choice rule is what makes this +possible. It is a deterministic function from a node's observed state to a single chain tip. + +--- + +## From Heaviest Chain to Heaviest Subtree + +The simplest fork choice rule is **heaviest chain**: follow the chain tip with the most +accumulated weight. This works when fork rates are low, but breaks down when honest +validators fork within a common branch: + +```text + HEAVIEST CHAIN vs HEAVIEST SUBTREE + ────────────────────────────────── + + An attacker with 40% of stake forks at A. + The honest majority (60%) builds on B but forks into C and D: + + ┌───B──┬──C V0, V1, V2 vote for C (30%) + A ────┤ └──D V3, V4, V5 vote for D (30%) + │ + └───X──Y──Z V6, V7, V8, V9 vote for Z (40%) + + Heaviest chain: + Z has 40% of votes, C and D each have 30%. + Attacker wins! ✗ + + Heaviest subtree (LMD-GHOST): + At A: B subtree has 60% (C + D), X subtree has 40%. + Pick B. Then at B: C has 30%, D has 30% (tiebreaker). + Honest majority wins. ✓ +``` + +LMD-GHOST is strictly better when honest validators fork within a common subtree. 
+Instead of requiring all honest validators to agree on a single chain tip (which is +impossible under network delay), it aggregates their support at each level of the tree. + +### How Subtree Weight Works (the "GHOST" Part) + +The key insight behind the "Heaviest Observed SubTree" part of LMD-GHOST: +**a vote for a block is implicitly a vote for all its ancestors.** + +When a validator attests to block F as their head, they are also expressing support +for every block on the path from the root to F: + +```text + Validator attests: head = F + + A ── B ── C ── D ── E ── F + ▲ ▲ ▲ ▲ ▲ ▲ + │ │ │ │ │ │ + └────┴────┴────┴────┴────┘ + All ancestors implicitly supported +``` + +This is why LMD-GHOST counts the **subtree** weight: a block's weight includes every +attestation for any of its descendants, because those attestations implicitly endorse +the ancestor too. The algorithm exploits this by walking backward from each attested +head and incrementing every block along the path. + +--- + +## LMD: Why Only the Latest Message? + +The "LMD" in LMD-GHOST stands for **Latest Message Driven**. Each validator's **most +recent** attestation is their only vote. All previous attestations are discarded. + +```text + Validator 7's attestation history: + + Slot 10: attests to head = B ← discarded + Slot 11: attests to head = C ← discarded + Slot 12: attests to head = E ← THIS is the active vote + + Only the slot 12 attestation counts for fork choice. +``` + +Why only the latest? Two reasons: + +1. **Prevents double-voting.** If all messages counted, a validator could cast many + attestations and amplify their influence. With LMD, each validator gets exactly one + active vote regardless of how many attestations they've broadcast. + +2. **Reflects current knowledge.** A validator's latest attestation reflects their most + recent view of the chain. Older attestations may reference blocks that are no longer + on the best chain. 
Keeping only the latest ensures fork choice uses the most up-to-date + information. + +The fork choice store maintains a mapping of `validator_index → latest attestation`. +When a new attestation arrives from a validator, it **replaces** their previous entry: + +```text + Fork choice store (latest messages): + + ┌──────────────┬──────────────────────────────┐ + │ Validator │ Latest Attestation │ + ├──────────────┼──────────────────────────────┤ + │ 0 │ head=E, target=C, source=A │ + │ 1 │ head=D, target=C, source=A │ + │ 2 │ head=E, target=C, source=A │ + │ 3 │ head=F, target=D, source=A │ + │ ... │ ... │ + └──────────────┴──────────────────────────────┘ + + One row per validator. New attestation → overwrite row. +``` + +--- + +## LMD-GHOST Step by Step + +The algorithm takes a set of inputs and produces a single block root: the head of +the chain. + +### Inputs + +| Input | Purpose | +|-------|---------| +| Start root | The justified checkpoint (root of the subtree to search) | +| Block tree | The set of known blocks: root → (slot, parent) | +| Attestations | Latest message per validator: validator_index → attestation | +| Min score | Minimum weight for a branch to be considered (0 = follow any branch; higher = conservative) | + +> **In ethlambda:** The function is `compute_lmd_ghost_head()` in +> `crates/blockchain/fork_choice/src/lib.rs`. The block tree comes from +> the `LiveChain` storage index, and `min_score` is 0 for head selection +> or ⌈2V/3⌉ for safe target computation. + +### The Algorithm + +First, **accumulate weights.** Each attestation "paints" the path from its head back +to the start root. In the simplest form (equal-weight validators), this adds +1 to +every block on the path. In systems with balance-weighted voting, the validator's +effective balance is added instead. 
+ +```text + Validator 0 attests to head = F + + J ─ A ─ B ─ C ─ D ─ E ─ F (J = justified root) + +1 +1 +1 +1 +1 +1 J is at start_slot, not counted + + Validator 1 attests to head = D + + J ─ A ─ B ─ C ─ D + +1 +1 +1 +1 + + Accumulated weights: + + Block: J A B C D E F + Weight: ─ 2 2 2 2 1 1 + │ + └ start_root (not weighted, used as the descent origin) +``` + +> **In ethlambda:** All validators have equal weight (+1 per vote). The Ethereum +> Beacon Chain instead weights votes by effective balance (up to 2048 ETH). + +Then, **greedily descend.** Starting from the start root, at each node pick the child +with the most weight. Repeat until reaching a leaf: + +```text + J ──┬── B (5) ← pick B (higher weight) + └── G (2) + + B ──┬── C (3) ← pick C (higher weight) + └── H (2) + + C ──── D (3) ← only child, continue + + D ── (no children) → HEAD = D! +``` + +Children below `min_score` are ignored during the descent. With `min_score = 0` +(normal head selection) all children are visible. With a higher threshold, only +branches with strong support are followed. This is used for +[safe target selection](#safe-target-selection). + +### The Tiebreaker + +When two children have exactly equal weight, a deterministic tiebreaker is needed. +Without one, different nodes could pick different heads from the same data, breaking +consensus. The tiebreaker is **lexicographically higher block root hash**, i.e., +higher hash value wins. + +```text + Equal weight scenario: + + Parent + │ + ┌───┴───┐ + B (3) C (3) ← Equal weight! + root: root: + 0x3a.. 0x7f.. ← 0x7f > 0x3a, so pick C +``` + +The choice of "higher hash wins" is a convention. Any deterministic rule would work; +what matters is that all nodes apply the same one. 
+ +--- + +## Worked Example: Head Selection + +Consider a network with **5 validators** (indices 0–4) and the following block tree +rooted at the justified checkpoint `J` at slot 10: + +```text + BLOCK TREE + ────────── + +Slot 10 ┌──────┐ +(justified) │ J │ ← Justified checkpoint (start_root) + └──┬───┘ + │ +Slot 11 ┌──┴───┐ + │ A │ + └──┬───┘ + ┌──┴────────┐ + │ │ +Slot 12 ┌──┴───┐ ┌──┴───┐ + │ B │ │ C │ + └──┬───┘ └──┬───┘ + │ │ +Slot 13 ┌──┴───┐ ┌──┴───┐ + │ D │ │ E │ + └──────┘ └──────┘ +``` + +**Latest attestations (one per validator):** + +| Validator | Attested Head | Path back from head to J | +|-----------|---------------|--------------------------| +| 0 | D | D → B → A → (J) | +| 1 | D | D → B → A → (J) | +| 2 | E | E → C → A → (J) | +| 3 | E | E → C → A → (J) | +| 4 | E | E → C → A → (J) | + +**Accumulate weights** by walking backward from each attested head, adding +1 per +block (stopping at J's slot): + +```text + V0 (head=D): D+1 B+1 A+1 + V1 (head=D): D+1 B+1 A+1 + V2 (head=E): E+1 C+1 A+1 + V3 (head=E): E+1 C+1 A+1 + V4 (head=E): E+1 C+1 A+1 +``` + +| Block | Weight | Explanation | +|-------|--------|-------------| +| A | 5 | On path of all 5 validators | +| B | 2 | On path of V0, V1 | +| C | 3 | On path of V2, V3, V4 | +| D | 2 | Head of V0, V1 | +| E | 3 | Head of V2, V3, V4 | + +**Greedily descend** from J, always picking the heaviest child: + +```text + Start at J + └─▶ A (only child, weight 5) + ├── B (weight 2) + └── C (weight 3) ← Pick C (3 > 2) + └─▶ E (only child, weight 3) + └─▶ No children → HEAD = E ✓ +``` + +**Result:** The canonical head is **Block E**. Even though both branches have the same +depth, the C→E branch has 3 votes vs B→D's 2 votes. 
+ +```text + RESOLVED HEAD + ───────────── + +Slot 10 ┌──────┐ + │ J │ + └──┬───┘ + │ +Slot 11 ┌──┴───┐ + │ A │ ✓ canonical + └──┬───┘ + ┌──┴────────┐ + │ │ +Slot 12 ┌──┴───┐ ┌──┴───┐ + │ B │ │ C │ ✓ canonical (weight 3 > 2) + └──┬───┘ └──┬───┘ + │ │ +Slot 13 ┌──┴───┐ ┌──┴───┐ + │ D │ │ E │ ★ HEAD + └──────┘ └──────┘ +``` + +### What If a Vote Changes? + +Suppose validator 1 now sees block E and switches their attestation from D to E: + +```text + Before: V0=D, V1=D, V2=E, V3=E, V4=E → Head = E (3 vs 2) + After: V0=D, V1=E, V2=E, V3=E, V4=E → Head = E (4 vs 1) + + The head didn't change, but the margin increased from 1 to 3. + If instead V2 and V3 had switched to D: + + After: V0=D, V1=D, V2=D, V3=D, V4=E → Head = D (4 vs 1) + + The head reorgs from E to D. +``` + +--- + +## Fork Choice vs Finality + +An important conceptual distinction: **LMD-GHOST provides fork choice, not finality.** + +LMD-GHOST gives the network a way to agree on the current head of the chain at any +moment, but the head can change. A block selected by fork choice today could be +reorged away tomorrow if attestations shift. LMD-GHOST alone provides no guarantee +that any block is permanent. + +**Finality**, the guarantee that a block can never be reverted, comes from a separate +mechanism called a **finality gadget**. LMD-GHOST is designed to compose with any +finality gadget (e.g., Casper FFG in the Ethereum Beacon Chain, or [3SF-mini](3sf_mini.md) in Lean Ethereum). + +```text + ┌────────────────────────────────────────────────────┐ + │ CONSENSUS = TWO LAYERS │ + │ │ + │ ┌─────────────┐ ┌──────────────────────┐ │ + │ │ LMD-GHOST │ │ Finality Gadget │ │ + │ │ │ │ │ │ + │ │ "Which tip │ │ "Which blocks are │ │ + │ │ is best │ │ permanent and can │ │ + │ │ right now?"│ │ never be reverted?" 
│ │ + │ │ │ │ │ │ + │ │ Dynamic, │ │ Monotonic, only │ │ + │ │ can reorg │ │ moves forward │ │ + │ └──────┬──────┘ └──────────┬───────────┘ │ + │ │ │ │ + │ └──────────┬───────────────┘ │ + │ ▼ │ + │ ┌──────────────────┐ │ + │ │ Full Consensus │ │ + │ └──────────────────┘ │ + └────────────────────────────────────────────────────┘ +``` + +> **In ethlambda:** The finality gadget is [3SF-mini](3sf_mini.md), which operates at +> the slot level rather than epoch boundaries. + +The two layers interact: LMD-GHOST runs its greedy descent **starting from the latest +justified checkpoint** (not genesis). This means finality constrains fork choice: once +a checkpoint is finalized, no fork choice run will ever consider blocks before it. + +```text + ┌─────────┐ ┌─────────┐ ┌──── ... + │FINALIZED│────────▶│JUSTIFIED│────────▶│ fork choice + │ slot 50 │ │ slot 55 │ │ runs here + └─────────┘ └─────────┘ └──── ... + │ │ + │ └── start_root for LMD-GHOST + │ + └── everything before this is permanent +``` + +This has a major practical benefit: **finality allows aggressive pruning of the block +tree.** Without finality, fork choice would need to consider every block since genesis, +and the tree would grow without bound. With finality, all blocks at or before the finalized +checkpoint can be discarded from the fork choice's working set. + +> **In ethlambda:** The `LiveChain` index (the in-memory block tree used by fork choice) +> is pruned every time finalization advances, keeping it bounded to only the non-finalized +> portion of the chain. + +--- + +## Attestation Pipeline + +In a naive implementation, every attestation would influence fork choice the instant it +arrives. This creates problems: validators with faster network connections see different +heads than slower ones, and the proposer's view of the chain could shift mid-block-construction. 
+ +Lean Ethereum solves this with a **staged promotion pipeline**: attestations are +collected into a pending set and only promoted to the active fork choice set at +designated moments. This ensures all validators operate on a consistent view. + +```text + ATTESTATION LIFECYCLE + ───────────────────── + + ┌──────────────┐ ┌──────────────────┐ ┌──────────────────┐ + │ Network │ │ Pending │ │ Active │ + │ (gossip) │──────▶│ Attestations │──────▶│ Attestations │ + │ │ │ │ │ │ + └──────────────┘ └──────────────────┘ └──────────────────┘ + │ │ + NOT used for Used for fork choice + fork choice weight calculations + │ │ + Promoted at ─────────────▶ designated intervals + fixed points +``` + +> **In ethlambda:** The two stages are called "new" and "known" attestations, stored +> in `LatestNewAttestations` and `LatestKnownAttestations` tables respectively. +> Promotion happens at tick intervals 0 (if proposing) and 3 (end of slot). + +### Why Staged Promotion? + +The staged design serves two purposes: + +1. **Consistency:** All validators promote attestations at the same moments, + reducing divergence in head selection. Without batching, validators with faster + network connections would see different heads than slower ones. + +2. **Proposer fairness:** The proposer computes the block against a known, fixed set + of attestations. If new attestations could influence fork choice mid-computation, + different validators might disagree on the head. + +### On-Chain vs Off-Chain Attestations + +Attestations arrive from two sources, and how they enter the pipeline matters: + +| Source | Enters As | Reason | +|--------|-----------|--------| +| Network gossip | **Pending** | Must wait for promotion window | +| Block body (on-chain) | **Active** | Already consensus-validated | +| Proposer's own attestation | **Pending** | Prevents proposer weight advantage | + +The proposer's own attestation enters as pending (not active) deliberately. 
If it +were immediately active, the proposer would gain an unfair weight advantage for +their own block, a circular dependency where proposing a block gives you an extra +vote toward making that block canonical. + +--- + +## Safe Target Selection + +The **safe target** is a conservative head computed with a high weight threshold. +It constrains the `target` field in attestations, which feeds into +[3SF-mini](3sf_mini.md) for justification and finalization decisions. Validators +still vote for the newest head they see (regular LMD-GHOST with `min_score = 0`) +in the `head` field. The safe target only affects which blocks can progress +toward finality. It is computed by running the same LMD-GHOST algorithm but with +a non-zero `min_score` in the filtering phase. + +```text + SAFE TARGET vs HEAD + ──────────────────── + + Regular head (min_score = 0): + Follow heaviest branch, even with a slim margin + + ┌── B (3 votes) ← HEAD (3 > 2) + J ── A ──┤ + └── C (2 votes) + + + Safe target (min_score = ⌈2V/3⌉): + Only follow branches with supermajority support + + V = 5 validators, threshold = ⌈10/3⌉ = 4 + + ┌── B (3 votes) ← Below threshold (3 < 4), pruned + J ── A ──┤ + └── C (2 votes) ← Below threshold (2 < 4), pruned + + Safe target = A (no children pass threshold) +``` + +This means the safe target **lags behind** the head. It only advances when a branch +accumulates overwhelming support, making it resistant to temporary fluctuations: + +```text + Timeline of safe target vs head: + + Slot: 10 11 12 13 14 15 16 + Head: J A B D D E F + Safe: J J J A A A D + │ + Safe target is always ────┘ + at or behind the head +``` + +The safe target prevents [3SF-mini](3sf_mini.md) from finalizing unstable branches: +without it, a slim-majority fork could reach justification and finalization before +the network converges. 
By requiring supermajority support for the target, only
branches with strong consensus can progress toward finality, even though
validators' head votes freely follow the newest chain tip.

---

## Reorgs

A **reorg** (reorganization) occurs when the fork choice head switches from one branch
to another. This happens when a competing branch accumulates more attestation weight
than the current head's branch.

```text
    REORG SCENARIO
    ──────────────

    Before (head = D):

             ┌── B ── D   ★ HEAD (weight 4)
    J ── A ──┤
             └── C ── E     (weight 3)


    New attestations arrive, and two validators switch from D to E:

             ┌── B ── D     (weight 2)
    J ── A ──┤
             └── C ── E   ★ HEAD (weight 5)  ← REORG!


    The canonical chain changed from J─A─B─D to J─A─C─E.
    Blocks B and D are no longer canonical (but remain in the block tree).
```

Reorgs are normal during transient network conditions but should be rare in stable
operation. They cannot cross a finalization boundary: once a block is finalized, it is
permanently part of the canonical chain.

> **In ethlambda:** Reorgs are detected by checking whether the old and new heads
> share a common prefix, and tracked via Prometheus metrics
> (`lean_fork_choice_reorgs_total`).

---

## LMD-GHOST Variants

LMD-GHOST is one of several GHOST variants that differ in which attestations count
toward fork choice weight. Understanding the design space helps explain why LMD was chosen.
+ +| Variant | Full Name | What Counts | Trade-off | +|---------|-----------|-------------|-----------| +| **IMD** | Immediate Message Driven | All attestations ever | Maximizes data but creates unbounded storage and is vulnerable to long-range rewriting | +| **LMD** | Latest Message Driven | Only each validator's most recent attestation | Good balance: one vote per validator, reflects current view, bounded storage | +| **FMD** | Fresh Message Driven | Only attestations from current/previous epoch | Prevents very old attestations from influencing fork choice, but validators who go offline lose influence immediately | +| **RLMD** | Recent Latest Message Driven | Latest attestation, but only if within N epochs | Parameterized compromise between LMD and FMD; tunable staleness threshold | + +The Ethereum consensus mini-spec originally used IMD-GHOST but switched to LMD in +November 2018 due to superior stability properties. + +```text + IMD: All attestations count LMD: Only latest counts + + V0: slot 5 → head B V0: slot 5 → head B (overwritten) + V0: slot 8 → head C V0: slot 8 → head C ← active + V0: slot 11 → head E V0: slot 11 → head E ← active + + V0 contributes 3 votes! V0 contributes 1 vote. + Validators who attest more Equal influence regardless + often have outsized influence. of attestation frequency. +``` + +--- + +## ethlambda Implementation Reference + +This section covers ethlambda-specific details: scheduling, Beacon Chain differences, +source code locations, and performance. + +### Tick-Based Scheduling + +ethlambda divides time into **4-second slots**, each split into **4 intervals** (1 second +each). 
Fork choice operations are scheduled at specific intervals: + +```text + ONE SLOT (4 seconds) + ┌──────────────┬──────────────┬──────────────┬──────────────┐ + │ Interval 0 │ Interval 1 │ Interval 2 │ Interval 3 │ + │ (t+0s) │ (t+1s) │ (t+2s) │ (t+3s) │ + ├──────────────┼──────────────┼──────────────┼──────────────┤ + │ │ │ │ │ + │ IF PROPOSER: │ NON-PROPOSER:│ update_safe │ accept_new │ + │ accept new │ produce │ _target() │ _attestations│ + │ attestations│ attestation │ │ () │ + │ + propose │ │ (2/3 vote │ │ + │ block │ │ threshold) │ update_head()│ + │ │ │ │ │ + │ update_head()│ │ │ │ + │ │ │ │ │ + └──────────────┴──────────────┴──────────────┴──────────────┘ + + ◄─────────────── Slot N ──────────────────────────────────────► +``` + +**Detailed sequence:** + +```text + Interval 0 ─ Slot boundary + │ + ├── Am I the proposer for this slot? + │ ├── YES: promote new → known attestations + │ │ run fork choice → update_head() + │ │ build block using known attestations + │ │ publish block to network + │ └── NO: (wait for block from proposer) + │ + Interval 1 ─ Attestation production + │ + ├── Non-proposers: + │ └── Create attestation with: + │ • head = current fork choice head (newest head) + │ • target = derived from safe_target (for 3SF-mini) + │ • source = latest_justified checkpoint + │ Publish attestation to gossipsub + │ + Interval 2 ─ Safe target update + │ + ├── Recalculate safe_target using 2/3 supermajority threshold + │ └── Only blocks with ≥ ⌈2V/3⌉ attestation weight qualify + │ (V = total validators) + │ + Interval 3 ─ End of slot + │ + ├── Promote new → known attestations + └── Run fork choice → update_head() +``` + +### Differences from the Ethereum Beacon Chain + +ethlambda is a lean consensus client with several simplifications compared to the +Ethereum Beacon Chain: + +| Aspect | ethlambda | Ethereum Beacon Chain | +|--------|-----------|----------------------| +| **Vote weight** | Equal: 1 vote per validator | Proportional to effective balance (up to 
2048 ETH) |
| **Proposer boost** | None | Yes: newly proposed blocks get temporary bonus weight |
| **Equivocation handling** | Not in fork choice | Equivocating validators' weight excluded |
| **Attestation frequency** | Every slot | Once per epoch |
| **Committee structure** | All validators attest each slot | Validators split into per-slot committees |
| **Slot duration** | 4 seconds | 12 seconds |

**No proposer boost.** The Beacon Chain adds a "proposer boost", a temporary weight bonus
given to newly proposed blocks to prevent balancing attacks. ethlambda does not implement
this. Instead, proposer fairness is handled through the two-stage attestation pipeline
(the proposer's own attestation enters as "new", not "known").

**No balance weighting.** In the Beacon Chain, a validator with 32 ETH of effective balance
has more fork choice weight than one with 16 ETH. In ethlambda, every validator has exactly
equal weight (1 vote = 1 unit of weight), simplifying the algorithm and analysis.

**No equivocation discounting.** The Beacon Chain's fork choice detects validators who
equivocate (attest to conflicting blocks in the same slot) and excludes their weight. This
addresses the "nothing at stake" problem where validators can costlessly vote for multiple
forks. ethlambda does not implement this in its fork choice.
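
The two-stage pipeline that stands in for proposer boost can be sketched with hypothetical types (a minimal illustration, assuming string block roots and equal weights; `AttestationStore`, `on_gossip`, and `promote` are invented names, not ethlambda's actual `LatestNewAttestations`/`LatestKnownAttestations` code):

```rust
use std::collections::HashMap;

// Hypothetical two-stage attestation store (illustrative only).
#[derive(Default)]
struct AttestationStore {
    new: HashMap<u64, &'static str>,   // validator index -> attested head (pending)
    known: HashMap<u64, &'static str>, // validator index -> attested head (active)
}

impl AttestationStore {
    /// Gossip attestations land in the pending set; a later vote from
    /// the same validator overwrites the earlier one (the "LMD" rule).
    fn on_gossip(&mut self, validator: u64, head: &'static str) {
        self.new.insert(validator, head);
    }

    /// At the designated tick intervals, pending votes are promoted
    /// and become visible to fork choice.
    fn promote(&mut self) {
        for (v, h) in self.new.drain() {
            self.known.insert(v, h);
        }
    }

    /// Only promoted (known) attestations contribute fork choice weight.
    fn active_votes(&self) -> Vec<&'static str> {
        self.known.values().copied().collect()
    }
}

fn main() {
    let mut store = AttestationStore::default();
    store.on_gossip(0, "B");
    store.on_gossip(0, "C"); // latest message wins: overwrites "B"
    assert!(store.active_votes().is_empty()); // not promoted yet
    store.promote();
    assert_eq!(store.active_votes(), vec!["C"]);
}
```

Because the proposer's own vote goes through `on_gossip` like everyone else's, it cannot add weight to the proposer's block until the next promotion point.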

### Key Files

| File | Component |
|------|-----------|
| `crates/blockchain/fork_choice/src/lib.rs` | Core LMD-GHOST algorithm (`compute_lmd_ghost_head`) |
| `crates/blockchain/src/store.rs` | Store: head update, safe target, attestation promotion |
| `crates/blockchain/src/lib.rs` | BlockChain actor: tick scheduling, interval dispatch |
| `crates/common/types/src/attestation.rs` | `AttestationData` type (head, target, source, slot) |
| `crates/common/types/src/state.rs` | `Checkpoint` (root + slot), `State` |
| `crates/storage/src/api/` | `LiveChain` table, `StorageBackend` trait |

### Data Flow Summary

```text
  ┌───────────┐         ┌──────────────┐             ┌───────────────┐
  │ Gossipsub │────────▶│     New      │──(promote)─▶│     Known     │
  │ (network) │         │ Attestations │             │ Attestations  │
  └───────────┘         └──────────────┘             └───────┬───────┘
                                                             │
  ┌───────────┐                                              │
  │ LiveChain │──── { root → (slot, parent) } ───────────────┤
  │  (index)  │                                              │
  └───────────┘                                              ▼
                                                   ┌─────────────────┐
  ┌───────────┐                                    │  compute_lmd_   │
  │ Justified │──── start_root ──────────────────▶│  ghost_head()   │
  │Checkpoint │                                    │                 │
  └───────────┘                                    └────────┬────────┘
                                                            │
                                                     ┌──────┴──────┐
                                                     │             │
                                                     ▼             ▼
                                               ┌──────────┐  ┌───────────┐
                                               │   HEAD   │  │   SAFE    │
                                               │ (min=0)  │  │  TARGET   │
                                               └──────────┘  │ (min=2V/3)│
                                                             └───────────┘
```

### Performance Characteristics

| Operation | Time Complexity | Description |
|-----------|----------------|-------------|
| Weight accumulation | O(A × D) | A = attestations, D = max chain depth from justified root |
| Greedy descent | O(D × B) | D = depth, B = max branching factor |
| Attestation promotion | O(V) | V = total validators |
| LiveChain lookup | O(N) | N = non-finalized blocks |

In practice with a small validator set and bounded non-finalized chain length,
all operations complete in sub-millisecond time.
The `// TODO: add proto-array +implementation` comment in the source indicates a future optimization path: +proto-array is an O(1) amortized fork choice algorithm used by most Beacon Chain +clients.