Implement the following plan:
# Raft KV Store Emulation Plan

## Context

Create a replicated KV store using Raft consensus, matching the semantics of multipaxos_kv: put fails if the key exists, get returns the value or an error, delete fails if the key is missing. The client is given the leader directly (no leader discovery). The create-emulation skill requires TOML-based configuration.
## Architecture

### Raft vs MultiPaxos mapping

| MultiPaxos KV | Raft KV |
| --- | --- |
| Acceptor (voter) | Follower node (AppendEntries responder) |
| Leader (proposer + state machine) | Leader node (log replication + state machine) |
| Replica (passive learner) | N/A - followers apply committed entries themselves |
| Phase 1 (Prepare/Promise) | Leader election (RequestVote) |
| Phase 2 (Accept/Accepted) | Log replication (AppendEntries) |
### Key Design Decisions

- Single node actor type (`node.gleam`): all Raft nodes are the same actor type, with roles Leader/Follower/Candidate
- Simplified election at startup: node 1 starts an election immediately and wins (like multipaxos_kv's Phase 1 at init)
- Synchronous replication: the leader calls followers with `actor.call` during request handling (same pattern as multipaxos_kv's Phase 2)
- Commit notification: after a majority ack, the leader sends a fire-and-forget `CommitNotification` to followers so they apply entries
- TOML config: read `num_nodes` and `num_clients` from the config file
### Message Flow

```
Client --[ClientPut/Get/Delete]--> Leader Node
Leader appends entry to log (term, command)
Leader --[AppendEntries(entries)]--> all Follower Nodes (synchronous calls)
Leader waits for a majority of success responses
Leader advances commit_index, applies to local KV store
Leader --[reply]--> Client
Leader --[CommitNotification(commit_index)]--> all Followers (fire-and-forget)
Followers apply committed entries to their local KV stores
```
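The "waits for a majority of success responses" step reduces to a simple count. A minimal sketch in Gleam (function names are illustrative, not part of the plan):

```gleam
import gleam/list

/// Majority threshold for a cluster of `cluster_size` nodes.
fn majority(cluster_size: Int) -> Int {
  cluster_size / 2 + 1
}

/// Decide whether an entry is committed, given the success flags returned
/// by the synchronous AppendEntries calls. The leader's own log append
/// counts as one ack.
fn committed(acks: List(Bool), cluster_size: Int) -> Bool {
  let successes = list.length(list.filter(acks, fn(ok) { ok }))
  successes + 1 >= majority(cluster_size)
}
```

With `num_nodes = 5`, `majority(5)` is 3, so the leader can commit after two follower acks plus its own copy.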
### File Structure

```
raft_kv/
  gleam.toml
  config.toml            -- default config: num_nodes=5, num_clients=3
  src/
    raft_kv.gleam        -- main: read TOML config, start nodes, election, clients
    raft_kv/
      types.gleam        -- Command, OpResult, LogEntry (shared types)
      node.gleam         -- Raft node actor (all roles)
      client.gleam       -- client (adapted from multipaxos_kv/client.gleam)
  test/
    raft_kv_test.gleam   -- basic test
```
## Implementation Details

### 1. types.gleam - Shared types

- `Command` = `Put`/`Get`/`Delete`/`Noop` (same as multipaxos_kv/types.gleam)
- `OpResult` = `OpOk | OpError` (same as multipaxos_kv/types.gleam)
- `LogEntry(term: Int, command: Command)` - new for Raft
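A sketch of what `types.gleam` could look like; the constructor payloads and field labels are assumptions, since multipaxos_kv/types.gleam is not reproduced here:

```gleam
pub type Command {
  Put(key: String, value: String)
  Get(key: String)
  Delete(key: String)
  Noop
}

pub type OpResult {
  OpOk(value: String)
  OpError(reason: String)
}

pub type LogEntry {
  LogEntry(term: Int, command: Command)
}
```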
### 2. node.gleam - Raft node actor (most complex file)

State:

- `id`, `current_term`, `voted_for: Option(Int)`, `role: Role`
- `log: Dict(Int, LogEntry)` - 1-indexed
- `commit_index`, `last_applied`
- `peers: List(#(Int, Subject(Message)))` - set after startup via `SetPeers`
- `store: Dict(String, String)` - KV state machine
- Leader-only: `next_index: Dict(Int, Int)`, `match_index: Dict(Int, Int)`
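The state above maps directly onto a Gleam record; a sketch, assuming `LogEntry` and `Message` are defined as described elsewhere in this plan:

```gleam
import gleam/dict.{type Dict}
import gleam/erlang/process.{type Subject}
import gleam/option.{type Option}

pub type Role {
  Leader
  Follower
  Candidate
}

pub type State {
  State(
    id: Int,
    current_term: Int,
    voted_for: Option(Int),
    role: Role,
    // 1-indexed log; a Dict rather than a List for lookup by index
    log: Dict(Int, LogEntry),
    commit_index: Int,
    last_applied: Int,
    peers: List(#(Int, Subject(Message))),
    store: Dict(String, String),
    // Leader-only bookkeeping; unused while Follower/Candidate
    next_index: Dict(Int, Int),
    match_index: Dict(Int, Int),
  )
}
```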
Messages:

- `SetPeers(List(#(Int, Subject(Message))))` - configuration after startup
- `StartElection` - trigger election (sent to node 1 by main)
- `RequestVote(term, candidate_id, last_log_index, last_log_term, reply_with)` / `VoteResponse`
- `AppendEntries(term, leader_id, prev_log_index, prev_log_term, entries, leader_commit, reply_with)` / `AppendEntriesResponse`
- `CommitNotification(leader_commit: Int)` - fire-and-forget commit update
- `ClientPut`/`ClientGet`/`ClientDelete` - same signatures as in multipaxos_kv/leader.gleam
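The message list translates to a Gleam sum type along these lines; the response payloads and client message signatures are assumptions (the plan only says the latter mirror multipaxos_kv's leader):

```gleam
import gleam/erlang/process.{type Subject}

pub type VoteResponse {
  VoteResponse(term: Int, vote_granted: Bool)
}

pub type AppendEntriesResponse {
  AppendEntriesResponse(term: Int, success: Bool)
}

pub type Message {
  SetPeers(peers: List(#(Int, Subject(Message))))
  StartElection
  RequestVote(
    term: Int,
    candidate_id: Int,
    last_log_index: Int,
    last_log_term: Int,
    reply_with: Subject(VoteResponse),
  )
  AppendEntries(
    term: Int,
    leader_id: Int,
    prev_log_index: Int,
    prev_log_term: Int,
    entries: List(LogEntry),
    leader_commit: Int,
    reply_with: Subject(AppendEntriesResponse),
  )
  CommitNotification(leader_commit: Int)
  ClientPut(key: String, value: String, reply_with: Subject(OpResult))
  ClientGet(key: String, reply_with: Subject(OpResult))
  ClientDelete(key: String, reply_with: Subject(OpResult))
}
```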
Key functions:

- `handle_start_election`: increment term, vote for self, call `RequestVote` on all peers, become leader on majority
- `handle_request_vote`: standard Raft voting logic (term check, log up-to-date check)
- `handle_append_entries`: check term, check prev-log match, append entries, update `commit_index`, apply
- `handle_client_request`: (leader only) append to log, synchronous `AppendEntries` to peers; on majority: commit, apply, reply, broadcast `CommitNotification`
- `apply_committed`: apply entries from `last_applied + 1` to `commit_index` (same KV semantics as multipaxos_kv)
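`apply_committed` reduces to folding a per-command apply function over the entries from `last_applied + 1` to `commit_index`. A sketch of those KV semantics in Gleam (error strings and `OpOk` payloads are assumptions, not taken from multipaxos_kv):

```gleam
import gleam/dict.{type Dict}

/// Apply one command to the store, returning the new store and the
/// result to report: put fails if the key exists, get returns the
/// value or an error, delete fails if the key is missing.
fn apply_command(
  store: Dict(String, String),
  command: Command,
) -> #(Dict(String, String), OpResult) {
  case command {
    Put(key, value) ->
      case dict.has_key(store, key) {
        True -> #(store, OpError("key exists: " <> key))
        False -> #(dict.insert(store, key, value), OpOk(value))
      }
    Get(key) ->
      case dict.get(store, key) {
        Ok(value) -> #(store, OpOk(value))
        Error(Nil) -> #(store, OpError("no such key: " <> key))
      }
    Delete(key) ->
      case dict.has_key(store, key) {
        True -> #(dict.delete(store, key), OpOk(key))
        False -> #(store, OpError("no such key: " <> key))
      }
    Noop -> #(store, OpOk(""))
  }
}
```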
### 3. client.gleam - Adapted from multipaxos_kv/client.gleam

- Same structure: 5 random requests over keys `["a", "b", "c", "d", "e"]`
- Calls `node.put`, `node.get`, `node.delete` instead of `leader.*`
- References `Subject(node.Message)` instead of `Subject(leader.Message)`
### 4. raft_kv.gleam - Main entry point

- Read `config.toml` for `num_nodes` (default 5) and `num_clients` (default 3)
- Start N nodes; send `SetPeers` to each with all other nodes
- Send `StartElection` to node 1; sleep briefly to let the election complete
- Start clients targeting node 1
- Send `Start` to all clients; sleep 3s
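The config-reading step could look like the following sketch. Note two assumptions: `simplifile` (or some other file-reading package, which is not in the plan's dependency list) is used to read the file, and missing files or keys fall back to the stated defaults:

```gleam
import argv
import gleam/dict
import gleam/result
import simplifile
import tom

/// Returns #(num_nodes, num_clients), defaulting to #(5, 3).
fn read_config() -> #(Int, Int) {
  // Optional CLI argument overrides the default config path.
  let path = case argv.load().arguments {
    [p, ..] -> p
    [] -> "config.toml"
  }
  let parsed = case simplifile.read(path) {
    Ok(text) -> result.unwrap(tom.parse(text), dict.new())
    Error(_) -> dict.new()
  }
  let num_nodes = result.unwrap(tom.get_int(parsed, ["num_nodes"]), 5)
  let num_clients = result.unwrap(tom.get_int(parsed, ["num_clients"]), 3)
  #(num_nodes, num_clients)
}
```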
### 5. config.toml

```toml
num_nodes = 5
num_clients = 3
```
## Dependencies

`gleam_stdlib`, `gleam_otp`, `gleam_erlang`, `tom`, `argv`

Note: `tom` is for TOML parsing; `argv` optionally accepts the config file path as a CLI argument.
## Reusable patterns from multipaxos_kv

- `apply_command` logic from multipaxos_kv/leader.gleam:106-156 (KV semantics)
- `apply_committed` pattern from multipaxos_kv/replica.gleam:27-83 (sequential log application)
- Client structure from `multipaxos_kv/client.gleam` (nearly identical)
- Main wiring from `multipaxos_kv/multipaxos_kv.gleam` (start actors, connect, run)
## Verification

- `gleam build` - compiles without errors
- `gleam run` - shows nodes starting, the election completing, clients issuing requests with a Raft replication trace, and followers applying committed entries
- `gleam format` - code is formatted
- KV semantics match: put rejects existing keys, delete rejects missing keys, get returns value or error
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: /Users/apanda/.claude/projects/-Users-apanda-code-agent-spec-code/b2f3cfa8-4232-4552-95f3-5eccc20c4847.jsonl