Status: Phase 4 is partially implemented behind the `phase4` feature flag (simulation-first). An experimental read-only kernel FUSE mount is available on Unix when `spacectl` is built with the `kernel_fuse` feature and system `libfuse3` headers are installed.

Reality Check:

- Protocol projections exist (helpers): `protocol-nvme`, `protocol-nfs::phase4`, `protocol-csi`.
- Local projection mount exists: `spacectl project mount` (prefers a read-only kernel FUSE mount on Unix; falls back to a content-file view).
- Federation (Phase 4b) uses a gRPC WAN bridge: `Policy.federation.targets` + `spacectl zone add` + `spacectl federation serve`.
- Phase 5 (planned/early): `Policy.transform` attaches WASM transforms to read/write streaming (see `docs/phase5.md`).

This document mixes current behavior and aspirational architecture. When unsure, treat it as a guide for the `phase4` feature implementation. For actual feature status, see the main README Feature Status Table.
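For orientation, a build along these lines should enable the experimental Phase 4 surfaces plus the optional kernel FUSE mount. This is a sketch: the feature names come from the status above, and combining them in one invocation is an assumption about this workspace.

```bash
# Build spacectl with the Phase 4 surfaces and the optional kernel FUSE mount.
# Requires libfuse3 headers on the host; feature names per the status above.
cargo build -p spacectl --features "phase4 kernel_fuse"
```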
Phase 4 aims to realize the "one capsule, infinite views" vision by projecting capsules as NVMe-oF, NFS v4.2, FUSE, and CSI surfaces without materializing extra copies, while sharding metadata with Paxos for sovereign, low-latency federation.
Goals:
- Project capsules into multiple view pipelines with zero-copy re-encryption/recompression hooks.
- Extend PODMS scaling with Raft-powered metadata shards, zone-aware routing, and telemetry-driven federation.
- Gate new functionality behind `phase4` so single-node users see no regressions.
- Provide CLI, docs, and scripts that prove the mesh works end-to-end (NVMe discovery, CSI provisioning, geo federation).
Current Reality: Phase 4 is still experimental, but it is no longer “docs-only”: you can create a capsule, mount a view, and (optionally) replicate into another zone without changing client tooling.
- Linux hosts with SPDK-friendly toolchains, eBPF, and optionally RDMA hardware (Mellanox/ConnectX or mocks).
- Docker/Kind for system tests; no Windows/macOS support yet.
- Zonal policy compiler (PODMS Step 3) is already wired through `common::podms` and the Scaling Agent.
- All new code lives in `crates/protocol-nvme`, `crates/protocol-nfs`, `crates/protocol-fuse`, `crates/protocol-csi`, and `crates/scaling` under the `phase4` feature.
- `protocol-nvme` returns `NvmeView` backed by `spdk-rs` namespaces. It calls `policy_compiler::compile_scaling` (via `scaling::compiler`) with `Telemetry::ViewProjection` to emit `ScalingAction::Federate`/`ShardEC` hooks.
- `protocol-nfs` exposes `export_nfs_view()`, returning a running `nfs-rs::NfsServer`. Federation actions mirror the NVMe flow.
- `protocol-fuse` provides the local projection mount: a read-only kernel FUSE filesystem on Unix (exposing `/content`), with a portable content-file view fallback elsewhere.
- `protocol-csi` provisions Kubernetes volumes through `csi-driver-rs` (stub) and publishes capsules via the local view mount.
Each protocol forwards actions to `MeshNode::federate_capsule` and `MeshNode::shard_metadata`; the mesh node now talks to a lightweight `raft-rs` cluster stub that stores shards per zone.
- `MeshNode` uses `RaftCluster::{new, for_zone}` and `ShardKey::new` when sharding metadata, writing serialized capsule records to Raft logs (stubbed in `vendor/raft-rs`).
- Capsules derive deterministic shard IDs via `CapsuleId::shard_keys(count)`.
- The CLI triggers these flows through the `spacectl project --view <nvme|nfs|fuse|csi>` command (see below).
- For payload replication across zones, use `crates/federation::Bridge` with `spacectl zone add` + `spacectl federation serve` (see `scripts/test_federation_mock.sh`); a sketch of that flow follows below.
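A minimal shape of that cross-zone flow might look like the following. The subcommands come from this document, but every flag and argument shown here is a hypothetical placeholder; `scripts/test_federation_mock.sh` remains the authoritative reference.

```bash
# Hypothetical flags/arguments for illustration only -- consult
# scripts/test_federation_mock.sh for the real invocation.

# In the remote zone: start the gRPC WAN bridge (listen address is assumed).
spacectl federation serve --listen 0.0.0.0:7443 &

# Locally: register the remote zone (zone name and endpoint are illustrative).
spacectl zone add --name geo-east --endpoint 203.0.113.10:7443

# Store a capsule whose policy lists geo-east under federation.targets;
# the bridge replicates the payload into that zone.
spacectl put ./hello.txt --policy-file examples/phase4-policy.yaml
```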
- Protocol views read via the shared pipeline (`WritePipeline::read_capsule`/`read_range`), which performs decompression/decryption based on stored segment metadata and capsule policy.
- Policy enforcement is centralized: all Phase 4 views invoke `scaling::enforce_view_policy` before projection, so federation/sharding actions can execute prior to exposing the view.
```bash
cargo run -p spacectl -- project \
  --view nvme \
  --id 550e8400-e29b-41d4-a716-446655440000 \
  --policy-file examples/phase4-policy.yaml
```

- The command loads a YAML policy, spins up a minimal `MeshNode` (Metro zone, `127.0.0.1:0`), and routes to the right protocol helper.
- Policies can request sovereignty/latency targets and optional federation rules via `federation` (targets + priority).
- Enable the entire pipeline with `cargo build --features phase4` or `spacectl --features phase4 project ...`.
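Since `phase4` is opt-in, the `cargo run` invocation above presumably needs the feature enabled at build time as well. A sketch (the flag placement is an assumption about this workspace's defaults):

```bash
# Same projection as above, with the phase4 feature enabled for the build.
cargo run -p spacectl --features phase4 -- project \
  --view nvme \
  --id 550e8400-e29b-41d4-a716-446655440000 \
  --policy-file examples/phase4-policy.yaml
```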
Local projection mount (recommended for dev validation)
```bash
# 1) Store a local file as a capsule (optionally with federation targets)
spacectl put ./hello.txt --id 550e8400-e29b-41d4-a716-446655440000 --policy-file examples/phase4-policy.yaml

# 2) Project it into a directory containing `content`
spacectl project mount --id 550e8400-e29b-41d4-a716-446655440000 --target /tmp/space-view

# 3) Use standard tooling
cat /tmp/space-view/content
```
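To sanity-check the read pipeline end-to-end, you can compare the projected `content` against the original file and read a byte range through the view. Unmounting applies only when the kernel FUSE path was used; `fusermount3` is standard libfuse tooling, not a `spacectl` command, and its use here is an assumption.

```bash
# The projected view should be byte-identical to the stored file.
cmp ./hello.txt /tmp/space-view/content && echo "view matches source"

# Range read through the view (presumably exercising WritePipeline::read_range).
dd if=/tmp/space-view/content bs=1 skip=4 count=8 2>/dev/null

# If the kernel FUSE mount was used (Unix + kernel_fuse feature), unmount with:
fusermount3 -u /tmp/space-view
```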
- Unit Tests per crate
  - `protocol-nvme` ensures `project_nvme_view` returns an `NvmeView` and exercises the Raft stubs.
  - `protocol-nfs` reuses the metadata assertions and validates the `NfsServer` is configured even after federation.
  - `protocol-fuse` and `protocol-csi` have tokio tests for the portable view mount/provisioning handles (kernel FUSE is Unix-only).
- Integration idea
  - Multi-node KIND scenario (Phase 4 script) writes capsules, projects an NFS view, federates to a geo zone, and re-reads the data.
- Security / Chaos
  - `scripts/test_federation_resilience.sh` is currently a local Phase 3 Raft failover smoke test (3 nodes, leader kill, metadata ops continue). A Chaos Mesh/KIND partition harness is a future add-on.
- Benchmarks (future)
  - Use Criterion for `project_nvme_view` latency (<50 ms) and `MeshNode::federate_capsule` (<100 µs) by mocking RDMA loops.
- Smoke scripts
  - `scripts/test_phase4.sh` runs `spacectl project --view nvme` and validates NVMe discovery output end-to-end.
  - `scripts/test_phase4_projection.sh` runs `spacectl put` + `spacectl project mount` and verifies that `cat <mount>/content` works.
  - `scripts/test_phase4_views.sh` builds `spacectl` with `--features phase4`, runs a KIND multi-node cluster (`deployment/kind-config.yaml`), projects NVMe/NFS/CSI views, and relies on `kind`/`kubectl` to deploy the driver (`deployment/csi-driver.yaml`).
  - `scripts/test_federation_resilience.sh`: local 3-node Phase 3 Raft metadata failover smoke test.
- `deployment/kind-config.yaml` describes a 3-node cluster (control-plane + 2 workers) with port mappings for NVMe/TCP.
- `deployment/csi-driver.yaml` is a namespaced Deployment + Service for the CSI driver built from `spacectl`.
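If you want to reproduce the manifest-driven part of the smoke test by hand, the standard `kind` and `kubectl` commands apply. A sketch; the scripts above remain the reference for the full flow:

```bash
# Bring up the 3-node cluster described by the manifest.
kind create cluster --config deployment/kind-config.yaml

# Deploy the CSI driver Deployment + Service.
kubectl apply -f deployment/csi-driver.yaml

# Confirm the driver pods come up before projecting CSI views.
kubectl get pods -A
```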
- Week 1: Bootstrap the `phase4` feature, add protocol crates, confirm `phase4` gating.
- Week 2: Wire new views, metadata sharding, `MeshNode` federation, and CLI hooks.
- Week 3: Integration scripts + KIND/CAPI manifest; add tests.
- Week 4: Benchmarks, security validators (eBPF policy gate placeholders, chaos testing).
- Week 5: Docs, demos, multi-node recordings. (Weeks 5-6 are buffer for polish.)
- SPDK, NFS, FUSE dependencies: We vendor minimal crates (`spdk-rs`, `nfs-rs`, `fuse-rs`, `csi-driver-rs`) as placeholders and keep the hardware-specific logic wrapped in feature gates.
- Raft/Paxos complexity: Start with single-`MeshNode` shards and a `raft-rs` stub. Replace the stub with a negotiable cluster when production hardware is ready.
- Latency: Sampling with Criterion and tracing (`tokio::time::Instant`) ensures views stay under 50 ms; fall back to TCP/TLS transport when RDMA is not present.
- Kubernetes integration: Scripts deploy the CSI driver to KIND for sanity checking; the driver is still a facade around `spacectl project csi`.
- Why now? Phase 3 proved the universal capsule and PODMS scaling. This phase completes the fabric by adding cross-protocol views and federated metadata.
- Hardware required? Linux only today. RDMA/Mellanox is optional; the scripts and vendor crates mock transport with TCP ports.
- Does single-node mode break? No. `phase4` is opt-in. Without `--features phase4`, the new crates and CLI path remain unused.
- Can we add SMB or iSCSI later? Yes. The new protocol crates expose `project_nvme_view`-style hooks where future protocols can plug right in.
- How do we prove compliance? Logs include tracing spans (`nvme_project`, `nfs_export`, `fuse_mount`, `csi_provision`). `MeshNode` emits `info!` events when shards are stored, making audit chains easy to follow.
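Assuming the binaries use a standard `tracing` subscriber with an env filter (an assumption; only the span names come from the list above), surfacing those spans for an audit trail could look like:

```bash
# Emit info-level spans/events while projecting a view, then grep the audit-relevant ones.
RUST_LOG=info cargo run -p spacectl --features phase4 -- project \
  --view nvme \
  --id 550e8400-e29b-41d4-a716-446655440000 \
  --policy-file examples/phase4-policy.yaml 2>&1 | grep -E 'nvme_project|federate|shard'
```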
Refer to `federation.md` for zonal routing and Raft shard details, and to the README for quick-start commands.