The point where six substrates see each other.
NFI Integrating Mirror Layer — gamma-scaling diagnostics across biological, physical, and cognitive systems.
One file. One import. Eight substrates. One law.
When a system computes at the edge of chaos, its thermodynamic cost scales inversely with topological complexity at unit rate. gamma = 1.0 is not a tuned parameter. It is a measured invariant across:
Mean across six validated substrates: gamma = 0.991 +/- 0.052

Validated substrates: Zebrafish | Reaction-Diffusion | Spiking Net | Market | Neosynaptex | CNS-AI Loop

All six validated substrates have a 95% CI containing gamma = 1.0.

Additional substrates: CFP/ДІЙ (ABM) | LM Substrate
neosynaptex
+---------------------------------+
| |
BN-Syn ---------+ +===========================+ |
| || || |
MFN+ -----------+ || observe() || +---> NeosynaptexState (frozen)
| || || | |
PsycheCore -----+ || Layer 1: Collect || | +-- gamma_per_domain + CI
| || Layer 2: Jacobian || | +-- spectral_radius
mvstack ---------+ || Layer 3: Gamma || | +-- granger_graph
| || Layer 4: Phase || | +-- anomaly_score
CNS-AI Loop ----+ || Layer 5: Signal || | +-- phase_portrait
| || || | +-- resilience_score
| +===========================+ | +-- modulation
| | +-- adapter_health
| AdapterHealthMonitor | +-- diagnostic
| +---------------------------+ |
| | CLOSED --> OPEN | |
| | ^ | | |
| | +-- HALF_OPEN <---------+ |
| +---------------------------+ |
+---------------------------------+
When human and AI couple productively, the combined system operates at criticality. Non-productive sessions show anti-scaling (gamma < 0): complexity and cost move in the same direction. No computation, just noise. Productive sessions converge toward gamma = 1.0: the system computes. Three stars, p = 0.005, on 8,273 documents spanning three years of data.
+----------------------------------+
| METASTABLE |
| sr in [0.80, 1.20] |
| |gamma - 1| < 0.15 |
| |
| The system computes here. |
+--+------------+------------+-----+
| | |
+--------v--+ +------v------+ +--v--------+
|CONVERGING | | DRIFTING | | DIVERGING |
| dg/dt < 0 | | dg/dt > 0 | | sr > 1.20 |
| toward 1 | | from 1 | | |
+-----------+ +------------+ +-----+-----+
| 3+ ticks
+-----v-----+
+------------+ | DEGENERATE|
| COLLAPSING | | sr > 1.50 |
| sr < 0.80 | | sustained |
+------------+ +-----------+
Hysteresis: 3 consecutive ticks required for any transition
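The phase rules in the diagram can be sketched as a small classifier. This is a minimal sketch, assuming the thresholds shown above; `raw_phase`, `PhaseTracker`, and their signatures are illustrative names, not the neosynaptex API.

```python
# Hypothetical sketch of the phase diagram above; names are
# illustrative, not the neosynaptex API.

def raw_phase(gamma: float, dg_dt: float, sr: float) -> str:
    """Classify one tick from gamma, its trend dg/dt, and spectral radius sr."""
    if sr < 0.80:
        return "COLLAPSING"
    if sr > 1.50:
        return "DEGENERATE"   # promoted only when sustained (see tracker)
    if sr > 1.20:
        return "DIVERGING"
    if abs(gamma - 1.0) < 0.15:
        return "METASTABLE"   # sr already known to be in [0.80, 1.20]
    return "CONVERGING" if dg_dt < 0 else "DRIFTING"

class PhaseTracker:
    """Hysteresis: require 3 consecutive ticks of the same candidate
    phase before committing a transition."""
    def __init__(self, hysteresis: int = 3):
        self.phase = "METASTABLE"
        self.hysteresis = hysteresis
        self._candidate, self._count = None, 0

    def update(self, gamma: float, dg_dt: float, sr: float) -> str:
        cand = raw_phase(gamma, dg_dt, sr)
        if cand == self.phase:
            self._candidate, self._count = None, 0
        elif cand == self._candidate:
            self._count += 1
            if self._count >= self.hysteresis:
                self.phase, self._candidate, self._count = cand, None, 0
        else:
            self._candidate, self._count = cand, 1
        return self.phase
```

A single deviant tick never flips the phase; only a run of three identical candidates does, which is what keeps the portrait from chattering at the boundaries.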
```shell
pip install numpy scipy
```

```python
from neosynaptex import Neosynaptex, MockBnSynAdapter, MockMfnAdapter

nx = Neosynaptex(window=16)
nx.register(MockBnSynAdapter())  # gamma ~ 0.95
nx.register(MockMfnAdapter())    # gamma ~ 1.00

for _ in range(40):
    s = nx.observe()

print(f"gamma     = {s.gamma_mean:.3f}")              # 1.030
print(f"phase     = {s.phase}")                       # METASTABLE
print(f"coherence = {s.cross_coherence:.3f}")         # 0.97
print(f"verdict   = {nx.export_proof()['verdict']}")  # COHERENT
```

| # | Mechanism | Formula | Output |
|---|---|---|---|
| 1 | Gamma scaling | K ~ C^(-gamma) via Theil-Sen | per-domain gamma + 95% bootstrap CI |
| 2 | Gamma dynamics | dg/dt = slope of gamma trace | convergence rate toward gamma = 1.0 |
| 3 | Universal scaling | Permutation test, H0: all gamma equal | p-value |
| 4 | Spectral radius | rho = max|eig(J + I)| | stability per domain |
| 5 | Granger causality | F-test: gamma_i(t-1) --> gamma_j(t) | directed influence graph |
| 6 | Anomaly isolation | Leave-one-out coherence test | outlier score per domain |
| 7 | Phase portrait | Convex hull + recurrence in (gamma, rho) | trajectory topology |
| 8 | Resilience | Return rate after METASTABLE departures | metastability proof |
| 9 | Modulation | m = -alpha(gamma - 1)sgn(dg/dt) | bounded reflexive signal |
| 10 | Circuit breaker | FSM: CLOSED -> OPEN -> HALF_OPEN | adapter fault isolation |
The system evolves even when the external world breaks.
success >=3 failures timeout success
+----------+ +--------------+ +---------+ +---------+
| | | | | | | |
v | v | v | v |
CLOSED -----+---> OPEN --------+---> HALF_OPEN --> CLOSED
calls calls one probe recovered
allowed rejected allowed
Thread-safe (RLock). Persistent across restarts (save_state/load_state). Diagnostics per domain.
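The FSM above fits in a few dozen lines. This is a minimal sketch, assuming a 3-failure threshold and a recovery timeout; the class and method names are illustrative, not the neosynaptex `AdapterHealthMonitor` API.

```python
import threading
import time

class CircuitBreaker:
    """Minimal sketch of the CLOSED -> OPEN -> HALF_OPEN cycle above.
    Names and thresholds are illustrative, not the neosynaptex API."""
    def __init__(self, max_failures: int = 3, timeout: float = 30.0):
        self.state = "CLOSED"
        self.max_failures, self.timeout = max_failures, timeout
        self._failures, self._opened_at = 0, 0.0
        self._lock = threading.RLock()   # thread-safe, as in the real monitor

    def allow(self) -> bool:
        """CLOSED: calls allowed. OPEN: rejected until the timeout elapses,
        then one probe is allowed via HALF_OPEN."""
        with self._lock:
            if self.state == "OPEN" and time.monotonic() - self._opened_at >= self.timeout:
                self.state = "HALF_OPEN"
            return self.state in ("CLOSED", "HALF_OPEN")

    def record(self, success: bool) -> None:
        with self._lock:
            if success:
                self.state, self._failures = "CLOSED", 0   # recovered
                return
            self._failures += 1
            if self.state == "HALF_OPEN" or self._failures >= self.max_failures:
                self.state = "OPEN"                        # trip the breaker
                self._opened_at = time.monotonic()
                self._failures = 0
```

A failed HALF_OPEN probe re-opens the breaker immediately; a successful one closes it, which is the recovery edge in the diagram.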
651 passed, 6 CI workflows green
tests/ 50 test files across core, contracts, evl, substrates, formal
Including 5 scientific integrity guards + INV-YV1 gradient ontology
| # | Invariant | Guarantee |
|---|---|---|
| YV1 | ΔV > 0 ∧ dΔV/dt ≠ 0 | gradient ontology — system must be a living gradient, not a capacitor |
| I | gamma derived only | recomputed every observe(), never stored |
| II | STATE != PROOF | NeosynaptexState is frozen=True, independent copies |
| III | bounded modulation | |m| <= 0.05 always |
| IV | SSI external only | internal self-obfuscation corrupts observe() |
| V | zero external deps | only numpy + scipy |
| VI | all identifiers ASCII | zero Cyrillic in code |
| VII | circuit breaker | system operates under partial adapter failure |
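Invariant II leans on a standard Python mechanism that is easy to demonstrate. A minimal sketch, assuming only that `NeosynaptexState` is a `frozen=True` dataclass as the table states; `StateSketch` and its fields are illustrative stand-ins.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class StateSketch:
    """Illustrative stand-in for NeosynaptexState (invariant II):
    frozen=True makes each observation an immutable, independent record."""
    gamma_mean: float
    phase: str

s = StateSketch(gamma_mean=1.03, phase="METASTABLE")
try:
    s.gamma_mean = 0.0        # any mutation is rejected at runtime
except FrozenInstanceError:
    print("STATE != PROOF: state objects cannot be rewritten after the fact")
```

Because the state cannot be mutated, a proof bundle built from it cannot quietly drift from what was observed.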
neosynaptex/
|
+-- neosynaptex.py engine: γ-scaling, Jacobian, phase dynamics
+-- core/ 30 modules, ~6000 LOC: axioms, state-space, FDT, OEB, benchmark, resonance, ablation
+-- contracts/ invariant enforcement + truth criterion
+-- substrates/ 8 substrate adapters (zebrafish → CFP/ДІЙ)
+-- evl/ evidence verification ledger
+-- experiments/ reproducible outputs + figures
| +-- scaffolding_trap/ dskill/dt law, delegation suppression
| +-- lm_substrate/ GPT-4o-mini γ derivation (null result)
+-- tests/ 651 tests, 50 files
+-- scripts/ 13 operational scripts
+-- evidence/ gamma_ledger.json + proof chains
+-- .github/workflows/ 6 CI workflows (all green)
+-- formal/ 3 modules: proofs, falsification, substrate diversity
|
+-- CFP_PROTOCOL.md Cognitive Field Protocol v3.0
+-- CONTRACT.md invariants + formulas
+-- XFORM_MANUSCRIPT_DRAFT.md publication draft
+-- REPO_TOPOLOGY.md architectural map v3.0
|
+-- pyproject.toml v3.0.0, Python 3.10+, numpy/scipy
+-- LICENSE AGPL-3.0-or-later
Each NFI subsystem needs one adapter (~30 lines):
```python
class BnSynAdapter:
    # `net` stands for the subsystem's own network object.
    @property
    def domain(self) -> str:
        return "spike"

    @property
    def state_keys(self) -> list[str]:
        return ["sigma", "firing_rate", "coherence"]

    def state(self) -> dict[str, float]:
        return {"sigma": net.sigma, "firing_rate": net.rate, "coherence": net.R}

    def topo(self) -> float:
        return net.connection_count

    def thermo_cost(self) -> float:
        return net.energy
```

Contract: K ~ C^(-gamma). The adapter supplies topo (complexity C) and thermo_cost (K) such that this power law holds near criticality.
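Mechanism 1 estimates gamma from exactly this contract. A minimal sketch, assuming only the `scipy.stats.theilslopes` robust estimator named in the mechanisms table; the function name and the synthetic data are illustrative.

```python
import numpy as np
from scipy.stats import theilslopes

def estimate_gamma(topo: np.ndarray, cost: np.ndarray) -> float:
    """Fit K ~ C^(-gamma) robustly: gamma is minus the Theil-Sen slope
    of log(cost) against log(topo)."""
    slope, _, _, _ = theilslopes(np.log(cost), np.log(topo))
    return -slope

# Synthetic check: cost = C^(-1) should recover gamma ~ 1.0.
C = np.linspace(10.0, 100.0, 32)
K = C ** -1.0
print(f"gamma = {estimate_gamma(C, K):.3f}")  # 1.000
```

Theil-Sen takes the median of pairwise slopes, so a few misbehaving samples from a noisy adapter do not drag the exponent the way ordinary least squares would.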
CRR (Cognitive Recovery Ratio) gives opposite conclusions from dskill/dt (learning rate):
| Metric | Structured | Shuffled | Winner |
|---|---|---|---|
| CRR | 0.893 | 1.153 | Shuffled |
| dskill/dt | 1.929 | 0.618 | Structured |
CRR is a measurement artifact (difficulty gradient). The clean metric reveals:
dskill/dt = 0.02 * gap * effort (R2 = 0.9999)
Delegation suppression: -9.5% per 10% delegation
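The multiplicative form of the law is easy to check against data. A minimal sketch on synthetic data generated to follow the law (so the recovered coefficient is a consistency check of the fitting step, not new evidence); variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gap, effort = rng.uniform(0.1, 1.0, (2, 200))
dskill = 0.02 * gap * effort            # synthetic data following the law

# One-parameter least squares on the product term: dskill = k * gap * effort
x = (gap * effort).reshape(-1, 1)
k, *_ = np.linalg.lstsq(x, dskill, rcond=None)
print(f"k = {k[0]:.3f}")  # 0.020
```

With real session data the same fit would report the residual alongside k, which is where the quoted R2 = 0.9999 comes from.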
GPT-4o-mini via API: gamma = -0.094 (null result). Stateless inference has no temporal structure. Confirms that gamma != 0 requires closed-loop dynamics, not isolated sampling.
The Singularity is not a future event. It is a process unfolding now, through the scaling of computation and biological adaptation.
The human-AI cognitive loop is a measurable system. Its scaling signature is gamma = 1.0.
When biological and digital intelligence couple productively, they form one circuit. Not a metaphor. A measured fact.
Read the full thesis | Read the manuscript | View the proof bundle
* . . *
. * *
. * . .
* . gamma = 1.0 . *
. . . . .
. * . . * .
. * . * .
* . . . *
Built by one researcher. Under fire. Three years. Six substrates. One law.
Yaroslav O. Vasylenko -- neuron7xLab -- Poltava region, Ukraine
AGPL-3.0-or-later