
Rectify EmailBrain around the intended Swift/Rust adapter-capable product path #6

Open
rogu3bear wants to merge 2 commits into main from alpha/sovereign-rectification

Conversation

@rogu3bear
Owner

Summary

This PR rectifies EmailBrain back toward its intended product shape: a local-first email assistant centered on Swift ingestion, a Rust backend, a working web UI, and an adapter-capable chat path.

Behavioral Changes

  • Replaces the old Python runtime entrypoints with the Rust Axum backend and keeps the backend on :3901.
  • Restores the frontend API/client layer and keeps the frontend on :3900.
  • Makes LM Studio the default inference provider again so adapter-backed chat follows the intended product path.
  • Keeps JKCA as an explicit base-chat fallback instead of the default adapter path.
  • Routes adapter-backed chat through LM Studio even when JKCA is configured as the base provider.
  • Fixes the Swift Mail integration so success is tied to the actual backend POST result.
  • Makes Swift extraction operate on the selected email and emits a schema the adapter tooling can consume.
  • Updates the Python adapter tooling to target the local EmailBrain API rather than LM Studio directly.
  • Rewrites the key docs to match the live runtime contract.

Implications

  • Adapter-backed chat now depends on LM Studio being available, by design.
  • JKCA remains useful for base-model chat, but no longer defines the product default.
  • The seeded adapter record still needs a real adapter artifact before adapter chat can succeed end-to-end.

Risk Areas

  • Inference routing: medium risk. The new fallback logic changes provider selection semantics.
  • Swift ingestion UX: medium risk. Success/failure handling is now asynchronous and tied to network outcome.
  • Adapter tooling: medium risk. Schema compatibility was broadened, but the actual ML training stack remains optional and environment-sensitive.
  • Docs/runtime alignment: low risk. The docs now match the corrected contract.

Confidence By Area

  • Swift source validity: high
  • Rust backend compilation and tests: high
  • Frontend build and lint: high
  • Provider routing semantics: medium-high
  • Adapter training/test tooling: medium
  • End-to-end adapter inference with a real local adapter artifact: low-medium

Verification

  • swiftc -typecheck on the Swift app sources
  • xcodebuild -project swiftmail/mailbrain/mailbrain.xcodeproj -scheme mailbrain -sdk macosx CODE_SIGNING_ALLOWED=NO CODE_SIGNING_REQUIRED=NO build
  • python3 -m py_compile adapters/train_lora.py adapters/test_lora.py backend/config.py
  • cargo test --manifest-path backend/Cargo.toml
  • npm run build in frontend
  • npm run lint in frontend
  • Live backend smoke on :3901 for /health
  • Live chat routing smoke with EMAILBRAIN_INFERENCE_PROVIDER=jkca, confirming adapter traffic no longer fails with the old provider-contract 400

What To Look At

  • Provider routing in backend/src/main.rs
  • Swift async result handling in swiftmail/mailbrain/mailbrain/ContentView.swift and swiftmail/mailbrain/mailbrain/MailService.swift
  • Schema contract alignment between swiftmail/mailbrain/mailbrain/EmailExtractor.swift and adapters/train_lora.py
  • Updated product contract in README.md, architecture_analysis.md, and docs/sequence-diagram.md

Next Steps

  • Add or train a real adapter artifact for the seeded adapter path.
  • Decide whether JKCA should eventually learn the adapter contract instead of remaining fallback-only.
  • Add an automated smoke test for provider-aware adapter routing.

…duct path

This replaces the incomplete Python backend with the Rust Axum service, restores the frontend API layer, moves the app onto the 3900/3901 ports, and realigns the runtime so adapter-backed chat follows the LM Studio path by default while JKCA remains an explicit base-chat fallback. The reasoning is that the product idea and the live runtime had drifted apart: the UI advertised adapters, the docs contradicted the implementation, and the old default provider could not satisfy the adapter contract.

The commit also fixes the broken Swift ingestion surface by removing the duplicate LoRA extraction path, tying success UI to the actual backend response, and making extraction operate on the selected email. The remaining Python tooling is kept only where it still adds value for training/testing adapters, with the trainer and test harness updated to consume the extracted schema and target the local EmailBrain API rather than LM Studio directly.
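
As a rough illustration of that extraction-to-training handoff, the record shape below is hypothetical (the real field names are defined by `EmailExtractor.swift` and consumed by `adapters/train_lora.py`); the point is that the trainer validates the schema it is handed instead of assuming it:

```python
# Hypothetical shape of one extracted email; the actual schema contract is
# between EmailExtractor.swift and adapters/train_lora.py.
REQUIRED_FIELDS = ("subject", "sender", "body")


def missing_fields(record: dict) -> list[str]:
    """Return the required fields absent or empty in an extracted record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]


def to_training_example(record: dict) -> dict:
    """Convert one extracted email into a prompt/completion training pair."""
    missing = missing_fields(record)
    if missing:
        raise ValueError(f"extracted record missing fields: {missing}")
    return {
        "prompt": f"Subject: {record['subject']}\nFrom: {record['sender']}\n",
        "completion": record["body"],
    }
```

Rejecting malformed records at this boundary keeps schema drift between the Swift extractor and the Python trainer visible instead of silently producing bad training pairs.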

Risk areas are the LM Studio dependency for adapter-backed chat, the seeded adapter path still requiring a real adapter artifact, and the optional JKCA fallback remaining dependent on that runtime being available when selected. These changes were made because the previous state could compile in pieces but did not reliably express the authoritative product intent end-to-end.
@rogu3bear
Owner Author

@codex review

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

…emediation

This release packages the backend, frontend, Swift, and adapter-tooling changes that were driven by the 50-edge-case investigation and then validated with targeted checks and live HTTP probes.

Behavior:
- hardens the Rust API around input validation, pagination, deduplication, safer CORS aliases, bounded logging, safer adapter-path handling, and clearer provider failures
- upgrades the Next.js client with abortable data fetching, safer date formatting, semantic selection controls, adapter recovery, dynamic endpoint labels, and local chat transcript continuity
- fixes the Swift Mail bridge to prefer the selected message, stop fabricating empty-mail placeholders, preserve recipients and thread metadata when available, honor configurable backend URLs, and avoid leaking message content to stdout
- makes the LoRA scripts resolve paths from the repo instead of caller cwd and tolerate missing optional Python backend config imports

Reasoning:
- the prior product path relied on optimistic local assumptions, which caused ambiguous failures, stale UI state, filesystem drift, and poor operator feedback once the local runtimes were missing or misconfigured
- the chosen path keeps diffs reviewable and reversible while moving failures to the boundary where they can be reported deterministically

Evidence:
- cargo test passed in backend (11 tests)
- npm run type-check passed in frontend
- python3 -m py_compile passed for adapters/test_lora.py and adapters/train_lora.py
- xcrun swiftc -typecheck passed for swiftmail/mailbrain/mailbrain/*.swift
- live probes confirmed backend health, frontend reachability, email ingest/list/detail, duplicate-ingest handling, bad-date rejection, invalid-id rejection, empty-prompt rejection, and localhost/127.0.0.1 CORS aliasing; remaining chat failures were traced to missing LM Studio and missing adapter artifacts rather than code regressions

Risk:
- base chat still depends on LM Studio running at the configured endpoint
- adapter chat still depends on a real adapter existing on disk for the seeded registry path
