Local inference server adapters for the Tenet platform.
This package provides:
- LMStudioAdapter for LM Studio native v1 model lifecycle APIs plus OpenAI-compatible generation
- OllamaAdapter for Ollama native APIs
- GenericLocalAdapter for generic local OpenAI-compatible servers
All adapters enforce an SSRF guard that only permits loopback and private-network targets.
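The guard described above can be sketched as follows. This is a minimal illustration, not the package's actual implementation; the function name `is_local_target` and the exact policy (reject on any non-loopback, non-private resolved address) are assumptions.

```python
# Sketch of an SSRF guard permitting only loopback and private-network
# targets. Assumed helper, not the adapters' real code.
import ipaddress
import socket
from urllib.parse import urlparse

def is_local_target(url: str) -> bool:
    """Return True only if every address the URL's host resolves to is
    loopback or private (e.g. RFC 1918 ranges)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canonname, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if not (addr.is_loopback or addr.is_private):
            return False
    return True
```

Checking every resolved address (rather than just the first) avoids a bypass where a hostname resolves to both a private and a public address.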
- Requirements: SRS_TENETLLMLOCAL
- Architecture: ARCH_TENETLLMLOCAL
- Realization: PLAN_TENETLLMLOCAL
- Verification: VER_TENETLLMLOCAL
- Governance: TenetOS
Install with:

    pip install tenet-llm-local

The adapters are exposed through the `tenet.llm_adapters` entry-point group declared in pyproject.toml:

    [project.entry-points."tenet.llm_adapters"]
    lmstudio = "tenet_llm_local._lmstudio:LMStudioAdapter"
    ollama = "tenet_llm_local._ollama:OllamaAdapter"
    local = "tenet_llm_local._generic:GenericLocalAdapter"