TenetLLMLocal provides local inference server adapters for the Tenet platform.
- Requirements: SRS_TENETLLMLOCAL
- Architecture: ARCH_TENETLLMLOCAL
- Realization: PLAN_TENETLLMLOCAL
- Verification: VER_TENETLLMLOCAL
- Governance: TenetOS
This package includes adapters for LM Studio, Ollama, and generic OpenAI-compatible local servers.
Install with `pip install tenet-llm-local`. Configure TenetCore backend provider entries to use the lmstudio, ollama, or local adapter names.
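The README does not show TenetCore's actual configuration schema, so the following is only a hypothetical sketch of what provider entries selecting these adapters might look like, expressed as a Python mapping. Every key name (`adapter`, `base_url`) is an assumption for illustration; the port numbers are the commonly used local defaults for LM Studio (1234) and Ollama (11434).

```python
# Hypothetical provider entries; key names and structure are assumptions,
# not TenetCore's real schema.
PROVIDERS = {
    "lmstudio": {
        "adapter": "lmstudio",                    # adapter name from this package
        "base_url": "http://127.0.0.1:1234/v1",   # LM Studio's usual local port
    },
    "ollama": {
        "adapter": "ollama",
        "base_url": "http://127.0.0.1:11434",     # Ollama's usual local port
    },
    "local": {
        "adapter": "local",                       # generic OpenAI-compatible server
        "base_url": "http://127.0.0.1:8000/v1",
    },
}

def adapter_for(provider_name: str) -> str:
    """Return the adapter name configured for a provider entry."""
    return PROVIDERS[provider_name]["adapter"]
```

Note that all targets point at loopback addresses, which is consistent with the SSRF restrictions described below.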
Endpoints are configured through host runtime configuration and environment variables. SSRF safeguards restrict request targets to loopback and private-network destinations.
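The package's actual SSRF guard is not shown here, but the policy described above can be sketched with the standard-library `ipaddress` and `socket` modules: resolve the target host and accept it only if every resolved address is loopback or private. The function name and exact policy are illustrative assumptions, not the package's implementation.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_allowed_target(url: str) -> bool:
    """Sketch of a loopback/private-only SSRF guard (illustrative, not
    the package's real code). Resolves the URL's host and returns True
    only if every resolved address is loopback or private."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected
    addrs = {ipaddress.ip_address(info[4][0]) for info in infos}
    return all(a.is_loopback or a.is_private for a in addrs)
```

A real guard would also need to re-validate on redirects and pin the resolved address for the actual connection, since DNS can change between check and use.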
- Python entry-point group: `tenet.llm_adapters`
- Standalone CLI: not applicable
- Product code: src/tenet_llm_local/
- Adapter implementations: `_lmstudio.py`, `_ollama.py`, `_generic.py`
- Tests: run `pytest tests/`
- Contributing: CONTRIBUTING.md
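The three adapter modules listed above presumably share a common call shape. As a hypothetical sketch only (the actual classes in src/tenet_llm_local/ may look quite different), that shape could be expressed as a `Protocol` plus a toy stand-in; all names here are assumptions.

```python
from typing import Protocol

class LocalServerAdapter(Protocol):
    """Hypothetical common interface for the LM Studio, Ollama, and
    generic adapters; illustrative only."""
    base_url: str

    def complete(self, prompt: str, **options) -> str:
        """Send a completion request to the local server."""
        ...

class GenericAdapterStub:
    """Toy stand-in showing the shape; it performs no real HTTP."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def complete(self, prompt: str, **options) -> str:
        # A real adapter would POST to an OpenAI-compatible endpoint here.
        return f"[stub completion for: {prompt}]"
```

Keeping the adapters behind one interface like this is what lets TenetCore select `lmstudio`, `ollama`, or `local` by name without caring which server is behind the endpoint.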
See CHANGELOG.md.
AGPL-3.0-only. See LICENSE.