
Migrate LLM API cache from pickle to JSON; extract shared LLMCacheBase class#146

Draft
Copilot wants to merge 2 commits into main from copilot/refactor-llm-api-caching

Conversation


Copilot AI commented Feb 18, 2026

The API cache stores only plain dicts, so pickle is unnecessary (and JSON is human-readable and safe to load). The Ollama client also duplicated the caching logic instead of sharing a common base.

Changes

  • LLMCacheBase class in openai_client.py — extracts _save_cache, _load_cache, set_api_cache using json.dump/json.load with UTF-8 encoding
  • OpenAIClient inherits from LLMCacheBase, removes redundant cache methods and pickle import
  • OllamaClient inherits from LLMCacheBase instead of reimplementing caching standalone
  • AzureClient unchanged — inherits transitively via OpenAIClient
  • Default cache filename changed from openai_api_cache.pickle → openai_api_cache.json in config.ini, examples/config.ini, and __init__.py
  • Tests updated from pickle to JSON assertions; added TestLLMCacheBase (6 tests) and TestOllamaClientCache (5 tests) covering inheritance, roundtrip, corruption handling
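The extracted base class might look like the sketch below. This is a minimal illustration based only on the method names listed above (_save_cache, _load_cache, set_api_cache); the constructor signature and error-handling details are assumptions, not copied from the PR diff.

```python
import json
import os


class LLMCacheBase:
    """Sketch of the shared JSON-backed cache base (details assumed)."""

    def __init__(self, cache_api_calls=False, cache_file_name="openai_api_cache.json"):
        self.set_api_cache(cache_api_calls, cache_file_name)

    def set_api_cache(self, cache_api_calls, cache_file_name="openai_api_cache.json"):
        # Enable/disable caching and (re)load the backing JSON file.
        self.cache_api_calls = cache_api_calls
        self.cache_file_name = cache_file_name
        self.api_cache = self._load_cache() if cache_api_calls else {}

    def _save_cache(self):
        # JSON replaces pickle: the cache holds only dicts, so json.dump
        # suffices and the file stays human-readable.
        with open(self.cache_file_name, "w", encoding="utf-8") as f:
            json.dump(self.api_cache, f)

    def _load_cache(self):
        # A missing or corrupted cache file yields an empty cache
        # instead of raising, mirroring the corruption-handling tests.
        if os.path.exists(self.cache_file_name):
            try:
                with open(self.cache_file_name, "r", encoding="utf-8") as f:
                    return json.load(f)
            except (json.JSONDecodeError, OSError):
                return {}
        return {}
```

A roundtrip then works as expected: values written via _save_cache come back identical through _load_cache, and feeding the loader a non-JSON file falls back to an empty dict.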

Inheritance

LLMCacheBase          ← _save_cache, _load_cache, set_api_cache (JSON)
├── OpenAIClient
│   └── AzureClient
└── OllamaClient
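The class relationships in the diagram can be sketched as below. The base class body is a stand-in (only _save_cache is shown, and its contents are assumed); the point is that all three clients resolve their cache methods to the single shared implementation.

```python
import json


class LLMCacheBase:
    """Stand-in for the shared base from openai_client.py (body assumed)."""

    def __init__(self, cache_file_name="openai_api_cache.json"):
        self.cache_file_name = cache_file_name
        self.api_cache = {}

    def _save_cache(self):
        with open(self.cache_file_name, "w", encoding="utf-8") as f:
            json.dump(self.api_cache, f)


class OpenAIClient(LLMCacheBase):   # caching inherited, pickle import gone
    pass


class AzureClient(OpenAIClient):    # unchanged: inherits transitively
    pass


class OllamaClient(LLMCacheBase):   # no longer reimplements caching standalone
    pass
```

This is what the inheritance-check tests presumably assert: every client is a subclass of LLMCacheBase, and none of them shadows the shared cache methods.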


…Base class

- Create LLMCacheBase class with JSON-based _save_cache, _load_cache, set_api_cache
- Refactor OpenAIClient to inherit caching from LLMCacheBase
- Refactor OllamaClient to inherit from LLMCacheBase (was standalone)
- AzureClient inherits from OpenAIClient, so it works automatically
- Update default cache file name from openai_api_cache.pickle to openai_api_cache.json
- Update tests to use JSON format instead of pickle
- Add tests for LLMCacheBase, OllamaClient cache, and inheritance checks
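The default-filename change touches config.ini and examples/config.ini. A plausible fragment after the change might read as follows; the section and key names here are assumptions for illustration, not copied from the repository:

```ini
[OpenAI]
; Cache is now plain JSON rather than pickle
CACHE_API_CALLS=False
CACHE_FILE_NAME=openai_api_cache.json
```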

Co-authored-by: paulosalem <1709404+paulosalem@users.noreply.github.com>
Copilot AI changed the title [WIP] Refactor LLM API caching to use JSON format Migrate LLM API cache from pickle to JSON; extract shared LLMCacheBase class Feb 18, 2026
Copilot AI requested a review from paulosalem February 18, 2026 15:56
