To build this into a CLI tool that you can invoke with `python -m explain_codebase`, you need to follow a specific directory structure and configure your `pyproject.toml` to create an entry point.
Here is the step-by-step guide.
Clone the repository:

```shell
git clone https://github.com/kwhyte7/explain-codebase.git
```
By default, this tool uses Ollama; however, you can use other providers.
Download Ollama from https://ollama.com, follow the installation instructions, and make sure it's running on port 11434. Then pull a model (for example, qwen3.5:0.8b) by running

```shell
ollama pull qwen3.5:0.8b
```

in your terminal.
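If you want to verify from a script that Ollama is actually reachable before running the tool, here is a minimal sketch. The `/api/tags` endpoint is part of Ollama's HTTP API (it lists locally available models); the helper name itself is illustrative:

```python
import json
import urllib.error
import urllib.request

def ollama_is_running(host="http://localhost:11434"):
    """Return True if an Ollama server answers at the given host."""
    try:
        # /api/tags lists locally pulled models as {"models": [...]}
        with urllib.request.urlopen(host + "/api/tags", timeout=2) as resp:
            return "models" in json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or a non-JSON response:
        # treat all of these as "not running".
        return False
```

A check like this can give a clearer error message up front than letting the tool fail mid-run.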
Change directory into the cloned repo.
Install your chosen provider's integration package:

```shell
uv add "langchain[provider]"
```

(Quote the extras so your shell doesn't try to expand the brackets.)
Add your API key in `~/.explain_codebase.conf.yml`, under `model_kwargs`. It should look something like:

```yaml
model: provider:model-alias  # example: openai:gpt-5
model_kwargs:
  api_key: your-api-key
```

To test the tool immediately without uploading it to PyPI, install your project in "editable" (dev) mode:

```shell
pip install -e .
```

Now, anywhere in your terminal, you can run:

```shell
python -m explain_codebase
```

This will start the program, and you should see a `.codebase_explained` directory appear.
Once the documentation has been generated, you can either open the raw files in a web browser, or serve them locally:

```shell
cd .codebase_explained
python -m http.server 3000
```

Then view them at http://localhost:3000/.
A few notes on how the entry point works:

- `[project.scripts]`: defines the CLI command. We named it `explain_codebase`.
- `explain_codebase.__main__:main`: Python looks for a file named `__main__.py` inside the package `explain_codebase` and runs the function called `main`.
- `python -m`: tells Python to execute the package `explain_codebase` as a script, which runs the same `__main__.py`.
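Putting those pieces together, a minimal `pyproject.toml` might look like this (the package name and version are illustrative; match them to the actual repo):

```toml
[project]
name = "explain-codebase"
version = "0.1.0"

# Maps a console command to a function inside the package.
[project.scripts]
explain_codebase = "explain_codebase.__main__:main"
```

With this in place, `pip install -e .` registers an `explain_codebase` console command in addition to the `python -m explain_codebase` form.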