🧠 Local Mind


πŸ“Έ Demo

Demo screenshots (taken 2025-07-17) are available in the repository.

πŸ§‘β€πŸ’» Requirements

  • Tavily Search API Key:
    Local Mind uses Tavily Search to fetch live web results for its research agent.

    • Get your free Tavily API key here: https://app.tavily.com/home
    • Add your key to your .env file as:
      TAVILY_API_KEY=your-key-here
      
    • This is required for web research to function!
  • Local LLM Model:
    Local Mind runs a quantized Jan-nano model on your machine, so all your data stays private.

    • What is Jan-nano?
      Jan-nano is a highly efficient, open-source LLM by Menlo, specifically designed for running on local CPUs and resource-limited hardware, making it perfect for private, local AI.

      • Trained on high-quality English datasets.
      • Optimized for speed and context length.
      • Well-suited for chat, question answering, and code.
    • Quantized Model Downloads:
      Local Mind supports quantized GGUF versions for best performance on your system.

      • Choose a quantized file (*.gguf) from this list.
      • Download the version that matches your hardware and put it in your server/model/ or the main server directory.
| Model Variant | RAM Required | File Size | Download Link |
| --- | --- | --- | --- |
| Q4, Q5, Q6 | Varies | ~1-2GB | Choose here |
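As a minimal sketch of the `.env` step above (assuming a plain `KEY=value` file format and no helper library such as `python-dotenv`), the server could load the key like this:

```python
import os
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    """Parse a minimal KEY=value .env file, skipping blanks and comments."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Read TAVILY_API_KEY from .env if present, falling back to the process environment.
env = load_env(".env") if Path(".env").exists() else {}
api_key = env.get("TAVILY_API_KEY", os.environ.get("TAVILY_API_KEY", ""))
```

The fallback to `os.environ` means an exported shell variable also works if you prefer not to keep a file on disk.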

πŸš€ Quick Start

You need both a Tavily API key and a quantized Jan-nano GGUF model file!

  1. Get your Tavily API key
    and add it to server/.env:

TAVILY_API_KEY=your-key-here

  2. Download a quantized Jan-nano model:
    Pick from Jan-nano GGUF releases
    and put your chosen .gguf file in server/model/.

  3. Follow the server and client setup steps below.


🧠 About Jan-nano

Jan-nano is a highly efficient, open-source LLM by Menlo, built for local CPUs and resource-limited hardware; see the Requirements section above for details and quantized model downloads.

πŸ•ΈοΈ About Tavily Search

  • Tavily is a fast, privacy-friendly web search API for AI research agents.
  • Get your API key at https://app.tavily.com/home.
  • Enables Local Mind to find and cite up-to-date answers beyond your files.
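The exact endpoint URL and request fields below are assumptions based on Tavily's public API, not something this README specifies; as a rough sketch, the research agent's search call could be built like this:

```python
import json
import os
import urllib.request

TAVILY_URL = "https://api.tavily.com/search"  # assumed endpoint; check Tavily's docs

def build_search_request(query: str, api_key: str, max_results: int = 5) -> urllib.request.Request:
    """Build (but do not send) a Tavily search request; field names are assumptions."""
    payload = {"api_key": api_key, "query": query, "max_results": max_results}
    return urllib.request.Request(
        TAVILY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a real key set, urllib.request.urlopen(req) would perform the search:
req = build_search_request("latest FastAPI release", os.environ.get("TAVILY_API_KEY", ""))
```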

See the rest of the README below for setup, API details, and use cases!


πŸ—οΈ Project Structure


local-mind/
├── client/       # Next.js React frontend (chat UI, file upload, etc)
├── server/       # FastAPI backend, LLM, RAG, and web search agents
│   ├── app/
│   │   ├── config.py
│   │   ├── utils.py
│   │   ├── rag.py
│   │   ├── research.py
│   │   └── main.py
│   ├── data/           # Your uploaded/managed documents
│   ├── store_rag/      # Local vector store index
│   └── requirements.txt
└── README.md


✨ What is Local Mind?

Local Mind is your own personal, private, full-stack AI workspace.

  • Upload files, chat with your own knowledge base
  • Research with live web search (and get real, cited answers)
  • All locally, all private — powered by FastAPI and Next.js

πŸ–₯️ Client (Next.js)

  • Beautiful chat interface
  • File upload and management
  • Live streaming answers from LLM
  • Web search results shown in real time

The client talks to the FastAPI backend via simple HTTP endpoints.
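If the streaming endpoints emit Server-Sent Events (an assumption; the README only says answers stream over HTTP), client-side handling amounts to extracting the payload of each `data:` line. A minimal sketch:

```python
def parse_sse(raw: str) -> list[str]:
    """Extract the payload of each `data:` line from an SSE stream chunk."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(line[len("data:"):].strip())
    return events

chunk = "data: Hello\n\ndata: world\n\n"
tokens = parse_sse(chunk)  # -> ["Hello", "world"]
```

In the real Next.js client the browser's `EventSource` API does this parsing for you; the sketch just shows what arrives on the wire.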


⚑️ Server (FastAPI, LlamaIndex, LangChain, LLM)

  • Runs your local LLM (Llama.cpp)
  • Retrieval-Augmented Generation on your uploaded files
  • Web research agent for up-to-date answers
  • Automatic file watching and index updating
  • API endpoints for chat, file management, research
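The RAG step above boils down to: score your documents against the query, pick the best matches, and hand them to the LLM as context. The real server presumably uses LlamaIndex and a vector store for this, but the retrieval idea can be sketched dependency-free with bag-of-words cosine similarity:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "FastAPI is a Python web framework",
    "Next.js is a React framework",
    "GGUF is a quantized model file format",
]
top = retrieve("what is a quantized model file", docs, k=1)
```

A production setup swaps the word counts for embedding vectors, but the ranking logic is the same.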

πŸ› οΈ Use Cases

  • Personal Knowledge Base: Chat with your notes, manuals, or code docs.
  • Research Copilot: Get the best web and local insights, cited and summarized.
  • Secure Team Docs: Host on LAN, share within your org — no data ever leaves.
  • Developer Assistant: Index your codebase docs, get instant answers.
  • Academic Summaries: Ask questions to your PDFs or web research.

πŸš€ Quick Start

1️⃣ Clone the repo

git clone https://github.com/yourusername/local-mind.git
cd local-mind

2️⃣ Setup the Server

cd server
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
# Place your Llama.cpp GGUF model in this folder (e.g. jan-nano-128k-Q5_K_M.gguf)
python -m app.main
# or
uvicorn app.main:app --reload

3️⃣ Setup the Client

cd ../client
npm install
npm run dev

🌐 How it Works

  1. Upload your files (PDF, TXT, etc) via the web UI.
  2. Chat with your knowledge base. Get answers instantly.
  3. Ask web questions: The agent pulls the latest info and cites real URLs.
  4. All private: Your files and chats never leave your device.

πŸ“– API Reference

The FastAPI backend exposes endpoints such as:

  • POST /rag/upload/ — Upload documents
  • GET /rag/delete/(unknown) is listed as DELETE /rag/delete/(unknown) — Delete documents
  • GET /rag/files/ — List uploaded files
  • GET /rag/stream — Chat with your files (RAG)
  • GET /research/stream — Research agent (web search)
  • GET /health — Health check

See http://localhost:8000/docs for full details.
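A quick way to exercise the simpler GET endpoints from Python (paths taken from the list above; the server must be running for the example calls at the bottom to work):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # default uvicorn address

def get_json(url: str) -> dict:
    """GET a URL and decode the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode())

# With the server running, for example:
#   get_json(BASE + "/health")      # health status
#   get_json(BASE + "/rag/files/")  # list of uploaded files
```

For uploads, a multipart POST client such as `requests` or `httpx` is more convenient than raw `urllib`.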


πŸ’‘ Why Local Mind?

  • πŸ›‘οΈ Privacy First: 100% local, no data leaves your computer.
  • 🧠 Multi-source AI: Mixes your files and the web.
  • ⚑ Lightning Fast: No cloud lag, no rate limits.
  • 🧩 Composable: Easy to extend or add your own tools.

🀝 Contributing

Pull requests, issues, and suggestions are always welcome!

  • Want to add more LLM models? New RAG features? Better UI? Jump in!
