A Retrieval-Augmented Generation (RAG) chatbot for medical documents: upload PDFs, have their content extracted and indexed, and ask questions to get answers grounded in the source text.
Built with a glassmorphic Tailwind CSS chat UI and a FastAPI backend, supporting Groq or local LLM inference.
- 🧠 RAG-powered Q&A: Ask questions directly from uploaded PDFs.
- 📁 PDF Upload & Indexing: Automatically extract and embed content.
- 💬 Interactive Chat: Smooth typing animations for real-time conversations.
- 🌙 Dark / Light Mode Toggle: UI mode persists with localStorage.
- 💾 Chat History: Saved locally in the browser for quick access.
- 🧹 Clear Chat: Reset your conversation anytime.
- ✅ Health Status Indicator: Monitor backend connectivity.
- ⚡ Modern Tech Stack: FastAPI backend + Tailwind UI, compatible with LangChain and Groq.
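
The RAG-powered Q&A above boils down to: embed the question, score it against the pre-computed chunk embeddings, and hand the top-k chunks to the LLM as context. A minimal sketch of the retrieval step, using toy hand-written 3-dimensional vectors purely for illustration (the real app would use the embedding model chosen at indexing time):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunks, k=2):
    # chunks: list of (text, embedding) pairs built at PDF-indexing time.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy embeddings; real ones come from the indexing pipeline.
chunks = [
    ("Aspirin is an antiplatelet drug.", [0.9, 0.1, 0.0]),
    ("Insulin regulates blood glucose.", [0.1, 0.9, 0.0]),
    ("MRI uses magnetic fields.",        [0.0, 0.1, 0.9]),
]
context = retrieve([0.8, 0.2, 0.1], chunks, k=1)
```

The retrieved `context` is then prepended to the user's question in the prompt sent to Groq or the local LLM.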
- Frontend: React + Tailwind CSS + Glassmorphism design
- Backend: FastAPI + LangChain / Groq for RAG inference
- Data Storage: Browser localStorage for chat history
- PDF Processing: PDF text extraction + embedding generation
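
The PDF-processing stage above is, in essence: extract page text, split it into overlapping chunks, and embed each chunk. A minimal sketch of the chunking part, with overlap so sentences cut at a boundary still appear whole in at least one chunk (the extraction and embedding calls depend on the libraries chosen, so they appear only as comments):

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into fixed-size character chunks that overlap by
    `overlap` characters, preserving context across chunk boundaries."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

# In the real pipeline: text comes from a PDF extraction library,
# and each chunk is embedded and stored in the vector index.
chunks = chunk_text("a" * 500, size=200, overlap=50)
```

Chunk size and overlap are tuning knobs: smaller chunks give more precise retrieval, larger ones give the LLM more surrounding context per hit.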
- Clone the repo:
  `git clone https://github.com/utsab345/medical-rag-chatbot.git`
  `cd medical-rag-chatbot`
- Install dependencies:
  `pip install -r requirements.txt`
  `npm install`
- Run the backend:
  `uvicorn app.main:app --reload`
- Run the frontend:
  `npm run dev`
