# LangChain RAG Chatbot

An AI-powered question-answering chatbot built with the LangChain framework. The bot uses the Retrieval-Augmented Generation (RAG) pattern: it loads a knowledge base of text documents, splits them into overlapping chunks, embeds and stores them in a vector database (FAISS), and retrieves the most relevant context before generating an answer. A ChatGPT-style frontend is included.
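The retrieve-then-generate flow can be illustrated with a dependency-free sketch. A toy bag-of-words "embedding" and cosine similarity stand in for a real embedding model and FAISS, the generation step is omitted, and all function names here are illustrative, not the project's actual API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts. Real code would call an
    # embedding model and store the vectors in FAISS.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # Context injection: the retrieved chunks ground the LLM's answer.
    return ("Answer using only this context:\n"
            + "\n".join(context)
            + f"\n\nQuestion: {question}")

chunks = [
    "Python is used for web development and data science.",
    "FAISS is a library for vector similarity search.",
    "LangChain chains retrievers and LLMs together.",
]
print(build_prompt("What is Python used for?",
                   retrieve("What is Python used for?", chunks)))
```

In the real project, `build_prompt`'s role is played by a LangChain prompt template, and the assembled prompt is sent to the LLM to produce the final answer.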
## Project Structure

```
75-langchain-chatbot/
    main.py                 # FastAPI entry point with chat endpoint
    bot.py                  # LangChain chain construction (retriever + LLM)
    knowledge.py            # Knowledge base loader and text splitter
    config.py               # LLM model config and API key placeholder
    data/
        knowledge_base.txt  # Mock knowledge documents
    requirements.txt        # Dependencies
    index.html              # Unified ChatGPT-style frontend
    README.md               # This file
```
## Setup

Install the dependencies:

```
pip install -r requirements.txt
```

Edit `config.py` and set your OpenAI API key (or use a local model), then start the server:

```
uvicorn main:app --reload
```

Open `index.html` in your browser to start chatting.
## Example

`POST /chat`

```json
{"question": "What is Python used for?"}
```

Response:

```json
{
  "question": "What is Python used for?",
  "answer": "Python is a versatile programming language used for web development, data science, machine learning, automation, scripting, and building APIs. It is known for its readability and large ecosystem of libraries.",
  "sources": ["knowledge_base.txt:chunk_3", "knowledge_base.txt:chunk_7"],
  "response_time_ms": 1250
}
```

## Key Concepts

- RAG (Retrieval-Augmented Generation): Combines document retrieval with LLM generation for grounded answers.
- Text Splitting: Large documents are chunked into overlapping segments for better retrieval.
- Vector Store (FAISS): Embeds text chunks into vectors for semantic similarity search.
- Prompt Templates: LangChain's `PromptTemplate` structures the LLM input with context injection.
- Conversation Memory: `ConversationBufferMemory` maintains chat history across turns.
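The chunking idea behind the Text Splitting bullet can be sketched without the library. This is a simplified stand-in for LangChain's character-based splitter; the function name and parameters are illustrative, not the project's actual code:

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    # Slide a fixed-size window over the text, stepping by chunk_size - overlap
    # so each chunk shares `overlap` characters with its predecessor. The shared
    # region keeps sentences that straddle a boundary retrievable from either chunk.
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "abcdefghij" * 25  # 250 characters of sample text
chunks = split_text(doc)  # window starts at 0, 80, 160, 240 -> 4 chunks
```

Larger `overlap` values improve recall at chunk boundaries at the cost of more (and more redundant) vectors in the store.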