A generic agentic AI platform where skills are pluggable Docker containers. PropSearch — a real estate search demo — is the reference application built on top.
The platform is an agent core: an agentic loop, a skill system, layered memory, and a streaming API. Skills implement two HTTP endpoints (GET /schema, POST /execute) and are discovered automatically at startup. The core knows nothing about any skill's domain.
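The startup discovery can be sketched in a few lines. Everything here is illustrative rather than the platform's actual code — the function name, the injectable `fetch` hook, and the schema fields checked are assumptions; the real core presumably gets candidate container addresses from Docker/compose configuration:

```python
import json
from urllib.request import urlopen

def discover_skills(base_urls, fetch=None):
    """Probe each candidate container's GET /schema endpoint at startup.

    Containers that answer with a plausible tool schema are registered;
    anything else is skipped. `fetch` is injectable so the logic can be
    tested without real HTTP. (Names and schema shape are hypothetical.)
    """
    if fetch is None:
        fetch = lambda url: json.load(urlopen(url, timeout=5))
    registry = {}
    for base in base_urls:
        try:
            schema = fetch(f"{base}/schema")
        except Exception:
            continue  # container not up, or not a skill
        if "name" in schema and "parameters" in schema:
            registry[schema["name"]] = {"base_url": base, "schema": schema}
    return registry
```

Because the core only ever sees `/schema` output, adding a skill means starting a container — no core code changes.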
PropSearch is the user-facing demo. It uses Zillow's natural language search via RealtyAPI and renders property listings as interactive cards with maps.
PropSearch is a chat interface — no search bar, filters, or dropdowns. Describe what you're looking for in plain English; results appear inline in the conversation as property cards.
> find me a 3-bed house in Austin under $500k with a big yard
The agent responds token by token. Property cards — photo, address, price, beds, baths, sqft — appear directly in the chat thread, not in a separate results panel.
To refine, keep talking: the agent carries your prior filters forward and updates only what you changed. The new results appear below the previous ones in the same thread.
Clicking a property card sends a message on your behalf. The agent fetches full details and renders a richer card inline: year built, lot size, HOA fee, Zestimate. A map (OpenStreetMap) appears below, showing the property's exact location. The map only appears here because Zillow's search API doesn't return coordinates — only the detail endpoint does. A constraint that ended up feeling like a natural interaction: search to browse, click to commit.
A toggle above results switches between a photo grid and a compact list view. The preference is stored in localStorage and persists across sessions and page refreshes. After each response, three clickable suggestion chips appear based on the conversation so far — clicking one sends it as your next message.
The sidebar shows past conversations with auto-generated titles. Start a new session and the agent already knows your budget and location from last time — preferences are stored during the previous conversation and injected into context at the start of every new one. Nothing is saved explicitly by the user.
Each message hits a FastAPI endpoint and runs through an agentic loop: the loop calls an LLM, dispatches tool calls to skill containers, scores and stores results in ChromaDB, then streams tokens and structured data back to the frontend over SSE. Skills run as isolated Docker containers, which keeps the core domain-agnostic.
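The loop's shape can be sketched as an event generator. The callables stand in for LiteLLM, the containers' `POST /execute`, and the ChromaDB write; all names and the event format here are illustrative, not the platform's real API:

```python
import json

def agent_loop(user_msg, call_llm, dispatch_skill, store_result, max_steps=5):
    """One turn of the agentic loop, yielding SSE-style events.

    While the model keeps requesting tools, dispatch them and feed the
    results back; once it produces a final answer, stream it out.
    """
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.get("tool_call"):                  # model wants a skill
            call = reply["tool_call"]
            result = dispatch_skill(call["name"], call["arguments"])
            store_result(call["name"], result)      # score + persist
            messages.append({"role": "tool", "content": json.dumps(result)})
            yield {"event": "tool_result", "data": result}
        else:                                       # final answer: stream it
            for token in reply["content"].split(" "):
                yield {"event": "token", "data": token}
            return
```

The generator form maps naturally onto SSE: the endpoint iterates it and writes each event to the response stream as it arrives.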
Full details in Architecture and Design.
- current_time — returns the current time in the user's timezone (Skill #1)
- real_estate — property search and detail via Zillow/RealtyAPI (Skill #2)
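The simplest of these, current_time, shows what the two-endpoint contract amounts to. Below, the endpoints are plain handler functions that any HTTP server (FastAPI in this platform, but any language works) would route to; the schema shape follows common function-calling conventions and is an assumption, not the platform's exact wire format:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def get_schema():
    """GET /schema — describe the tool so the core can advertise it to the LLM."""
    return {
        "name": "current_time",
        "description": "Return the current time in a given IANA timezone.",
        "parameters": {
            "type": "object",
            "properties": {"timezone": {"type": "string"}},
            "required": ["timezone"],
        },
    }

def post_execute(args):
    """POST /execute — run the tool with the arguments the LLM supplied."""
    tz = ZoneInfo(args["timezone"])
    return {"time": datetime.now(tz).isoformat(), "timezone": args["timezone"]}
```

Wrap these two handlers in a web server, package it in a container image, and the core picks the skill up at startup with no changes of its own.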
| Layer | Technology |
|---|---|
| Agent core | FastAPI, LiteLLM, ChromaDB |
| Skills | Docker containers (any language) |
| Frontend | React 18, TypeScript, Vite, Tailwind CSS |
| Maps | react-leaflet, OpenStreetMap |
| Memory | ChromaDB (L2/L3), Markdown (L0/L1) |
A five-part series documenting what was built and why.
- Post 1: What I Built and Why — the motivation, the platform, and the skill system
- Post 2: The User Experience — what PropSearch looks like in use, from search to detail to memory
- Post 3: Architecture and Design — the skill system, the four-layer memory model, and the key design decisions
- Post 4: Implementation Choices — the agentic loop, streaming, the container pool, context assembly, and scoring
- Post 5: Adding a New Skill — how to implement the skill contract and plug a new skill into the platform




