19 changes: 19 additions & 0 deletions .claude-plugin/marketplace.json
@@ -0,0 +1,19 @@
{
"name": "lalit-tools",
"owner": {
"name": "Lalit Nayyar",
"email": "lalitnayyar@gmail.com"
},
"plugins": [
{
"name": "independent-reviewer",
"source": "./independent-reviewer",
"version": "1.0.0",
"description": "carry out an independent review of all changes since last commit",
"author": {
"name": "Lalit Nayyar"
}

}
]
}
13 changes: 13 additions & 0 deletions .claude/agents/change-reviwer.md
@@ -0,0 +1,13 @@
---
name: change-reviewer
description: Carry out a comprehensive review of all changes since the last commit.
---

This subagent reviews all changes since the last commit using a shell command.
IMPORTANT: Do not review the changes yourself. Instead, run the following shell command to kick off Codex. Codex is a separate AI agent that carries out the independent review.
Run this shell command:
`codex exec "Please review all changes since the last commit and write feedback to planning/REVIEW-last-commit.md"`
This will run the review process and save the results.

Do not review yourself.

10 changes: 10 additions & 0 deletions .claude/agents/codex-reviewer.md
@@ -0,0 +1,10 @@
---
name: codex-reviewer
description: carry out a comprehensive review of PLAN.md when requested using codex
---

You use a different AI agent to carry out a review of the document planning/PLAN.md.
You MUST execute the following shell command to carry out the review; do not review yourself:
`codex exec "Please review the file planning/PLAN.md and write your review in a file called planning/REVIEW-CODEX.md"`
This will run the review process and save the results.
Do not review yourself.
6 changes: 6 additions & 0 deletions .claude/agents/reviewer.md
@@ -0,0 +1,6 @@
---
name: reviewer
description: carry out a comprehensive review when requested.
---

You review the file planning/PLAN.md and write your feedback to planning/REVIEW.md. Focus on the clarity of the plan, potential issues, and suggestions for improvement.
1 change: 1 addition & 0 deletions .claude/commands/doc-review.md
@@ -0,0 +1 @@
Review the documentation file in the planning folder called $ARGUMENTS and add questions, clarifications, or feedback to a new section at the end, along with any opportunities to simplify.
3 changes: 2 additions & 1 deletion .claude/settings.json
@@ -2,6 +2,7 @@
"enabledPlugins": {
"frontend-design@claude-plugins-official": true,
"context7@claude-plugins-official": true,
"playwright@claude-plugins-official": true
"playwright@claude-plugins-official": true,
"independent-reviewer@lalit-tools": true
}
}
13 changes: 13 additions & 0 deletions .claude/settings.json_mod
@@ -0,0 +1,13 @@
{
"hooks": {
"Stop": [
{ "hooks": [
{
"type":"command",
"command": "codex exec \"Please review all changes since the last commit and write feedback to planning/REVIEW-last-commit.md\""
}
]
}
]
}
}
Empty file added .codex
Empty file.
85 changes: 45 additions & 40 deletions README.md
@@ -1,62 +1,67 @@
# FinAlly — AI Trading Workstation

A visually stunning AI-powered trading workstation that streams live market data, simulates portfolio trading, and integrates an LLM chat assistant that can analyze positions and execute trades via natural language.
An AI-powered trading workstation with live market data, a simulated portfolio, and an LLM chat assistant that can analyze positions and execute trades. Built as a capstone project for an agentic AI coding course.

Built entirely by coding agents as a capstone project for an agentic AI coding course.
## What It Does

## Features
- Streams live prices for 10 default tickers (simulated by default, real data via Massive API)
- $10,000 virtual cash to trade with — buy/sell with market orders, instant fill
- Portfolio heatmap, P&L chart, positions table, and sparkline mini-charts
- AI chat assistant (powered by Cerebras via OpenRouter) that can discuss your portfolio and execute trades

- **Live price streaming** via SSE with green/red flash animations
- **Simulated portfolio** — $10k virtual cash, market orders, instant fills
- **Portfolio visualizations** — heatmap (treemap), P&L chart, positions table
- **AI chat assistant** — analyzes holdings, suggests and auto-executes trades
- **Watchlist management** — track tickers manually or via AI
- **Dark terminal aesthetic** — Bloomberg-inspired, data-dense layout
## Stack

## Architecture
- **Frontend**: Next.js (TypeScript, static export), Tailwind CSS
- **Backend**: FastAPI (Python, managed by `uv`), SQLite
- **Real-time**: Server-Sent Events (SSE)
- **AI**: LiteLLM → OpenRouter → Cerebras (`openai/gpt-oss-120b`)
- **Deployment**: Single Docker container on port 8000

Single Docker container serving everything on port 8000:
## Setup

- **Frontend**: Next.js (static export) with TypeScript and Tailwind CSS
- **Backend**: FastAPI (Python/uv) with SSE streaming
- **Database**: SQLite with lazy initialization
- **AI**: LiteLLM → OpenRouter (Cerebras inference) with structured outputs
- **Market data**: Built-in GBM simulator (default) or Massive API (optional)
1. Copy `.env.example` to `.env` and add your API key:
```
OPENROUTER_API_KEY=your-key-here
```

## Quick Start
2. Start:
```bash
# macOS/Linux
./scripts/start_mac.sh

```bash
# Clone and configure
cp .env.example .env
# Add your OPENROUTER_API_KEY to .env

# Run with Docker
docker build -t finally .
docker run -v finally-data:/app/db -p 8000:8000 --env-file .env finally
# Windows
.\scripts\start_windows.ps1
```

# Open http://localhost:8000
```
3. Open `http://localhost:8000`

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | Yes | OpenRouter API key for AI chat |
| `MASSIVE_API_KEY` | No | Massive (Polygon.io) key for real market data; omit to use simulator |
| `LLM_MOCK` | No | Set `true` for deterministic mock LLM responses (testing) |
| `OPENROUTER_API_KEY` | Yes | OpenRouter key for LLM chat |
| `MASSIVE_API_KEY` | No | Real market data (omit to use simulator) |
| `LLM_MOCK` | No | Set `true` for deterministic mock responses (testing) |

## Project Structure
## Development

```bash
cd backend
uv venv && uv pip install -e .
uv run uvicorn app.main:app --reload --port 8000
```
finally/
├── frontend/ # Next.js static export
├── backend/ # FastAPI uv project
├── planning/ # Project documentation and agent contracts
├── test/ # Playwright E2E tests
├── db/ # SQLite volume mount (runtime)
└── scripts/ # Start/stop helpers

```bash
cd frontend
npm install && npm run dev
```

## License
## Testing

```bash
# Backend unit tests
cd backend && uv run pytest tests/ -v

See [LICENSE](LICENSE).
# E2E tests (requires Docker)
cd test && docker compose -f docker-compose.test.yml up
```
5 changes: 5 additions & 0 deletions independent-reviewer/.claude-plugin/plugin.json
@@ -0,0 +1,5 @@
{
"name": "independent-reviewer",
"version": "0.1.0",
"description": "carry out an independent review of all changes since last comit"
}
14 changes: 14 additions & 0 deletions independent-reviewer/hooks/hooks.json
@@ -0,0 +1,14 @@
{
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "codex exec \"Please review all changes since the last commit and write feedback to planning/REVIEW-last-commit.md\""
}
]
}
]
}
}
62 changes: 35 additions & 27 deletions planning/PLAN.md
@@ -101,7 +101,6 @@ finally/
├── db/ # Volume mount target (SQLite file lives here at runtime)
│ └── .gitkeep # Directory exists in repo; finally.db is gitignored
├── Dockerfile # Multi-stage build (Node → Python)
├── docker-compose.yml # Optional convenience wrapper
├── .env # Environment variables (gitignored, .env.example committed)
└── .gitignore
```
@@ -171,11 +170,15 @@ Both the simulator and the Massive client implement the same abstract interface.
- SSE streams read from this cache and push updates to connected clients
- This architecture supports future multi-user scenarios without changes to the data layer

### Price History Cache

The in-memory price cache maintains a rolling history of the last 200 prices per ticker (in addition to the current price). This powers the main chart area and can bootstrap sparklines on page load via `/api/prices/{ticker}/history`, eliminating the need for the frontend to accumulate chart data purely from the SSE stream.
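A minimal sketch of such a cache, using a `deque` with `maxlen=200` to get the rolling window for free (class and field names are assumptions, not the actual implementation):

```python
from collections import defaultdict, deque

MAX_HISTORY = 200  # rolling window size from the plan

class PriceCache:
    """In-memory price store: current price plus the last 200 points per ticker."""

    def __init__(self):
        # deque(maxlen=...) silently discards the oldest point on overflow
        self._history = defaultdict(lambda: deque(maxlen=MAX_HISTORY))

    def update(self, ticker: str, price: float, ts: str) -> None:
        self._history[ticker].append({"price": price, "ts": ts})

    def current(self, ticker: str):
        hist = self._history.get(ticker)  # .get avoids creating an empty entry
        return hist[-1] if hist else None

    def history(self, ticker: str) -> list:
        # Backs GET /api/prices/{ticker}/history
        return list(self._history.get(ticker, ()))
```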

### SSE Streaming

- Endpoint: `GET /api/stream/prices`
- Long-lived SSE connection; client uses native `EventSource` API
- Server pushes price updates for all tickers known to the system at a regular cadence (~500ms) — in the single-user model this is equivalent to the user's watchlist
- Server pushes price updates for all tickers in the watchlist table at a regular cadence (~500ms)
- Each SSE event contains ticker, price, previous price, timestamp, and change direction
- Client handles reconnection automatically (EventSource has built-in retry)
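The event payload described above could be framed roughly like this (field names are assumptions; SSE frames are `data: <payload>` terminated by a blank line):

```python
import json

def sse_price_event(ticker: str, price: float, prev_price: float, ts: str) -> str:
    """Format one price update as an SSE data frame."""
    if price > prev_price:
        direction = "up"
    elif price < prev_price:
        direction = "down"
    else:
        direction = "flat"
    payload = {
        "ticker": ticker,
        "price": price,
        "prev_price": prev_price,
        "timestamp": ts,
        "direction": direction,
    }
    # The trailing blank line marks the end of the event for EventSource clients
    return f"data: {json.dumps(payload)}\n\n"
```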

@@ -193,7 +196,7 @@ The backend checks for the SQLite database on startup (or first request). If the

### Schema

All tables include a `user_id` column defaulting to `"default"`. This is hardcoded for now (single-user) but enables future multi-user support without schema migration.
This is a single-user app. No `user_id` columns — they add complexity without value at this stage.

**users_profile** — User state (cash balance)
- `id` TEXT PRIMARY KEY (default: `"default"`)
@@ -202,47 +205,36 @@ All tables include a `user_id` column defaulting to `"default"`. This is hardcod

**watchlist** — Tickers the user is watching
- `id` TEXT PRIMARY KEY (UUID)
- `user_id` TEXT (default: `"default"`)
- `ticker` TEXT
- `ticker` TEXT UNIQUE
- `added_at` TEXT (ISO timestamp)
- UNIQUE constraint on `(user_id, ticker)`

**positions** — Current holdings (one row per ticker per user)
**positions** — Current holdings (one row per ticker)
- `id` TEXT PRIMARY KEY (UUID)
- `user_id` TEXT (default: `"default"`)
- `ticker` TEXT
- `ticker` TEXT UNIQUE
- `quantity` REAL (fractional shares supported)
- `avg_cost` REAL
- `updated_at` TEXT (ISO timestamp)
- UNIQUE constraint on `(user_id, ticker)`

**trades** — Trade history (append-only log)
- `id` TEXT PRIMARY KEY (UUID)
- `user_id` TEXT (default: `"default"`)
- `ticker` TEXT
- `side` TEXT (`"buy"` or `"sell"`)
- `quantity` REAL (fractional shares supported)
- `price` REAL
- `executed_at` TEXT (ISO timestamp)

**portfolio_snapshots** — Portfolio value over time (for P&L chart). Recorded every 30 seconds by a background task, and immediately after each trade execution.
**portfolio_snapshots** — Portfolio value over time (for P&L chart). Recorded immediately after each trade execution and on backend startup.
- `id` TEXT PRIMARY KEY (UUID)
- `user_id` TEXT (default: `"default"`)
- `total_value` REAL
- `recorded_at` TEXT (ISO timestamp)

**chat_messages** — Conversation history with LLM
- `id` TEXT PRIMARY KEY (UUID)
- `user_id` TEXT (default: `"default"`)
- `role` TEXT (`"user"` or `"assistant"`)
- `content` TEXT
- `actions` TEXT (JSON — trades executed, watchlist changes made; null for user messages)
- `created_at` TEXT (ISO timestamp)
Chat conversation history is held in memory (lost on restart). No DB table needed.

### Default Seed Data

- One user profile: `id="default"`, `cash_balance=10000.0`
- Ten watchlist entries: AAPL, GOOGL, MSFT, AMZN, TSLA, NVDA, META, JPM, V, NFLX
- One initial portfolio snapshot recording $10,000 at startup time
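The lazy-init-plus-seed behavior could look like the following sketch (column set abbreviated; `INSERT OR IGNORE` plus the `UNIQUE`/primary-key constraints make re-running it harmless):

```python
import sqlite3
import uuid

DEFAULT_TICKERS = ["AAPL", "GOOGL", "MSFT", "AMZN", "TSLA",
                   "NVDA", "META", "JPM", "V", "NFLX"]

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the schema if missing and seed default data (abbreviated sketch)."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS users_profile (
            id TEXT PRIMARY KEY,
            cash_balance REAL
        );
        CREATE TABLE IF NOT EXISTS watchlist (
            id TEXT PRIMARY KEY,
            ticker TEXT UNIQUE,
            added_at TEXT
        );
    """)
    # Seed: default profile with $10,000 and the ten default tickers
    con.execute("INSERT OR IGNORE INTO users_profile VALUES ('default', 10000.0)")
    for t in DEFAULT_TICKERS:
        con.execute(
            "INSERT OR IGNORE INTO watchlist VALUES (?, ?, datetime('now'))",
            (str(uuid.uuid4()), t),
        )
    con.commit()
    return con
```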

---

@@ -252,6 +244,7 @@
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/stream/prices` | SSE stream of live price updates |
| GET | `/api/prices/{ticker}/history` | Recent price history for a ticker (from in-memory cache, last 200 points) |

### Portfolio
| Method | Path | Description |
@@ -264,7 +257,7 @@
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/watchlist` | Current watchlist tickers with latest prices |
| POST | `/api/watchlist` | Add a ticker: `{ticker}` |
| POST | `/api/watchlist` | Add a ticker: `{ticker}`. Returns 200 if already in watchlist (idempotent). |
| DELETE | `/api/watchlist/{ticker}` | Remove a ticker |

### Chat
@@ -283,21 +276,33 @@

When writing code to make calls to LLMs, use cerebras-inference skill to use LiteLLM via OpenRouter to the `openrouter/openai/gpt-oss-120b` model with Cerebras as the inference provider. Structured Outputs should be used to interpret the results.

Note: `openrouter/openai/gpt-oss-120b` is the LiteLLM model string (not a bare OpenRouter slug). Cerebras is selected via `extra_body={"provider": {"order": ["cerebras"]}}` as shown in the cerebras-inference skill.

There is an OPENROUTER_API_KEY in the .env file in the project root.
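Assembling that call might look like the sketch below; the model string and `extra_body` provider pin come from the plan, while the `response_format` shape is an assumption about how structured outputs get requested (defer to the cerebras-inference skill for the authoritative form):

```python
import os

def build_chat_request(messages: list) -> dict:
    """Build kwargs for litellm.completion() targeting Cerebras via OpenRouter."""
    return {
        # LiteLLM model string, not a bare OpenRouter slug
        "model": "openrouter/openai/gpt-oss-120b",
        "messages": messages,
        "api_key": os.environ.get("OPENROUTER_API_KEY"),
        # Pin Cerebras as the inference provider, per the plan
        "extra_body": {"provider": {"order": ["cerebras"]}},
        # Assumed structured-output request shape
        "response_format": {"type": "json_object"},
    }
```

The result would be passed straight through as `litellm.completion(**build_chat_request(msgs))`.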

### How It Works

Conversation history is kept in an in-memory list on the backend (cleared on restart). No database table is needed.

When the user sends a chat message, the backend:

1. Loads the user's current portfolio context (cash, positions with P&L, watchlist with live prices, total portfolio value)
2. Loads recent conversation history from the `chat_messages` table
2. Takes the last 20 messages from the in-memory conversation history
3. Constructs a prompt with a system message, portfolio context, conversation history, and the user's new message
4. Calls the LLM via LiteLLM → OpenRouter, requesting structured output, using the cerebras-inference skill
5. Parses the complete structured JSON response
6. Auto-executes any trades or watchlist changes specified in the response
7. Stores the message and executed actions in `chat_messages`
7. Appends both the user message and assistant response to the in-memory conversation list
8. Returns the complete JSON response to the frontend (no token-by-token streaming — Cerebras inference is fast enough that a loading indicator is sufficient)

### Error Handling

If the LLM call fails (network error, rate limit, malformed response), the backend returns HTTP 200 with:
```json
{"message": "Sorry, I'm having trouble connecting right now. Please try again in a moment.", "trades": [], "watchlist_changes": []}
```
The frontend treats this identically to a normal response — no special error path needed.
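A sketch of that degrade-gracefully wrapper (the `call_llm` parameter is a stand-in for whatever function performs the LiteLLM call and parses the structured response):

```python
FALLBACK = {
    "message": "Sorry, I'm having trouble connecting right now. "
               "Please try again in a moment.",
    "trades": [],
    "watchlist_changes": [],
}

def safe_chat(call_llm, prompt):
    """Return the parsed LLM response, or the fallback payload on any failure."""
    try:
        return call_llm(prompt)
    except Exception:
        # Network error, rate limit, or malformed response: still return a
        # normal-shaped payload (HTTP 200), so the frontend needs no error path.
        return dict(FALLBACK)
```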

### Structured Output Schema

The LLM is instructed to respond with JSON matching this schema:
@@ -352,12 +357,12 @@ When `LLM_MOCK=true`, the backend returns deterministic mock responses instead o

The frontend is a single-page application with a dense, terminal-inspired layout. The specific component architecture and layout system is up to the Frontend Engineer, but the UI should include these elements:

- **Watchlist panel** — grid/table of watched tickers with: ticker symbol, current price (flashing green/red on change), daily change %, and a sparkline mini-chart (accumulated from SSE since page load)
- **Main chart area** — larger chart for the currently selected ticker, with at minimum price over time. Clicking a ticker in the watchlist selects it here.
- **Watchlist panel** — grid/table of watched tickers with: ticker symbol, current price (flashing green/red on change), daily change %, and a sparkline mini-chart. Sparklines bootstrap from `/api/prices/{ticker}/history` on load and continue accumulating from the SSE stream. An empty/loading state for the first few seconds before the first SSE event is acceptable.
- **Main chart area** — larger chart for the currently selected ticker. Bootstraps from `/api/prices/{ticker}/history` and continues updating via SSE. Clicking a ticker in the watchlist selects it here.
- **Portfolio heatmap** — treemap visualization where each rectangle is a position, sized by portfolio weight, colored by P&L (green = profit, red = loss)
- **P&L chart** — line chart showing total portfolio value over time, using data from `portfolio_snapshots`
- **P&L chart** — line chart showing total portfolio value over time, using data from `GET /api/portfolio/history`
- **Positions table** — tabular view of all positions: ticker, quantity, avg cost, current price, unrealized P&L, % change
- **Trade bar** — simple input area: ticker field, quantity field, buy button, sell button. Market orders, instant fill.
- **Trade bar** — simple input area: ticker field, quantity field (fractional shares supported, e.g. 0.5), buy button, sell button. Market orders, instant fill.
- **AI chat panel** — docked/collapsible sidebar. Message input, scrolling conversation history, loading indicator while waiting for LLM response. Trade executions and watchlist changes shown inline as confirmations.
- **Header** — portfolio total value (updating live), connection status indicator, cash balance

@@ -375,6 +380,8 @@

### Multi-Stage Dockerfile

The Next.js frontend is built as a static export (`output: 'export'`). This is intentional and permanent — no SSR, no Node.js runtime, no Next.js API routes. All dynamic behavior goes through FastAPI endpoints.

```
Stage 1: Node 20 slim
- Copy frontend/
@@ -454,3 +461,4 @@ The container is designed to deploy to AWS App Runner, Render, or any container
- Portfolio visualization: heatmap renders with correct colors, P&L chart has data points
- AI chat (mocked): send a message, receive a response, trade execution appears inline
- SSE resilience: disconnect and verify reconnection
