21 changes: 21 additions & 0 deletions .claude-plugin/marketplace.json
@@ -0,0 +1,21 @@
{
  "name": "steve-tools",
  "owner": {
    "name": "Steve",
    "email": "steve@example.com"
  },
  "plugins": [
    {
      "name": "independent-reviewer",
      "version": "1.0.0",
      "source": "./independent-reviewer",
      "description": "Carry out an independent review of all changes since last commit.",
      "author": {
        "name": "Steve",
        "email": "steve@example.com"
      }
    }
  ]
}
1 change: 1 addition & 0 deletions .claude/commands/doc-review.md
@@ -0,0 +1 @@
Review the documentation file in the planning folder called $ARGUMENTS and add a new section at the end containing questions, clarifications, and feedback, along with any opportunities to simplify.
22 changes: 9 additions & 13 deletions .claude/skills/cerebras/SKILL.md
@@ -1,43 +1,39 @@
---
name: cerebras-inference
description: Use this to write code to call an LLM using LiteLLM and OpenRouter with the Cerebras inference provider
name: Cerebras Inference
description: Use this to write code to call an LLM using LiteLLM with the direct Cerebras API
---

# Calling an LLM via Cerebras

These instructions allow you write code to call an LLM with Cerebras specified as the inference provider.
This method uses LiteLLM and OpenRouter.
These instructions allow you to write code to call an LLM using the direct Cerebras API via LiteLLM.

## Setup

The OPENROUTER_API_KEY must be set in the .env file and loaded in as an environment variable.
The `CEREBRAS_API_KEY` must be set in the `.env` file and loaded as an environment variable.

The uv project must include litellm and pydantic.
`uv add litellm pydantic`

## Code snippets

Use code like the following examples to call Cerebras.

### Imports and constants

```python
from litellm import completion
MODEL = "openrouter/openai/gpt-oss-120b"
EXTRA_BODY = {"provider": {"order": ["cerebras"]}}
MODEL = "cerebras/qwen-3-235b-a22b-instruct-2507"
```

### Code to call via Cerebras for a text response
### Code to call Cerebras for a text response

```python
response = completion(model=MODEL, messages=messages, reasoning_effort="low", extra_body=EXTRA_BODY)
response = completion(model=MODEL, messages=messages)
result = response.choices[0].message.content
```

### Code to call via Cerebras for a Structured Outputs response
### Code to call Cerebras for a Structured Outputs response

```python
response = completion(model=MODEL, messages=messages, response_format=MyBaseModelSubclass, reasoning_effort="low", extra_body=EXTRA_BODY)
response = completion(model=MODEL, messages=messages, response_format=MyBaseModelSubclass)
result = response.choices[0].message.content
result_as_object = MyBaseModelSubclass.model_validate_json(result)
```
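The `model_validate_json` step in the structured-outputs snippet is worth a quick illustration. Below is a minimal sketch using a hypothetical `Sentiment` model and a hand-written JSON string standing in for `response.choices[0].message.content`; no API call is made.

```python
from pydantic import BaseModel


class Sentiment(BaseModel):
    """Hypothetical schema passed as response_format in the snippet above."""
    label: str
    confidence: float


# Stand-in for the structured-output response body the model would return.
raw = '{"label": "positive", "confidence": 0.97}'

# Same parsing step as in the snippet above.
result_as_object = Sentiment.model_validate_json(raw)
```

Because the model is constrained by `response_format`, the returned JSON is expected to match the schema, so this parsing step should not raise in practice.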
108 changes: 72 additions & 36 deletions README.md
@@ -1,62 +1,98 @@
# FinAlly — AI Trading Workstation

A visually stunning AI-powered trading workstation that streams live market data, simulates portfolio trading, and integrates an LLM chat assistant that can analyze positions and execute trades via natural language.
An AI-powered trading workstation that streams live market data, supports simulated portfolio trading, and includes an LLM chat assistant that can analyze positions and execute trades on your behalf. Built to look and feel like a Bloomberg terminal with an AI copilot.

Built entirely by coding agents as a capstone project for an agentic AI coding course.
> **Course project**: Built entirely by orchestrated AI coding agents, demonstrating how agentic AI can produce a production-quality full-stack application.

## Features
---

- **Live price streaming** via SSE with green/red flash animations
- **Simulated portfolio** — $10k virtual cash, market orders, instant fills
- **Portfolio visualizations** — heatmap (treemap), P&L chart, positions table
- **AI chat assistant** — analyzes holdings, suggests and auto-executes trades
- **Watchlist management** — track tickers manually or via AI
- **Dark terminal aesthetic** — Bloomberg-inspired, data-dense layout
## What You Get

## Architecture

Single Docker container serving everything on port 8000:
- **Live price streaming** — prices flash green/red on every tick via SSE
- **Sparkline mini-charts** — per-ticker price action accumulated from the live stream
- **Buy/sell shares** — market orders, instant fill, no fees
- **Portfolio heatmap** — treemap sized by weight, colored by P&L
- **P&L chart** — total portfolio value over time, live-updating
- **Positions table** — quantity, avg cost, current price, unrealized P&L
- **AI chat** — ask questions, get analysis, have the AI execute trades and manage your watchlist
- **$10,000 virtual cash** — no login, no signup, start immediately

- **Frontend**: Next.js (static export) with TypeScript and Tailwind CSS
- **Backend**: FastAPI (Python/uv) with SSE streaming
- **Database**: SQLite with lazy initialization
- **AI**: LiteLLM → OpenRouter (Cerebras inference) with structured outputs
- **Market data**: Built-in GBM simulator (default) or Massive API (optional)
---

## Quick Start

```bash
# Clone and configure
# Copy and edit environment variables
cp .env.example .env
# Add your OPENROUTER_API_KEY to .env
# Add your CEREBRAS_API_KEY to .env

# Run with Docker
docker build -t finally .
docker run -v finally-data:/app/db -p 8000:8000 --env-file .env finally
# Start (macOS/Linux)
./scripts/start_mac.sh

# Open http://localhost:8000
# Start (Windows)
./scripts/start_windows.ps1
```

Open [http://localhost:8000](http://localhost:8000).

---

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | Yes | OpenRouter API key for AI chat |
| `MASSIVE_API_KEY` | No | Massive (Polygon.io) key for real market data; omit to use simulator |
| `LLM_MOCK` | No | Set `true` for deterministic mock LLM responses (testing) |
|----------|----------|-------------|
| `CEREBRAS_API_KEY` | Yes | LLM inference via Cerebras |
| `MASSIVE_API_KEY` | No | Real market data via Massive/Polygon. Uses simulator if absent. |
| `LLM_MOCK` | No | Set `true` for deterministic mock LLM responses (testing/CI) |
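A minimal `.env` matching the table above (the values shown are placeholders, not real keys):

```bash
CEREBRAS_API_KEY=your-cerebras-key
# Optional:
# MASSIVE_API_KEY=your-massive-key
# LLM_MOCK=true
```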

## Project Structure
---

## Architecture

Single Docker container, single port (8000):

- **Frontend**: Next.js (TypeScript), static export served by FastAPI
- **Backend**: FastAPI (Python/uv)
- **Database**: SQLite — zero config, auto-initialized on first run
- **Real-time**: Server-Sent Events (SSE)
- **AI**: LiteLLM → Cerebras (`qwen-3-235b-a22b-instruct-2507`)
- **Market data**: Built-in GBM simulator (default) or Massive REST API

```
finally/
├── frontend/ # Next.js static export
├── backend/ # FastAPI uv project
├── planning/ # Project documentation and agent contracts
├── test/ # Playwright E2E tests
├── db/ # SQLite volume mount (runtime)
└── scripts/ # Start/stop helpers
├── frontend/ # Next.js TypeScript project
├── backend/ # FastAPI uv project
├── planning/ # Project documentation
├── scripts/ # Start/stop scripts
├── test/ # Playwright E2E tests
├── db/ # SQLite volume mount (runtime only)
├── Dockerfile
└── docker-compose.yml
```

## License
---

## Development

See [LICENSE](LICENSE).
```bash
# Backend (requires uv)
cd backend
uv run uvicorn app.main:app --reload --port 8000

# Frontend
cd frontend
npm install
npm run dev
```

---

## Testing

```bash
# Backend unit tests
cd backend && uv run pytest

# E2E tests (requires Docker)
cd test && docker compose -f docker-compose.test.yml up --abort-on-container-exit
```
4 changes: 3 additions & 1 deletion backend/app/market/cache.py
@@ -60,11 +60,13 @@ def remove(self, ticker: str) -> None:
"""Remove a ticker from the cache (e.g., when removed from watchlist)."""
with self._lock:
self._prices.pop(ticker, None)
self._version += 1

@property
def version(self) -> int:
"""Current version counter. Useful for SSE change detection."""
return self._version
with self._lock:
return self._version
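The change above closes a small race: reading `_version` without holding the lock can race with concurrent writers, so a version check could be inconsistent with the state of `_prices`. A stripped-down sketch of the pattern, with class and attribute names assumed from the diff context:

```python
import threading


class VersionedCache:
    """Minimal sketch of a lock-protected price cache with a change counter."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._prices: dict[str, float] = {}
        self._version = 0

    def set(self, ticker: str, price: float) -> None:
        with self._lock:
            self._prices[ticker] = price
            self._version += 1

    def remove(self, ticker: str) -> None:
        with self._lock:
            self._prices.pop(ticker, None)
            self._version += 1  # a removal is also a visible change

    @property
    def version(self) -> int:
        # Acquire the lock so the read pairs with the writes above,
        # keeping version checks consistent with cache contents.
        with self._lock:
            return self._version
```

An SSE loop can then poll `version` cheaply and only serialize the full price map when the counter has moved.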

def __len__(self) -> int:
with self._lock:
9 changes: 6 additions & 3 deletions backend/app/market/simulator.py
Expand Up @@ -144,9 +144,10 @@ def get_tickers(self) -> list[str]:
# --- Internals ---

def _add_ticker_internal(self, ticker: str) -> None:
"""Add a ticker without rebuilding Cholesky (for batch initialization)."""
if ticker in self._prices:
return
"""Add a ticker without rebuilding Cholesky (for batch initialization).

Assumes no duplicates — callers must check before calling.
"""
self._tickers.append(ticker)
self._prices[ticker] = SEED_PRICES.get(ticker, random.uniform(50.0, 300.0))
self._params[ticker] = TICKER_PARAMS.get(ticker, dict(DEFAULT_PARAMS))
@@ -240,6 +241,7 @@ async def stop(self) -> None:
logger.info("Simulator stopped")

async def add_ticker(self, ticker: str) -> None:
"""Add a ticker. Must be called after start()."""
if self._sim:
self._sim.add_ticker(ticker)
# Seed cache immediately so the ticker has a price right away
@@ -249,6 +251,7 @@ async def add_ticker(self, ticker: str) -> None:
logger.info("Simulator: added ticker %s", ticker)

async def remove_ticker(self, ticker: str) -> None:
"""Remove a ticker. Must be called after start()."""
if self._sim:
self._sim.remove_ticker(ticker)
self._cache.remove(ticker)
5 changes: 2 additions & 3 deletions backend/app/market/stream.py
Expand Up @@ -14,14 +14,13 @@

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/stream", tags=["streaming"])


def create_stream_router(price_cache: PriceCache) -> APIRouter:
"""Create the SSE streaming router with a reference to the price cache.

This factory pattern lets us inject the PriceCache without globals.
Returns a fresh APIRouter each call — safe to call multiple times (e.g., in tests).
"""
router = APIRouter(prefix="/api/stream", tags=["streaming"])

@router.get("/prices")
async def stream_prices(request: Request) -> StreamingResponse:
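The fix moves the `APIRouter` construction inside the factory, so each call returns a fresh router instead of registering routes onto a shared module-level instance. The shape of the pattern, reduced to plain Python with an illustrative `Router` stand-in:

```python
class Router:
    """Stand-in for APIRouter: just records registered route paths."""

    def __init__(self) -> None:
        self.routes: list[str] = []


def create_stream_router(price_cache: dict) -> Router:
    # Build a fresh router per call; price_cache is captured by closure,
    # so no module-level global is needed.
    router = Router()

    def stream_prices() -> dict:
        return dict(price_cache)

    router.routes.append("/api/stream/prices")
    return router


a = create_stream_router({"AAPL": 190.0})
b = create_stream_router({"AAPL": 190.0})
```

With the old module-level router, calling the factory twice (for example from two test fixtures) would have doubled up routes on the shared instance; here `a` and `b` are independent, each with exactly one route.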
1 change: 1 addition & 0 deletions backend/pyproject.toml
@@ -18,6 +18,7 @@ dev = [
"pytest-asyncio>=0.24.0",
"pytest-cov>=5.0.0",
"ruff>=0.7.0",
"httpx>=0.27.0",
]

[build-system]
21 changes: 20 additions & 1 deletion backend/tests/market/test_simulator.py
@@ -1,6 +1,6 @@
"""Tests for GBMSimulator."""

from app.market.seed_prices import SEED_PRICES
from app.market.seed_prices import SEED_PRICES, TICKER_PARAMS
from app.market.simulator import GBMSimulator


@@ -116,6 +116,25 @@ def test_pairwise_correlation_cross_sector(self):
corr = GBMSimulator._pairwise_correlation("AAPL", "JPM")
assert corr == 0.3

def test_all_ten_default_tickers(self):
"""GBMSimulator initializes cleanly with all 10 default tickers.

Verifies that the 10x10 correlation matrix produces a valid Cholesky
decomposition (i.e., is positive-definite) and that step() returns
correct prices for every ticker.
"""
all_tickers = list(TICKER_PARAMS.keys())
assert len(all_tickers) == 10

sim = GBMSimulator(tickers=all_tickers)
assert sim._cholesky is not None
assert sim._cholesky.shape == (10, 10)

result = sim.step()
assert set(result.keys()) == set(all_tickers)
for ticker, price in result.items():
assert price > 0, f"{ticker} price should be positive"

def test_default_dt_is_reasonable(self):
"""Test that default dt is a reasonable small value."""
assert 0 < GBMSimulator.DEFAULT_DT < 0.0001
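The positive-definiteness requirement the new test exercises can be seen on a toy 2x2 correlation matrix: a Cholesky factor exists only when |rho| < 1. A dependency-free sketch (the real simulator presumably factors its 10x10 matrix with a numerical library):

```python
import math


def cholesky_2x2(rho: float) -> list[list[float]]:
    """Cholesky factor L of [[1, rho], [rho, 1]], so that L @ L.T recovers it.

    Raises ValueError when the matrix is not positive-definite (|rho| >= 1),
    which is the failure mode the 10x10 test guards against.
    """
    if abs(rho) >= 1.0:
        raise ValueError("correlation matrix is not positive-definite")
    # Closed form for the 2x2 case: L = [[1, 0], [rho, sqrt(1 - rho^2)]]
    return [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]


# Multiplying a vector of independent normal draws by L yields draws whose
# correlation is rho -- the same trick a correlated GBM step uses.
L = cholesky_2x2(0.3)
```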