Releases: chaliy/llmsim

v0.2.3

20 Mar 22:07
1bdad58

Highlights

  • WebSocket mode for Responses API streaming
  • OpenAI thinking/reasoning emulation support
  • Fixed repository URLs for crates.io listing

What's Changed

  • chore: routine maintenance - update deps and align specs (#30) by @chaliy
  • fix: correct repository URLs for crates.io listing (#29) by @chaliy
  • chore: add attribution settings and agent guidance for commits/PRs (#28) by @chaliy
  • feat: add /ship command for full shipping workflow (#27) by @chaliy
  • feat(api): add WebSocket mode for Responses API (#26) by @chaliy
  • feat(api): add OpenAI thinking/reasoning emulation (#25) by @chaliy

Full Changelog: v0.2.2...v0.2.3

v0.2.2

08 Feb 18:37
1a6635e

Highlights

  • Updated dependencies and model profiles for routine maintenance
  • Fixed CI release workflow to call publish workflow directly

What's Changed

  • chore: routine maintenance - update deps, models, and specs (#23) by @chaliy
  • fix(ci): call publish workflow directly from release workflow (#22) by @chaliy

Full Changelog: v0.2.1...v0.2.2

v0.2.1

08 Feb 05:07
3cc7f58

Highlights

  • New model profiles: Claude Opus 4.6, GPT-5.3 Codex

What's Changed

  • docs: adopt changelog format with highlights and full changelog link (#21) by @chaliy
  • fix(ci): fix failing build and add CI merge policy (#20) by @chaliy
  • feat(models): add Claude Opus 4.6, GPT-5.3 Codex, and update model profiles (#17) by @chaliy
  • fix(ci): wrap if condition in expression syntax for YAML parsing (#16) by @chaliy

Full Changelog: v0.2.0...v0.2.1

v0.2.0

18 Jan 00:27
738c32e

Breaking Changes

  • API endpoints now require a provider prefix: all provider-specific API endpoints are now prefixed with the provider name. This change improves multi-provider support and SDK compatibility.
    • /v1/chat/completions → /openai/v1/chat/completions
    • /v1/responses → /openai/v1/responses
    • /v1/models → /openai/v1/models
    • When using official SDKs, configure the base URL with the provider prefix:
      # OpenAI Python SDK
      client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-needed")
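The migration for the routes above amounts to prepending the provider segment to each pre-0.2.0 path. A minimal sketch of that mapping (the helper name and the `"openai"` default are illustrative, not part of the library):

```python
# Hypothetical helper illustrating the v0.2.0 route change:
# provider-specific endpoints gain a provider prefix.
def prefixed_route(path: str, provider: str = "openai") -> str:
    """Map a pre-0.2.0 route such as "/v1/chat/completions"
    to its provider-prefixed form, e.g. "/openai/v1/chat/completions"."""
    return f"/{provider}{path}"

print(prefixed_route("/v1/chat/completions"))  # → /openai/v1/chat/completions
print(prefixed_route("/v1/responses"))         # → /openai/v1/responses
print(prefixed_route("/v1/models"))            # → /openai/v1/models
```

In practice you rarely need to build paths by hand: setting the SDK's base URL to the prefixed root, as in the snippet above, makes the client generate the correct routes automatically.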

What's Changed

  • feat(models): Add realistic model profiles from models.dev (#12) by @chaliy
  • feat(api): add OpenResponses API provider with provider-namespaced routes (#11) by @chaliy
  • docs: improve README and AGENTS.md documentation (#10) by @chaliy
  • Add OpenAI responses endpoint support (#6) by @chaliy
  • Add load and stress testing benchmarks (#7) by @chaliy
  • Add real-time stats display to console UI (#5) by @chaliy
  • Create simple JavaScript AI SDK example (#4) by @chaliy
  • Add usage examples for Rust and Python (#2) by @chaliy
  • Create README, license, and contribution files (#3) by @chaliy
  • LLMSim Library and Server (#1) by @chaliy

0.1.0

10 Jan 04:17
17da7f8

What's Changed

  • Create README, license, and contribution files by @chaliy in #3
  • Add usage examples for Rust and Python by @chaliy in #2
  • Create simple JavaScript AI SDK example by @chaliy in #4
  • Add real-time stats display to console UI by @chaliy in #5
  • Add load and stress testing benchmarks by @chaliy in #7
  • Add OpenAI responses endpoint support by @chaliy in #6
  • ci: add crates.io publishing workflow by @chaliy in #9
  • docs: improve README and AGENTS.md documentation by @chaliy in #10

Full Changelog: https://github.com/chaliy/llmsim/commits/0.1.0