feat: add MiniMax as LLM provider with M2.7 default#1568

Open
octo-patch wants to merge 2 commits into microsoft:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch commented Mar 15, 2026

Summary

Add MiniMax as a new LLM provider for JARVIS/HuggingGPT, alongside the existing OpenAI and Azure OpenAI options. MiniMax exposes an OpenAI-compatible chat completions API, with models offering a context length of up to 204K tokens.

Changes

  • Provider detection (awesome_chat.py): Added MiniMax to the provider priority chain (local > azure > minimax > openai) with proper API endpoint construction, key resolution (config file or MINIMAX_API_KEY env var), and temperature clamping
  • Token configuration (get_token_ids.py): Registered MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, and MiniMax-M2.5-highspeed with cl100k_base encoding and 204K context window
  • Ready-to-use config (config.minimax.yaml): Pre-configured YAML with MiniMax-M2.7 as default model
  • Documentation (README.md): Added MiniMax as a supported LLM provider with setup instructions
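The provider priority chain and key resolution described above can be sketched roughly as follows. This is an illustrative outline, not the actual awesome_chat.py code; the function name, config keys, and dict layout are assumptions.

```python
import os

# Hypothetical sketch of the provider selection order
# (local > azure > minimax > openai) described in this PR.
def resolve_provider(config):
    if config.get("use_local"):
        return "local"
    if config.get("azure", {}).get("api_key"):
        return "azure"
    # MiniMax key comes from the config file or the MINIMAX_API_KEY env var.
    minimax_key = config.get("minimax", {}).get("api_key") or os.environ.get("MINIMAX_API_KEY")
    if minimax_key:
        return "minimax"
    return "openai"
```

The point of the ordering is that an explicitly configured local or Azure deployment always wins, and MiniMax is only consulted before falling back to plain OpenAI.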

Default Model

MiniMax-M2.7 — Latest flagship model with enhanced reasoning and coding capabilities.
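A config.minimax.yaml along these lines would set MiniMax-M2.7 as the default; the key names here are assumptions based on the PR description, not the exact file contents.

```yaml
# Illustrative sketch of config.minimax.yaml (key names assumed)
model: MiniMax-M2.7
minimax:
  api_key: ""   # or set the MINIMAX_API_KEY environment variable
```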

Testing

  • Verified Python syntax and model registration
  • Confirmed MiniMax-M2.7 is the default model in config
  • All existing models preserved as alternatives

Add MiniMax (MiniMax-M2.5, MiniMax-M2.5-highspeed) as a new LLM provider
option alongside OpenAI and Azure OpenAI. MiniMax offers an
OpenAI-compatible API with up to 204K context length.

Changes:
- Add MiniMax provider detection and API endpoint construction in
  awesome_chat.py with priority: local > azure > minimax > openai
- Handle MiniMax temperature constraint (must be > 0) by adjusting
  zero values to 0.01
- Add MiniMax model encodings and context lengths in get_token_ids.py
- Create config.minimax.yaml template for MiniMax configuration
- Update README.md with MiniMax setup instructions
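The temperature adjustment listed above amounts to a one-line guard; a minimal sketch, assuming a hypothetical helper name rather than the literal awesome_chat.py code:

```python
# MiniMax requires temperature > 0, so a zero (or negative) value
# is bumped to 0.01 before the request is sent.
def clamp_minimax_temperature(temperature: float) -> float:
    return 0.01 if temperature <= 0 else temperature
```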
@octo-patch
Author

@microsoft-github-policy-service agree

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model in config and docs
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Update token encodings and max context length for new models
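The model registration in get_token_ids.py could look something like the table below. The dict layout is an assumption based on the commit message; 204000 stands in for the stated 204K window, and the exact token count is also an assumption.

```python
# Sketch of registering the MiniMax models with a tiktoken encoding
# and context window (layout and numbers are illustrative).
MINIMAX_MODELS = {
    "MiniMax-M2.7":           {"encoding": "cl100k_base", "max_length": 204000},
    "MiniMax-M2.7-highspeed": {"encoding": "cl100k_base", "max_length": 204000},
    "MiniMax-M2.5":           {"encoding": "cl100k_base", "max_length": 204000},
    "MiniMax-M2.5-highspeed": {"encoding": "cl100k_base", "max_length": 204000},
}
```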
@octo-patch changed the title from "feat: add MiniMax as LLM provider" to "feat: add MiniMax as LLM provider with M2.7 default" on Mar 18, 2026
