# Supported Protocols for Each Model

This page explains which invocation protocols each model supports.

Guidelines:
- ✅: Currently recommended and verified primary invocation method.
- Blank: Not verified. The combination may still work, but it can be unstable or entirely unusable; validate it in a test environment before use, as compatibility is not guaranteed.

Note: Actual compatibility may change as the official protocols evolve. Please refer to the latest official documentation and test results.
## How to View the List of Supported Models

You can retrieve the list of supported models via the /v1/models interface:

```shell
curl https://api.umodelverse.ai/v1/models \
  -H "Content-Type: application/json"
```

Alternatively, the Cherry Studio client is recommended: after configuring the API address, click the management button to view all supported models visually. For detailed configuration instructions, refer to the Cherry Studio Configuration Tutorial.
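The JSON returned by this endpoint can also be parsed programmatically. The sketch below assumes the OpenAI-compatible response shape (`{"object": "list", "data": [{"id": ...}, ...]}`); verify the field names against an actual response from the service.

```python
import json

def list_model_ids(models_response: str) -> list:
    """Extract model IDs from a /v1/models JSON response.

    Assumes the OpenAI-compatible shape:
    {"object": "list", "data": [{"id": ...}, ...]}.
    """
    body = json.loads(models_response)
    return [entry["id"] for entry in body.get("data", [])]

# Example payload in the shape the endpoint is assumed to return.
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "gpt-4o-mini", "object": "model"},
        {"id": "claude-sonnet-4.5", "object": "model"},
    ],
})
print(list_model_ids(sample))  # → ['gpt-4o-mini', 'claude-sonnet-4.5']
```

Filtering this list (for example, by a `gpt-` or `claude-` prefix) is an easy way to confirm that a model from the tables below is actually available on your account.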
## OpenAI / GPT Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| gpt-5.1-chat | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| gpt-5.1 | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| gpt-5.1-codex-mini | | ✅ | | | Supports only the /v1/responses interface |
| gpt-5.1-codex | | ✅ | | | Supports only the /v1/responses interface |
| gpt-5 | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| gpt-5-chat | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| gpt-5-codex | | ✅ | | | Supports only the /v1/responses interface |
| gpt-4o-mini | ✅ | | | | |
| gpt-4.1-nano | ✅ | | | | |
| gpt-4.1-mini | ✅ | | | | |
| o1 | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-5.1-chat | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-5.1-codex | | ✅ | | | Supports only the /v1/responses interface |
| openai/gpt-5.1-codex-mini | | ✅ | | | Supports only the /v1/responses interface |
| openai/gpt-5.1 | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-4o | ✅ | ✅ | | | |
| openai/gpt-5-nano | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-4.1 | ✅ | ✅ | | | |
| openai/gpt-5 | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-5-mini | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
| openai/gpt-oss-20b | ✅ | | | | |
| openai/gpt-oss-120b | ✅ | | | | |
| o4-mini-deep-research | | ✅ | | | Supports only the /v1/responses interface |
| o4-mini | ✅ | ✅ | | | Does not support max_tokens; use max_completion_tokens. Pass images as base64. |
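Several rows above note that max_tokens is rejected in favor of max_completion_tokens and that images must be passed as base64. A minimal request-builder sketch: the helper name is illustrative, and the message shape follows the standard OpenAI chat-completions content-parts format.

```python
import base64
from typing import Optional

def build_chat_request(model: str, prompt: str,
                       image_bytes: Optional[bytes] = None,
                       max_completion_tokens: int = 1024) -> dict:
    """Build a /v1/chat/completions payload for models that reject max_tokens.

    Uses max_completion_tokens instead of max_tokens, and inlines any image
    as a base64 data URL, per the notes in the table above.
    """
    content = [{"type": "text", "text": prompt}]
    if image_bytes is not None:
        b64 = base64.b64encode(image_bytes).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        # Deliberately max_completion_tokens, not max_tokens.
        "max_completion_tokens": max_completion_tokens,
    }

payload = build_chat_request("gpt-5.1", "Describe this image.", b"\x89PNG...")
assert "max_tokens" not in payload
```

POST the resulting dict as JSON to the /v1/chat/completions endpoint; sending max_tokens to these models would be rejected.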
## Claude Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| claude-opus-4-1-20250805 | ✅ | | | ✅ | |
| claude-sonnet-4-5-20250929-thinking | ✅ | | | ✅ | Only one of temperature or top_p may be specified; temperature range is [0, 1) |
| claude-sonnet-4-5-20250929 | ✅ | | | ✅ | |
| claude-sonnet-3.7 | ✅ | | | ✅ | |
| claude-opus-4.1-thinking | ✅ | | | ✅ | |
| claude-opus-4.0-thinking | ✅ | | | ✅ | |
| claude-sonnet-4.0-thinking | ✅ | | | ✅ | |
| claude-sonnet-3.5 | ✅ | | | ✅ | |
| claude-4-opus | ✅ | | | ✅ | |
| claude-4-sonnet | ✅ | | | ✅ | |
| claude-sonnet-4.5-thinking | ✅ | | | ✅ | Only one of temperature or top_p may be specified; temperature range is [0, 1) |
| claude-sonnet-4.5 | ✅ | | | ✅ | Only one of temperature or top_p may be specified; temperature range is [0, 1) |
Parameter Limitation Notice: Claude Sonnet 4.5 supports specifying only one of the `temperature` and `top_p` parameters, not both simultaneously. See the official documentation for details.
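This constraint can be enforced client-side before a request is sent. The check below is illustrative, not part of any official SDK, and combines both limits noted in the table: mutual exclusion of the two sampling parameters and the [0, 1) temperature range.

```python
def check_sampling_params(params: dict) -> None:
    """Validate Claude Sonnet 4.5 sampling constraints before sending a request.

    Per the notice above: temperature and top_p are mutually exclusive,
    and temperature must lie in [0, 1).
    """
    if "temperature" in params and "top_p" in params:
        raise ValueError("Specify only one of temperature or top_p, not both.")
    t = params.get("temperature")
    if t is not None and not (0 <= t < 1):
        raise ValueError("temperature must be in the range [0, 1).")

check_sampling_params({"temperature": 0.7})  # OK
check_sampling_params({"top_p": 0.9})        # OK
# check_sampling_params({"temperature": 0.7, "top_p": 0.9})  # raises ValueError
```

Running this before constructing the request body turns a server-side rejection into an immediate, descriptive local error.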
## Grok Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| grok-4-1-fast-reasoning | ✅ | | | | |
| grok-4-1-fast-non-reasoning | ✅ | | | | |
| grok-4-fast-reasoning | ✅ | | | | |
| grok-4-fast | ✅ | | | | |
| grok-4 | ✅ | | | | |
## Qwen Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| Qwen/Qwen3-vl-Plus | ✅ | | | | |
| Qwen/Qwen3-235B-A22B-Thinking-2507 | ✅ | | | | |
| Qwen/Qwen-Plus-Thinking | ✅ | | | | |
| Qwen/Qwen-Plus | ✅ | | | | |
| Qwen/Qwen3-Max | ✅ | | | | |
| Qwen/Qwen3-VL-235B-A22B-Instruct | ✅ | | | | |
| Qwen/Qwen3-VL-235B-A22B-Thinking | ✅ | | | | |
| Qwen/Qwen3-32B | ✅ | | | | |
| Qwen/Qwen3-30B-A3B | ✅ | | | | |
| Qwen/Qwen3-Coder | ✅ | | | | |
| Qwen/Qwen3-235B-A22B | ✅ | | | | |
| Qwen/QwQ-32B | ✅ | | | | |
| qwen/qwen2.5-vl-72b-instruct | ✅ | | | | |
## Gemini Series (Supports /v1beta/models)

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| gemini-3-pro-preview | | | ✅ | | Recommended to use the /v1beta/models interface |
| gemini-2.5-pro | | | ✅ | | Recommended to use the /v1beta/models interface |
| gemini-2.5-flash | | | ✅ | | Recommended to use the /v1beta/models interface |
Thinking Process Configuration: using the Gemini protocol's `/v1beta/models` interface, you can enable or disable the thinking process through the `thinkingConfig` parameter. See Gemini Protocol Compatibility.

Safety Level Configuration: Gemini supports configuring content-filtering strategies through `safetySettings`, covering categories such as hate speech, pornography, dangerous content, harassment, and civic integrity. See the Google official safety settings documentation.
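Putting both notes together, a request body for the Gemini protocol might look like the sketch below. The field names (`generationConfig`, `thinkingConfig`, `thinkingBudget`, `safetySettings`, and the `HARM_CATEGORY_*` values) follow Google's public Gemini API; check them against the documentation linked above before relying on them here.

```python
import json

# Sketch of a /v1beta/models generateContent request body that disables the
# thinking process and tightens two safety categories.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize the theory of relativity."}]}
    ],
    "generationConfig": {
        # thinkingBudget 0 disables the thinking process; omit it or raise
        # the budget to let the model think before answering.
        "thinkingConfig": {"thinkingBudget": 0}
    },
    "safetySettings": [
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    ],
}
print(json.dumps(request_body, indent=2))
```

The body would be POSTed to the model-specific `generateContent` path under /v1beta/models, e.g. for gemini-2.5-flash.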
## DeepSeek Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| deepseek-ai/DeepSeek-OCR | ✅ | | | | |
| deepseek-ai/DeepSeek-V3.1-Think | ✅ | | | | |
| deepseek-ai/DeepSeek-V3.2-Exp-Think | ✅ | | | | |
| deepseek-ai/DeepSeek-V3.2-Exp | ✅ | | | | |
| deepseek-ai/DeepSeek-V3.1-Terminus | ✅ | | | | |
| deepseek-ai/DeepSeek-V3.1 | ✅ | | | | |
| deepseek-ai/DeepSeek-R1-Distill-Llama-70B | ✅ | | | | |
| deepseek-ai/DeepSeek-R1-0528 | ✅ | | | | |
| deepseek-ai/DeepSeek-V3-0324 | ✅ | | | | |
| deepseek-ai/DeepSeek-R1 | ✅ | | | | |
## Doubao Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| ByteDance/doubao-1-5-pro-32k-250115 | ✅ | | | | |
| ByteDance/doubao-1-5-pro-256k-250115 | ✅ | | | | |
| ByteDance/doubao-seed-1.6 | ✅ | | | | |
| ByteDance/doubao-seed-1.6-thinking | ✅ | | | | |
| ByteDance/doubao-1.5-thinking-vision-pro | ✅ | | | | |
## Baidu Series

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| baidu/ernie-x1-turbo-32k | ✅ | | | | |
| baidu/ernie-4.5-turbo-128k | ✅ | | | | |
| baidu/ernie-4.5-turbo-vl-32k | ✅ | | | | |
## Other Models

| Model ID | OpenAI Chat Protocol /v1/chat/completions | OpenAI Response Protocol /v1/responses | Gemini Protocol /v1beta/models | Anthropic Protocol /v1/messages | Notes |
|---|---|---|---|---|---|
| moonshotai/Kimi-K2-Thinking | ✅ | | | | |
| moonshotai/Kimi-K2-Instruct-0905 | ✅ | | | | |
| moonshotai/Kimi-K2-Instruct | ✅ | | | | |
| zai-org/glm-4.6 | ✅ | | | | |
| zai-org/glm-4.5v | ✅ | | | | |
| zai-org/glm-4.5 | ✅ | | | | |
| kat-coder-256k | ✅ | | | | |