Gemini Quick Start
The UModelverse platform provides a Models interface compatible with the Google Gemini API, so developers can call Gemini models on Modelverse directly using the Gemini SDK or other supported tools.
This article will guide you through quickly sending your first Gemini API request on the UModelverse platform.
Quick Start
Install Google GenAI SDK
Install the SDK for Python
Using Python 3.9 or later, install the google-genai package with the following pip command:
pip install google-genai
Example
The following example uses the generateContent method to send a request to the UModelverse API via the gemini-2.5-flash model.
Make sure to replace $MODELVERSE_API_KEY with your own API Key. Obtain your API Key.
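To avoid hardcoding the key in source files, a common pattern is to read it from an environment variable before constructing the client. A minimal sketch (the MODELVERSE_API_KEY variable name and the get_api_key helper are illustrative, not part of the SDK):

```python
import os

def get_api_key(env_var: str = "MODELVERSE_API_KEY") -> str:
    """Return the API key from the environment, failing loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable first")
    return key

# The returned value can then be passed as api_key= when creating the client.
```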
Non-Streaming Call
You can make a call using the following code. Note that the Modelverse API address must be specified through http_options.
** python **
from google import genai
from google.genai import types
client = genai.Client(
    api_key="<MODELVERSE_API_KEY>",
    http_options=types.HttpOptions(
        base_url="https://api.umodelverse.ai"
    ),
)
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        {"text": "How does AI work?"},
    ],
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
Parameter Description: Enable Thought Summary
For details, refer to the official documentation.
To enable thought summary, add the following in thinking_config:
config=types.GenerateContentConfig(
    thinking_config=types.ThinkingConfig(
        include_thoughts=True
    )
)
** curl **
curl "https://api.umodelverse.ai/v1beta/models/gemini-2.5-flash:generateContent" \
-H "x-goog-api-key: $MODELVERSE_API_KEY" \
-H "Content-Type: application/json" \
-X POST \
-d '{
  "contents": [
    {
      "parts": [
        {
          "text": "How does AI work?"
        }
      ]
    }
  ],
  "generationConfig": {
    "thinkingConfig": {
      "thinkingBudget": 0
    }
  }
}'
Streaming Call
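When calling the raw HTTP endpoint with alt=sse (as the curl example in this section does), the stream arrives as Server-Sent Events: one data: line per chunk, each carrying a JSON payload with the same candidates/content/parts shape as a non-streaming response. The SDK decodes this for you; as a minimal sketch of doing it manually, with illustrative sample payloads (not real API output):

```python
import json

def parse_sse_chunks(raw_stream: str) -> list[str]:
    """Extract the text parts from the 'data:' lines of an SSE response body."""
    texts = []
    for line in raw_stream.splitlines():
        if not line.startswith("data:"):
            continue  # skip blank separator lines and other SSE fields
        payload = json.loads(line[len("data:"):].strip())
        for part in payload["candidates"][0]["content"]["parts"]:
            if "text" in part:
                texts.append(part["text"])
    return texts

# Illustrative sample of two SSE chunks (not real API output):
sample = (
    'data: {"candidates": [{"content": {"parts": [{"text": "AI works"}]}}]}\n'
    '\n'
    'data: {"candidates": [{"content": {"parts": [{"text": " by learning patterns."}]}}]}\n'
)
print("".join(parse_sse_chunks(sample)))  # → AI works by learning patterns.
```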
** python **
from google import genai
from google.genai import types
client = genai.Client(
    api_key="<MODELVERSE_API_KEY>",
    http_options=types.HttpOptions(
        base_url="https://api.umodelverse.ai"
    ),
)
response = client.models.generate_content_stream(
    model="gemini-2.5-flash", contents=["Explain how AI works"]
)
for chunk in response:
    print(chunk.text, end="")
** curl **
curl "https://api.umodelverse.ai/v1beta/models/gemini-2.5-flash:streamGenerateContent?alt=sse" \
-H "Authorization: Bearer $MODELVERSE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "Explain how AI works"
        }
      ]
    }
  ]
}'
Model ID Description
For more supported Gemini models, please refer to [Get Model List].
For more field details, see the Gemini official documentation.
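As a sketch of fetching the model list programmatically, assuming Modelverse mirrors the Gemini API's GET /v1beta/models route (an assumption — confirm the actual endpoint against the Get Model List page), the request could be built with the standard library; it is constructed but deliberately not sent here:

```python
import urllib.request

# Assumed endpoint mirroring the Gemini API's ListModels route; verify
# against the platform's Get Model List documentation before relying on it.
base_url = "https://api.umodelverse.ai"
req = urllib.request.Request(
    f"{base_url}/v1beta/models",
    headers={"x-goog-api-key": "<MODELVERSE_API_KEY>"},  # placeholder key
    method="GET",
)
# urllib.request.urlopen(req) would perform the call; omitted here.
print(req.full_url)  # → https://api.umodelverse.ai/v1beta/models
```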