Multi-model API gateway
Access leading AI models through one clean API.
WeaveAPI gives developers one key, one balance, and one stable OpenAI-compatible endpoint for Qwen, Kimi, GLM, and MiniMax routes.
{
  "model": "qwen3.6-flash",
  "messages": [
    { "role": "user", "content": "Build a launch plan." }
  ]
}
Stop wiring every provider into your product. Keep one API surface while routing by model, tier, and fallback policy behind the gateway.
Use default, premium, and backup routes so teams can react to upstream limits or balance issues without changing customer integrations.
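A tiered fallback policy of this kind can be sketched as a small route table tried in order. This is a minimal illustration, not WeaveAPI's actual routing logic; the tier-to-model mapping and the `call_route` callback are assumptions for the sketch:

```python
# Sketch of tiered routing with fallback: try the default route, then
# premium, then backup, without the caller changing its integration.
# Tier contents and error handling here are illustrative assumptions.

ROUTE_TIERS = {
    "default": ["qwen3.6-flash"],
    "premium": ["qwen3.6-plus", "kimi-k2.6"],
    "backup": ["MiniMax-M2.5"],
}

TIER_ORDER = ["default", "premium", "backup"]


class RouteError(Exception):
    """Raised when an upstream route hits a limit or balance issue."""


def complete_with_fallback(messages, call_route):
    """Try each tier's routes in order; return (model, result) on first success.

    `call_route(model, messages)` is a caller-supplied function that performs
    the actual request and raises RouteError when that route is unavailable.
    """
    last_error = None
    for tier in TIER_ORDER:
        for model in ROUTE_TIERS[tier]:
            try:
                return model, call_route(model, messages)
            except RouteError as exc:
                last_error = exc
    raise RuntimeError("all routes failed") from last_error
```

Because the fallback order lives behind the gateway-facing function, swapping which models back each tier never touches the customer-facing call site.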
We keep operational logs for metering and abuse prevention. Prompt content is not sold, shared, or used to train WeaveAPI models.
Get Started in Minutes
Connect once. Test multiple model routes immediately.
Get a WeaveAPI key
Use one key and one balance across the active catalog instead of juggling provider accounts.
Change the base URL
Keep your OpenAI SDK flow and point requests to the WeaveAPI endpoint.
Route by model name
Pick fast, long-context, reasoning, value, or router paths without exposing upstream complexity.
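The three steps above amount to one request shape. A minimal stdlib sketch (no SDK required), using the `https://weaveapi.dev/v1` base URL from the quickstart below; the key value is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://weaveapi.dev/v1"  # step 2: point at the WeaveAPI endpoint
API_KEY = "YOUR_API_KEY"              # step 1: one key, one balance


def build_request(model, messages):
    # Step 3: routing is just the model name in an OpenAI-compatible payload.
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )


req = build_request("qwen3.6-flash", [{"role": "user", "content": "Hello"}])
# Send with urllib.request.urlopen(req) once a real key is in place.
```

Switching from the fast route to a long-context or reasoning route means changing only the `model` string.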
Model routes
Choose from production-ready model routes.
qwen3.6-flash
Fast default route for chat, coding, and everyday assistant traffic.
qwen3.6-plus
Balanced Qwen route for production chat and tool workflows.
qwen3-coder-plus
Code-focused route for agentic development and editor workflows.
moonshot-v1-auto
Automatic Moonshot route for general-purpose Kimi traffic.
kimi-k2.5
Kimi route for long context, document-heavy, and assistant tasks.
kimi-k2.6
Long-context Kimi route for coding, planning, and research workflows.
glm-5.1
GLM route for structured reasoning and agent workflows.
MiniMax-M2.1
MiniMax route for general assistant and product chat traffic.
MiniMax-M2.5
MiniMax route for balanced response quality and throughput.
MiniMax-M2.7
Value MiniMax route for reliable everyday generation.
Why teams use it
Ship faster without rebuilding every provider integration.
One API contract
Keep the OpenAI-compatible request format your product already supports.
Cleaner model rollout
Add, test, and replace model routes without changing your customer-facing API.
Usage in one place
Manage keys, quota, channels, and logs from a single operator workflow.
Quickstart
Use the SDK flow your app already has.
Change the base URL, send a WeaveAPI key, and choose a routed model name. That is enough for the first integration test.
curl https://weaveapi.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "qwen3.6-flash",
    "messages": [
      {"role": "user", "content": "Say hello from WeaveAPI."}
    ]
  }'
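The response follows the OpenAI chat-completions shape, so existing parsing code keeps working. A short sketch with an illustrative response body (the `id` and other metadata fields shown are assumptions; only the standard `choices` structure is relied on):

```python
import json

# Illustrative OpenAI-style response body; field values are made up
# for the example, and gateway metadata may differ in practice.
raw = """
{
  "id": "chatcmpl-123",
  "model": "qwen3.6-flash",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello from WeaveAPI."},
      "finish_reason": "stop"
    }
  ]
}
"""

response = json.loads(raw)
# Same extraction path as any OpenAI-compatible client code.
reply = response["choices"][0]["message"]["content"]
print(reply)
```

Because the shape is stable across routes, the same parsing works whether the request was served by a Qwen, Kimi, GLM, or MiniMax model.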
Start building