OpenAI compatible · Swap the base URL and start testing

Multi-model API gateway

Access leading AI models through one clean API.

WeaveAPI gives developers one key, one balance, and one stable OpenAI-compatible endpoint for Qwen, Kimi, GLM, and MiniMax routes.

Route   Key            Usage
Fast    qwen3.6-flash  Default chat and coding
Long    kimi-k2.6      Large context workflows
Reason  glm-5.1        Structured reasoning
Value   MiniMax-M2.7   Reliable value route
POST /v1/chat/completions

{
  "model": "qwen3.6-flash",
  "stream": true,
  "messages": [
    { "role": "user", "content": "Build a launch plan." }
  ]
}
1 API key
10 model routes
/v1 base path
Why WeaveAPI

One contract for a changing model stack.

Stop wiring every provider into your product. Keep one API surface while routing by model, tier, and fallback policy behind the gateway.
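
Routing by tier can then live in a small mapping on your side rather than in provider-specific client code. A minimal Python sketch (the tier names and mapping are illustrative; the model names are the routes listed below):

# Illustrative tier-to-route mapping; WeaveAPI only sees the model name.
ROUTES = {
    "fast": "qwen3.6-flash",
    "long": "kimi-k2.6",
    "reason": "glm-5.1",
    "value": "MiniMax-M2.7",
}

def model_for(tier: str) -> str:
    # Unknown tiers fall back to the fast default route.
    return ROUTES.get(tier, ROUTES["fast"])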

Reliability

Provider issues should not become product rewrites.

Use default, premium, and backup routes so teams can react to upstream limits or balance issues without changing customer integrations.
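
In application code, that policy can be a simple ordered retry across routes. A sketch with the OpenAI Python SDK (the route order and error handling are illustrative assumptions, not a built-in gateway feature):

import openai

client = openai.OpenAI(
    base_url="https://weaveapi.dev/v1",
    api_key="YOUR_API_KEY",  # placeholder
)

# Illustrative order: default route first, then a backup route.
ROUTE_ORDER = ["qwen3.6-plus", "MiniMax-M2.5"]

def complete(messages):
    last_error = None
    for model in ROUTE_ORDER:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except openai.OpenAIError as exc:
            # Upstream limit or balance issue on this route; try the next one.
            last_error = exc
    raise last_error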

Privacy

Prompts are routed for completion, not used for training.

We keep operational logs for metering and abuse prevention. Prompt content is not sold, shared, or used to train WeaveAPI models.

Get Started in Minutes

Connect once. Test multiple model routes immediately.

01

Get a WeaveAPI key

Use one key and one balance across the active catalog instead of juggling provider accounts.

02

Change the base URL

Keep your OpenAI SDK flow and point requests to the WeaveAPI endpoint; the sketch after these steps shows the full call.

03

Route by model name

Pick fast, long-context, reasoning, value, or router paths without exposing upstream complexity.
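
Put together, the three steps are a one-line change for most apps. A minimal sketch with the OpenAI Python SDK (the key value is a placeholder; the base URL and model name come from this page):

from openai import OpenAI

# Step 02: point the stock OpenAI SDK at the WeaveAPI endpoint.
client = OpenAI(
    base_url="https://weaveapi.dev/v1",
    api_key="YOUR_API_KEY",  # step 01: one WeaveAPI key, one balance
)

# Step 03: route by model name; swap the string to change routes.
response = client.chat.completions.create(
    model="qwen3.6-flash",
    messages=[{"role": "user", "content": "Build a launch plan."}],
)
print(response.choices[0].message.content)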

Model routes

Choose from production-ready model routes.

Qwen/text-generation
qwen3.6-flash
Fast default route for chat, coding, and everyday assistant traffic.
OpenAI /v1 · FAST

Qwen/text-generation
qwen3.6-plus
Balanced Qwen route for production chat and tool workflows.
OpenAI /v1 · CHAT

Qwen/code
qwen3-coder-plus
Code-focused route for agentic development and editor workflows.
OpenAI /v1 · CODE

moonshotai/router
moonshot-v1-auto
Automatic Moonshot route for general-purpose Kimi traffic.
OpenAI /v1 · ROUTER

moonshotai/text-generation
kimi-k2.5
Kimi route for long context, document-heavy, and assistant tasks.
OpenAI /v1 · CHAT

moonshotai/text-generation
kimi-k2.6
Long-context Kimi route for coding, planning, and research workflows.
OpenAI /v1 · CHAT

zhipu-ai/reasoning
glm-5.1
GLM route for structured reasoning and agent workflows.
OpenAI /v1 · REASON

minimax/text-generation
MiniMax-M2.1
MiniMax route for general assistant and product chat traffic.
OpenAI /v1 · CHAT

minimax/text-generation
MiniMax-M2.5
MiniMax route for balanced response quality and throughput.
OpenAI /v1 · CHAT

minimax/text-generation
MiniMax-M2.7
Value MiniMax route for reliable everyday generation.
OpenAI /v1 · VALUE

Why teams use it

Ship faster without rebuilding every provider integration.

01

One API contract

Keep the OpenAI-compatible request format your product already supports.

02

Cleaner model rollout

Add, test, and replace model routes without changing your customer-facing API.

03

Usage in one place

Manage keys, quota, channels, and logs from a single operator workflow.

Quickstart

Use the SDK flow your app already has.

Change the base URL, send a WeaveAPI key, and choose a routed model name. That is enough for the first integration test.

curl https://weaveapi.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "qwen3.6-flash",
    "messages": [
      {"role": "user", "content": "Say hello from WeaveAPI."}
    ]
  }'
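
The hero example above enables streaming; the same call with stream enabled in the OpenAI Python SDK looks like this (a sketch under the same base URL and placeholder key):

from openai import OpenAI

client = OpenAI(base_url="https://weaveapi.dev/v1", api_key="YOUR_API_KEY")

# stream=True yields chunks as tokens arrive instead of one final object.
stream = client.chat.completions.create(
    model="qwen3.6-flash",
    messages=[{"role": "user", "content": "Say hello from WeaveAPI."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)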

Start building

Get one key for the active model catalog.