Docs

Start with the OpenAI-compatible endpoint.

WeaveAPI keeps the request flow familiar: change the base URL, send your API key, choose a routed model name, and ship.

Quickstart

Send your first chat request.

Replace `YOUR_API_KEY` with your WeaveAPI key. The production base URL will be `https://weaveapi.dev/v1` after domain cutover; the current test endpoint is available through the server IP.

```shell
curl https://weaveapi.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "qwen3.6-flash",
    "messages": [
      {"role": "user", "content": "Hello from WeaveAPI."}
    ]
  }'
```

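
The same request can be sketched in Python with only the standard library; the endpoint, key placeholder, and model name are taken from the curl example above. This builds the request object without sending it, so you can inspect the payload before going live:

```python
import json
import urllib.request

# Placeholder values from the quickstart: swap in your real key.
API_KEY = "YOUR_API_KEY"
BASE_URL = "https://weaveapi.dev/v1"

payload = {
    "model": "qwen3.6-flash",
    "messages": [{"role": "user", "content": "Hello from WeaveAPI."}],
}

# Build the POST request: JSON body, bearer auth, same shape as the curl call.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Uncomment to actually send it once you have a valid key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library should work the same way: point its base URL at WeaveAPI and pass your key as the API key.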
Base URL

/v1

Use the same path style as the OpenAI API for chat completions and model discovery.

stable surface
Auth

Bearer key

Send your WeaveAPI key as `Authorization: Bearer YOUR_API_KEY` on every request.

single key
Models

Routed names

Choose Qwen, Kimi, GLM, MiniMax, or DeepSeek routes based on the workload.

model menu
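
Because WeaveAPI follows the OpenAI path style, the routed names should be discoverable from the models endpoint. A minimal sketch of parsing that response, assuming the standard OpenAI-style list shape (the sample body below is illustrative, not real output):

```python
import json

# Assumed OpenAI-style response shape from GET /v1/models;
# the single entry here is the model used in the quickstart.
sample_response = json.loads(
    '{"object": "list", "data": [{"id": "qwen3.6-flash", "object": "model"}]}'
)

# Each entry's "id" is a routed model name you can pass in requests.
routed_names = [m["id"] for m in sample_response["data"]]
print(routed_names)
```

Pick whichever routed name fits the workload, and pass it as the `model` field in your chat request.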
Streaming

stream: true

Use streaming responses for chat interfaces and agent workflows.

ready
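
With `"stream": true` in the request body, responses arrive as server-sent events. A minimal sketch of collecting the text deltas, assuming the standard OpenAI streaming chunk shape (the sample lines below are canned, not real output):

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return  # sentinel marking the end of the stream
        delta = json.loads(data)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Canned chunks in the assumed streaming shape:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(parse_sse_chunks(sample))
print(text)  # prints "Hello"
```

In a chat UI or agent loop, render each delta as it arrives instead of joining at the end.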