LiteLLM proxy with PicoClaw

LiteLLM exposes an OpenAI-shaped API in front of dozens of backends. PicoClaw can treat it like any other OpenAI-compatible endpoint, which is useful when you want fallbacks, spend tracking, or model aliases managed in one place.

1. When LiteLLM helps

  • Multiple vendor keys behind one gateway
  • Routing rules and budgets per model
  • Staging parity with production model names
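
The points above map onto a LiteLLM config.yaml roughly like the following sketch. The alias, model names, and budget figures are illustrative, not prescriptive; env-var references use LiteLLM's os.environ/ syntax:

```yaml
# config.yaml -- illustrative values throughout
model_list:
  # Two deployments behind one alias: LiteLLM routes between them
  # and fails over when one errors out.
  - model_name: picoclaw-main          # alias your client will request
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: picoclaw-main
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  num_retries: 2          # retry a deployment before failing over
  max_budget: 50          # example spend cap, in USD
  budget_duration: 30d
```

The same alias also gives you staging parity: point staging at cheaper backends while keeping the model name identical.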

2. Deployment

Run LiteLLM in Docker on a small VM or next to PicoClaw on your LAN. Lock down network access (bind to localhost or a private interface) and set a master key; the proxy holds powerful credentials.
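
A minimal Docker invocation might look like this; the image and --config flag are LiteLLM's documented defaults, while the port binding, key names, and master key value are assumptions to adapt:

```shell
# Bind to 127.0.0.1 so the proxy is not reachable from the wider LAN.
docker run -d --name litellm \
  -p 127.0.0.1:4000:4000 \
  -v "$(pwd)/config.yaml:/app/config.yaml" \
  -e OPENAI_API_KEY -e ANTHROPIC_API_KEY \
  -e LITELLM_MASTER_KEY="sk-change-me" \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --port 4000
```

If PicoClaw runs on another host, bind to a private interface instead and firewall the port to that host only.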

3. Point PicoClaw at the proxy

Set api_base to your LiteLLM URL and use the model aliases you defined in the proxy config. Confirm the path works with a single picoclaw agent smoke test before enabling gateways.
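
Before enabling anything in PicoClaw, you can smoke-test the proxy directly with the same OpenAI-shaped request PicoClaw will send. The alias picoclaw-main below is illustrative; substitute whatever alias you defined:

```shell
# Expect a JSON chat completion back if routing and keys are correct.
curl -s http://127.0.0.1:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "picoclaw-main",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```

If this returns an error about an unknown model, the alias in PicoClaw's config does not match the model_name in the proxy config.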

4. Related