Local LLM with Ollama and PicoClaw
Connect PicoClaw to a local model running with Ollama so your lightweight AI assistant can run on-device or on your local network without sending prompts to a third-party cloud.
1. Why local LLMs?
- Better privacy: keep prompts and outputs in your environment
- Lower cost: no per-request cloud fees (after hardware setup)
- Works well on Raspberry Pi and homelab nodes (depending on model size)
2. Run Ollama
Install Ollama for your platform and pull a model you want to use. Start the Ollama server, then verify it responds.
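The steps above can be run from a terminal. The install script and default port below follow Ollama's standard setup; the model name (`llama3.2`) is only an example — pull whichever model fits your hardware.

```shell
# Install Ollama (Linux; macOS and Windows installers are on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model -- llama3.2 is an example; choose one sized for your machine
ollama pull llama3.2

# Start the server (listens on localhost:11434 by default)
ollama serve

# In another terminal, verify the server responds and lists your models
curl http://localhost:11434/api/tags
```

On a Raspberry Pi or other small node, prefer a small quantized model so responses stay usable.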
3. Configure PicoClaw providers
In your PicoClaw configuration, set the provider to Ollama (or point it at your local model endpoint), then set the model name to the one you pulled in Ollama.
# Example: configure your local provider + model
# (Exact keys depend on the provider settings in PicoClaw.)
For full provider details, see Providers.
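As a sketch, a provider entry for Ollama might look like the following. The file location, key names, and model value here are assumptions for illustration only — check the Providers page for the exact schema your PicoClaw version uses.

```json
{
  "providers": {
    "ollama": {
      "api_base": "http://localhost:11434",
      "model": "llama3.2"
    }
  }
}
```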
4. Run your local AI assistant
picoclaw agent -m "Summarise this and suggest next actions"
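If the assistant's replies look wrong, it can help to query the model directly, bypassing PicoClaw, to confirm the model itself is healthy. The endpoint below is Ollama's standard generate API; the model name is the example pulled earlier and should match your configuration.

```shell
# Ask the model directly via Ollama's HTTP API (bypassing PicoClaw)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Summarise this and suggest next actions",
  "stream": false
}'
```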
5. Next steps
- Deploy for always-on automation with Heartbeat
- Run on a Raspberry Pi by following the Raspberry Pi AI assistant guide